Utilitarianism (Happiness) Framework (Sacred Heart AT)

The value criterion is maximizing happiness.

Here are some of the best justifications for a utilitarianism (happiness) framework.


  

The standard is maximizing happiness. Reasons to prefer: 

First, revisionary intuitionism

Revisionary intuitionism is true and leads to util.

Yudkowsky 8 writes

Eliezer Yudkowsky (research fellow of the Machine Intelligence Research Institute; he also writes Harry Potter fan fiction). “The ‘Intuitions’ Behind ‘Utilitarianism.’” 28 January 2008. LessWrong. http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/

I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet. I used to be very confused about metaethics. After my confusion finally cleared up, I did a postmortem on my previous thoughts. I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless. And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad". Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can. Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level. Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition". He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to. Now "intuition" is not how I would describe the computations that underlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. But I am okay with using the word "intuition" as a term of art, bearing in mind that "intuition" in this sense is not to be contrasted to reason, but is, rather, the cognitive building block out of which both long verbal arguments and fast perceptual arguments are constructed. I see the project of morality as a project of renormalizing intuition. We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles. Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock. Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies. "Intuition", as a term of art, is not a curse word when it comes to morality - there is nothing else to argue from. Even modus ponens is an "intuition" in this sense - it's just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera. So that is "intuition". However, Gowder did not say what he meant by "utilitarianism". Does utilitarianism say... (1) That right actions are strictly determined by good consequences? (2) That praiseworthy actions depend on justifiable expectations of good consequences? (3) That probabilities of consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs? (4) That virtuous actions always correspond to maximizing expected utility under some utility function? (5) That two harmful events are worse than one?
(6) That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one? (7) That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B? If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy. Now, what are the "intuitions" upon which my "utilitarianism" depends? This is a deepish sort of topic, but I'll take a quick stab at it. First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive". Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence. After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out. This does not quite define moral progress, but it is how we experience moral progress. As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there? (Could that be what Gowder means by saying I'm "utilitarian"?) The question of where a road goes - where it leads - you can answer by traveling the road and finding out. If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner. When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth. You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error). But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place. Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up. After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions. When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing... Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic: Other recent research shows similar results. Two Israeli psychologists asked people to contribute to a costly life-saving treatment. They could offer that contribution to a group of eight sick children, or to an individual child selected from the group. The target amount needed to save the child (or children) was the same in both cases. Contributions to individual group members far outweighed the contributions to the entire group.
There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact. If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously. So people are willing to pay more to help one child than to help eight. Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune. But what about the billions of other children in the world? Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down? How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417? Or you could look at that and say: "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with. But it ought to, normatively speaking." And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever. You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities. It's just that the brain doesn't goddamn multiply. Quantities get thrown out the window. If you have $100 to spend, and you spend $20 each on each of 5 efforts to save 5,000 lives, you will do worse than if you spend $100 on a single effort to save 50,000 lives. Likewise if such choices are made by 10 different people, rather than the same person. As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives. (It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways. But the long run is a helpful intuition pump, so I am talking about it anyway.) The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time. Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe. Three lives are one and one and one. No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation. And if you add another life you get 4 = 1 + 1 + 1 + 1. That's aggregation. 
When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty. It just goes to show that the brain doesn't goddamn multiply. The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency. So you reflect, devise more trustworthy logics, and think it through in words. When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities. Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering". And part of it has to do with preferring unconditional social rules to conditional social rules. Conditional rules seem weaker, seem more subject to manipulation. If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole. So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it. Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life". Though the latter choice, of course, is revealed every time we sneeze without calling a doctor. The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise. So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect. On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule. But you don't conclude that there are actually two tiers of utility with lexical ordering. You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity. You don't conclude that utilities must be expressed using hyper-real numbers. Because the lower tier would simply vanish in any equation. It would never be worth the tiniest effort to recalculate for it. All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority. 
As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision. Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility. I don't say that morality should always be simple. I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know.  But that's for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey. When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply." Where music is concerned, I care about the journey. When lives are at stake, I shut up and multiply. It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the optimal path to that destination is governed by laws that are simple, because they are math. And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.
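To make the card's "shut up and multiply" arithmetic concrete, here is a minimal sketch. The $100/$20 figures and the 5,000/50,000 life counts come from the Yudkowsky card; the function and variable names are ours, and treating each effort as fully funded at its stated cost is our simplifying assumption.

```python
# A minimal sketch of the card's "shut up and multiply" arithmetic.
# Dollar figures and life counts are Yudkowsky's; the assumption that
# each effort is fully funded at its stated cost is our simplification.

def lives_saved(efforts):
    """Total lives saved across independently funded efforts."""
    return sum(lives for _cost, lives in efforts)

split = [(20, 5_000)] * 5        # five $20 efforts, 5,000 lives each
concentrated = [(100, 50_000)]   # one $100 effort, 50,000 lives

# Both allocations spend the same $100 budget.
assert sum(c for c, _ in split) == sum(c for c, _ in concentrated) == 100

print(lives_saved(split))         # 25000
print(lives_saved(concentrated))  # 50000
```

The point of the sketch is exactly the card's: once you prefer saving more lives to saving fewer, aggregation over separate events follows, whether the five efforts are funded by one agent or by ten.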

  

Two impacts.

a. “Util is unintuitive” isn’t a response—non-utilitarian intuitions are cognitively biased

b. The neg must link into revisionary intuitionism or else they lose the framework debate, since revisionary intuitionism keeps moral discussion on the productive object level rather than on meta-ethical questions.

Second, moral substitutability 

The principle of moral substitutability is correct

Sinnott-Armstrong 92 writes

Walter Sinnott-Armstrong (Dartmouth College). “An Argument for Consequentialism.” Philosophical Perspectives. 1992.

I have a moral reason to feed my child tonight, both because I promised my wife to do so, and also because of my special relation to my child along with the fact that she will go hungry if I don't feed her. I can't feed my child tonight without going home soon, and going home soon will enable me to feed her tonight. Therefore, there is a moral reason for me to go home soon. It need not be imprudent or ugly or sacrilegious or illegal for me not to feed her, but the requirements of morality give me a moral reason to feed her. This argument assumes a special case of substitutability: (MS) If there is a moral reason for A to do X, and if A cannot do X without doing Y, and if doing Y will enable A to do X, then there is a moral reason for A to do Y. I will call this 'the principle of moral substitutability', or just 'moral substitutability'. This principle is confirmed by moral reasons with negative structures. I have a moral reason to help a friend this afternoon. I cannot do so if I play golf this afternoon. Not playing golf this afternoon will enable me to help my friend. So I have a moral reason not to play golf this afternoon. Similarly, I have a moral reason not to endanger other drivers (beyond acceptable limits). I can't drink too much before I drive without endangering other drivers. Not drinking too much will enable me to avoid endangering other drivers. Therefore, I have a moral reason not to drink too much before I drive. The validity of such varied arguments confirms moral substitutability. We can also extend the above theory of reasons. Since a reason for action is a fact that can affect the rationality of an act, a moral reason is a fact that can affect the morality of an act, either by making an otherwise morally neutral act morally good or by making an otherwise immoral act moral. As above, a moral reason need not be strong enough to make its act moral in every case as long as it has that ability in some cases. For example, if I promised to meet a needy student later this afternoon, it is immoral for me to go home now if I have no morally relevant reason to go. Nonetheless, it is not immoral for me to go home now if this is necessary and enables me to feed my child when I have a moral reason to feed her. Thus, this fact about going home now can make an otherwise immoral act moral, so this fact is a moral reason. This supports moral substitutability. When there is a moral reason for me to feed my child, and going home now is necessary and enables me to feed my child, this fact makes it moral for me to go home now even in a situation where this would otherwise be immoral, so this fact is a moral reason for me to go home now. Thus, the ability to make immoral acts moral transfers from acts to their necessary enablers, just as moral substitutability claims. 
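For reference, (MS) can be written as a schema; the predicate abbreviations below are ours, introduced only for illustration, not Sinnott-Armstrong's notation:

```latex
% A schematic rendering of the principle of moral substitutability (MS).
% The predicate names are ours, not Sinnott-Armstrong's:
%   MR(A, X)     - there is a moral reason for A to do X
%   Nec(A, X, Y) - A cannot do X without doing Y
%   En(A, Y, X)  - A's doing Y will enable A to do X
\[
\text{(MS)}\quad
\bigl(\mathrm{MR}(A, X) \land \mathrm{Nec}(A, X, Y) \land \mathrm{En}(A, Y, X)\bigr)
\;\rightarrow\; \mathrm{MR}(A, Y)
\]
```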


Deon can’t coherently explain the principle of moral substitutability

Sinnott-Armstrong 92 writes

Walter Sinnott-Armstrong (Dartmouth College). “An Argument for Consequentialism.” Philosophical Perspectives. 1992.


Deontologists might try to defend the claim that moral reasons are based on promises by claiming that promise keeping is intrinsically good and there is a moral reason to do what is a necessary enabler of what is intrinsically good. However, this response runs into two problems. First, on this theory, the reason to keep a promise is a reason to do what is itself intrinsically good, but the reason to start the mower is not a reason to do what is intrinsically good. Since these reasons are so different, they are derived in different ways. This creates an incoherence or lack of unity which is avoided in other theories. Second, this response conflicts with a basic theme in deontological theories. If my promise keeping is intrinsically good, your promise keeping is just as intrinsically good. But then, if what gives me a moral reason to keep my promise is that I have a moral reason to do whatever is intrinsically good, I have just as much moral reason to do what is a necessary enabler for you to keep your promise. And, if my breaking my promise is a necessary enabler for two other people to keep their promises, then my moral reason to break my promise is stronger than my moral reason to keep it (other things being equal). This undermines the basic deontological claim that my reasons derive in a special way from my promises.13 So this response explains moral substitutability at the expense of giving up deontology. A fourth possible response is that any reason to mow the grass is also a reason to start my mower because starting my mower is part of mowing the grass. However, starting my mower is not part of mowing the grass, because I can start my mower without cutting any grass. I might start my mower hours in advance and never get around to cutting any grass. Suppose I start the mower then go inside and watch television. My wife comes in and asks, 'Have you started to mow the lawn?', so I answer, 'Yes. I've done part of it. I'll finish it later.' This is not only misleading but false. Furthermore, mowing the grass can have other necessary conditions, such as buying a mower or leaving my chair, which are not parts of mowing the grass by any stretch of the imagination. Finally, deontologists might charge that my argument begs the question. It would beg the question to assume moral substitutability if this principle were inconsistent with deontological theories. However, my point is not that moral substitutability is inconsistent with deontology. It is not. Deontologists can consistently tack moral substitutability onto their theories. My point is only that deontologists cannot explain why moral substitutability holds. It would still beg the question to assert moral substitutability without argument. However, I did argue for moral substitutability, and my argument was independent of its implications for deontology. I even used examples of moral reasons that are typical of deontological theories. Deontologists still might complain that the failure of so many theories to explain moral substitutability casts new doubt on this principle. However, we normally should not reject a scientific observation just because our theory cannot explain it. Similarly, we normally should not reject an otherwise plausible moral judgment just because our favorite theory cannot explain why it is true. Otherwise, no inference to the best explanation could work.
My argument simply extends this general explanatory burden to principles of moral reasoning and shows that deontological theories cannot carry that burden. 


Consequentialism is key

Sinnott-Armstrong 92 writes

Walter Sinnott-Armstrong (Dartmouth College). “An Argument for Consequentialism.” Philosophical Perspectives. 1992.


The crucial advantage of NEC [necessary enabler consequentialism] lies in its unity. Other theories claim that my reason to do what I promised is just that this fulfills my promise or that promise keeping is intrinsically good. However, I did not promise to start the mower, and starting the mower is not intrinsically good. Thus, my reason to start the mower derives from a different property than my reason to keep my promise. In contrast, NEC makes my reasons to keep my promise, to mow the lawn, and to start the mower derive from the very same property: being a necessary enabler of preventing harm or promoting good. This makes NEC's explanation more coherent and better. A critic might complain that NEC just postpones the problem, since NEC will eventually need to explain why certain things are good or bad, and some will be good or bad as means, but others will not. However, if what is good or bad intrinsically are states (such as pleasure and freedom or pain and death) rather than acts, then they are not the kind of thing that can be done, so there cannot be any question of a reason to do them. This makes it possible for all reasons for acts to have the same nature or derive from the same property. NEC will still have to explain why certain states are good or bad, but so will every other moral theory. The difference is that other theories will also have to explain why there are two kinds of reasons for acts and how these reasons are connected. This is what other theories cannot explain. This additional explanatory gap is avoided by the unified nature of reasons in NEC.
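As a toy illustration of the unity claim (one derivation rule, one property, for every act in the chain), consider the sketch below. The enabling chains paraphrase the cards' feed-the-child and mower examples, but the encoding, names, and choice of terminal "good states" are entirely ours.

```python
# A toy encoding of NEC's unity claim: every reason for an act derives
# from one and the same property - being a necessary enabler of a good
# state. The chains paraphrase Sinnott-Armstrong's examples; the data
# structures, names, and terminal states are our illustration only.

GOOD_STATES = {"child is fed", "lawn is mowed"}  # stand-ins for whatever states the theory counts as good

# necessary_enabler[a] = b: doing a is a necessary enabler of b,
# where b is a further act or a resulting state.
necessary_enabler = {
    "go home soon": "feed the child",
    "feed the child": "child is fed",
    "start the mower": "mow the lawn",
    "mow the lawn": "lawn is mowed",
}

def moral_reason(act):
    """One rule for every act: follow the enabling chain and check
    whether it terminates in a good state."""
    step = act
    while step in necessary_enabler:
        step = necessary_enabler[step]
    return step in GOOD_STATES

for act in ("go home soon", "feed the child", "start the mower", "mow the lawn"):
    print(act, "->", moral_reason(act))  # True for each, via the same property
```

The design point mirrors the card: `moral_reason` never needs a second kind of rule for "derivative" acts like starting the mower, which is precisely the explanatory unity NEC claims over its rivals.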

Third, human worth

Respect for human worth would justify util. Cummiskey 90 writes

Cummiskey, David. Associate professor of philosophy at Bates College. “Kantian Consequentialism.” Ethics 100 (April 1990), University of Chicago Press. http://www.jstor.org/stable/2381810

We must not obscure the issue by characterizing this type of case as the sacrifice of individuals for some abstract “social entity.” It is not a question of some persons having to bear the cost for some elusive “overall social good.” Instead, the question is whether some persons must bear the inescapable cost for the sake of other persons. Robert Nozick, for example, argues that “to use a person in this way does not sufficiently respect and take account of the fact that he is a separate person, that his is the only life he has.” But why is this not equally true of all those whom we do not save through our failure to act? By emphasizing solely the one who must bear the cost if we act, we fail to sufficiently respect and take account of the many other separate persons, each with only one life, who will bear the cost of our inaction. In such a situation, what would a conscientious Kantian agent, an agent motivated by the unconditional value of rational beings, choose? A morally good agent recognizes that the basis of all particular duties is the principle that “rational nature exists as an end in itself”. Rational nature as such is the supreme objective end of all conduct. If one truly believes that all rational beings have an equal value, then the rational solution to such a dilemma involves maximally promoting the lives and liberties of as many rational beings as possible. In order to avoid this conclusion, the non-consequentialist Kantian needs to justify agent-centered constraints. As we saw in chapter 1, however, even most Kantian deontologists recognize that agent-centered constraints require a non-value-based rationale. But we have seen that Kant’s normative theory is based on an unconditionally valuable end. How can a concern for the value of rational beings lead to a refusal to sacrifice rational beings even when this would prevent other more extensive losses of rational beings? If the moral law is based on the value of rational beings and their ends, then what is the rationale for prohibiting a moral agent from maximally promoting these two tiers of value? If I sacrifice some for the sake of others, I do not use them arbitrarily, and I do not deny the unconditional value of rational beings. Persons may have “dignity, that is, an unconditional and incomparable worth” that transcends any market value, but persons also have a fundamental equality that dictates that some must sometimes give way for the sake of others. The concept of the end-in-itself does not support the view that we may never force another to bear some cost in order to benefit others.

Fourth, util is the only moral system available to policy-makers. Goodin 90 writes

Robert Goodin, fellow in philosophy, Australian National University, THE UTILITARIAN RESPONSE, 1990, p. 141-2

My larger argument turns on the proposition that there is something special about the situation of public officials that makes utilitarianism more plausible for them than private individuals. Before proceeding with the larger argument, I must therefore say what it is about public officials and their situations that makes it both more necessary and more desirable for them to adopt a more credible form of utilitarianism. Consider, first, the argument from necessity. Public officials are obliged to make their choices under uncertainty, and uncertainty of a very special sort at that. All choices – public and private alike – are made under some degree of uncertainty, of course. But in the nature of things, private individuals will usually have more complete information on the peculiarities of their own circumstances and on the ramifications that alternative possible choices might have for them. Public officials, in contrast, are relatively poorly informed as to the effects that their choices will have on individuals, one by one. What they typically do know are generalities: averages and aggregates. They know what will happen most often to most people as a result of their various possible choices, but that is all. That is enough to allow public policy-makers to use the utilitarian calculus – assuming they want to use it at all – to choose general rules of conduct.

Prefer state-specific justifications because of actor-specificity—most contextual to the resolutional actor.

Fifth, knowledge reduces to merely true belief. “Reliable” sources of justification aren’t necessary for the goal of epistemology, which is finding true beliefs. Sartwell 91 writes

Crispin Sartwell. Knowledge Without Justification. 1991. http://www.crispinsartwell.com/know.htm

It is widely held that our epistemic goal with regard to particular propositions is achieving true beliefs and avoiding false ones about propositions with which we are epistemically concerned. (We have seen that Alston, for one, endorses that view.) That is, it is widely admitted that on any good account of justification, there must be reason to think that the beliefs justified on the account are likely to be true. Indeed, proponents of all the major conceptions of justification hold this position. For example, the foundationalist Paul Moser writes: [E]pistemic justification is essentially related to the so-called cognitive goal of truth, insofar as an individual belief is epistemically justified only if it is appropriately directed toward the goal of truth. More specifically, on the present conception, one is epistemically justified in believing a proposition only if one has good reason to believe it is true.(22) The reliabilist Alvin Goldman claims, similarly, that a condition on an account of justification is that beliefs justified on the account be likely to be true; he says that a plausible conception of justification will be "truth-linked."(23) And the coherentist Laurence BonJour puts it even more strongly: If epistemic justification were not conducive to truth in this way, if finding epistemically justified beliefs did not substantially increase the likelihood of finding true ones, epistemic justification would be irrelevant to our main cognitive goal and of dubious worth. It is only if we have some reason to think that epistemic justification constitutes a path to truth that we as cognitive human beings have any motive for preferring epistemically justified beliefs to epistemically unjustified ones. Epistemic justification is therefore in the final analysis only an instrumental value, not an intrinsic one.(24) In fact, it is often enough taken to be the distinguishing mark of the fact that we are epistemically concerned with a proposition that we are concerned with its truth or falsity. That is what, on the view of many philosophers, distinguishes epistemic from moral or prudential constraints on belief, what distinguishes inquiry from other belief-generating procedures. (If the theory I gave in the first chapter is right, there are no non-epistemic belief-generating procedures in this sense. That fact merely underscores the present point.) I have argued that a plausible normative epistemology will be teleological. And I have claimed that the conception which accounts of knowledge are attempting to analyze or describe is that of the epistemic telos with regard to particular propositions. It would follow that, if a philosopher holds that the epistemic telos is merely true belief, that philosopher implicitly commits himself, his own asseverations to the contrary, to the view that knowledge is merely true belief. I think that this is the case. I think, that is, that in the above passages, these philosophers have committed themselves implicitly to the view that knowledge is merely true belief, and that justification is a criterion rather than a logically necessary condition of knowledge. By a criterion, to repeat, I mean a test for whether some item has some property that is not itself a logically necessary condition of that item's having that property.
Justification on the present view is, first of all, a means by which we achieve knowledge, that is, by which we arrive at true beliefs, and second, it provides a test of whether someone has knowledge, that is, whether her beliefs are true. So again, the present view does not make accounts of justification trivial, or unconnected with the assessment of claims to know. If our epistemic goal with regard to particular propositions is true belief, then justification (a) gives procedures by which true beliefs are obtained, and (b) gives standards for evaluating the products of such procedures with regard to that goal. From the point of view of (a), justification prescribes techniques by which knowledge is gained. From the point of view of (b) it gives a criterion for knowledge. But in neither case does it describe a logically necessary condition for knowledge. Another way of putting the matter is like this. If we describe justification as of merely instrumental value with regard to arriving at truth, as BonJour does explicitly, we can no longer maintain both that knowledge is the telos of inquiry and that justification is a necessary condition of knowledge. It is incoherent to build a specification of what are regarded merely as means of achieving some goal into the description of the goal itself; in such circumstances the goal can be described independently of the means. So if justification is demanded because it is instrumental to true belief, it cannot also be maintained that knowledge is justified true belief. I will now certainly be accused of begging the question by assuming that knowledge is the goal of inquiry. There is justice in this claim in that I have not gone very far toward establishing the point. But I would ask my accusers at this point whether they can do better in describing the conception which theories of knowledge set out to analyze or describe without begging the question in favor of some such theory. And I ask also, if knowledge is not the overarching epistemic telos with regard to particular propositions, why such tremendous emphasis has been placed on the theory of knowledge in the history of philosophy, and just what function that notion serves within that history. If knowledge is not the overarching purpose of inquiry, then why is the notion important, and why should we continue to be concerned in normative epistemology above all with what knowledge is and how it can be achieved? If we want to withhold the term ‘knowledge’ from mere true belief, but also want to hold that mere true belief is the purpose of inquiry, then I suggest that what remains is a mere verbal dispute. That is, if we treat mere true belief as the purpose of inquiry, but do not equate it with knowledge, then I do not think that knowledge is any longer central to normative epistemology. And I would insist that we are not going to understand what ‘knowledge’ means in the tradition, in Plato and Descartes, for example, if we do not regard them as holding knowledge to be the goal of inquiry. In fact, if it is allowed that mere true belief is the telos of inquiry, but that we should still reserve the term ‘knowledge’ for justified true belief (and perhaps something more), I will simply abandon the term ‘knowledge’ to the epistemology of justification. But first of all, as I suggested in the third chapter, I think that ‘knowledge’ will now merely be a technical term with a stipulated definition.
And second, I do not think it will be central to epistemology, since it no longer represents our epistemic goal. And third, I think the stipulated definition will either be redundant (if justification is held to be truth conducive) or, as I will argue, incoherent (if it is not).  Now it may well be held that justification is of more than instrumental value, because if we are not justified in believing p, though p is true and we in fact believe it, we may have false beliefs that lead us to p, and we may continue to generate false beliefs in the future. All of this is true, but it is irrelevant to the present point. Recall that I have characterized knowledge as our epistemic goal with regard to particular propositions. Insofar as p is concerned, this goal has been realized if p is true and we believe it. Insofar as we have also such goals as continuing to generate true beliefs, rendering our system of beliefs coherent, and so forth, it is desirable to have justified beliefs. But with regard to any particular proposition, our goal has been reached if we believe that proposition and it is true.  But I do not want simply to let the matter rest on a supposed agreement among some contemporary epistemologists that our epistemic goal with regard to particular propositions is true belief. Such epistemologists are agreed that knowledge is at least justified true belief. I think that Alston is right to think that the only plausible way to construe this claim is that knowledge is at least true belief based on adequate grounds, or true belief reached from a strong position. So perhaps the figures in question, on reflection, would describe the epistemic telos not as true belief but as true belief based on adequate grounds, or true belief reached from a strong position.  Only it must now be asked, why do we want to have adequate grounds? Why do we want to be in a strong position? This question ought to be misguided if true belief based on adequate grounds or true belief reached from a strong position is in fact the purpose of inquiry. For there is no good answer to the question of why we desire our ultimate ends. But the question is hardly misguided. In fact, we cannot even specify what it is to have adequate grounds except that these grounds tend to establish that the proposition in question is true; we cannot even specify what it is to be in a strong position except as being in a strong position to get the truth. This indicates that the purpose of inquiry can be formulated without reference to the notions of ground or position. Thus, on the views in question, believing the truth is in fact our overarching epistemic telos with regard to particular propositions, on the only plausible conception of justification. Hence, on these views, knowledge is merely true belief.

Deon fails. Epistemology is fundamentally teleological. Sartwell 91 writes

Crispin Sartwell. Knowledge Without Justification. 1991. http://www.crispinsartwell.com/know.htm

The most elaborately developed normative theories are in ethics, and thus normative epistemology often relies on a parallel to ethics. Ethical systems have been divided into two kinds: deontological and teleological. Proponents of the former think of moral action as what is done in obedience to principles which serve in turn no end that could be looked on as an overall moral goal. Moral action is to be specified in terms of obligation and permission. If I do only what is permissible (possibly, if I do it because it is permissible), or what is demanded by duty (possibly, if I do it because duty demands it), then I am not subject to ethical disapprobation even if the result of my action is disastrous. According to proponents of teleological ethics, on the other hand, an action is morally good when it conduces to some goal, for example, the greatest happiness of the greatest number, or if it is in accordance with some rule the observance of which so conduces. Similarly, there might be two sorts of normative epistemology: one which prescribes duties and permissions in generating beliefs (and other propositional attitudes) without regard to any overarching epistemic goal, and one which prescribes some goal for epistemic activity, and recognizes the legitimacy of any procedure that conduces to that goal, or, alternately, of any procedure which accords with certain rules the observance of which in turn conduces to that goal. I have asserted that knowledge is the goal of inquiry. But this supposes that inquiry has some goal, which would be denied by a proponent of deontological normative epistemology. So we had better start with a discussion of whether that position is plausible. The taxonomy of normative epistemology suggested by this particular parallel to ethics has been developed by William Alston. Because my discussion follows his to some extent, I should pause here to differentiate my use of terms from his. Alston uses the term ‘deontological’ to distinguish systems which epistemically prescribe, proscribe, or permit certain beliefs or belief-generating procedures, from what he terms ‘evaluative’ systems, which merely assess certain beliefs and procedures from the standpoint of some standard.(19) He points out that it is not the case that all standards of evaluation depend on such concepts as obligation and permission, that not all standards carry with them the implication that the subject is praiseworthy for meeting them or blameworthy for violating them. For example, to say that some person is beautiful is to evaluate her appearance positively, but it is not to say that she is praiseworthy for her appearance, since she may not be responsible for it; it may be a genetic endowment.(20) The relevant point here is that both sorts of systems (Alston's ‘deontological’ and ‘evaluative’) are what I term ‘teleological’; he describes both as being directed to the goal of generating true belief and avoiding false belief. It may be a question, then, whether any philosopher has seriously held a deontological position in my sense, has seriously held that we have some epistemic obligations but that there is no overarching goal of inquiry. Some extreme idealists and positivists, who identify truth and justification, may harbour such a view.
If one has a coherence theory of truth and also a coherence theory of justification, for example, then one may simply count as knowledge whatever beliefs are generated by whatever procedures turn out to embody justification; if it was supposed to be a sheer fact that we ought to follow such procedures, if there were no further goal in mind, this would be a deontological position in my sense. The notion of knowledge is in some sense superfluous on this position; at least, it does not describe a distinctive purpose for inquiry above the fulfillment of certain duties or obedience to certain rules. Clearer examples of deontological views could be proposed: for example, believe all and only the propositions contained in the Bible, or in the writings of Mao. Deontological views in my sense have, these days, few proponents, and seem on the face of it extreme and implausible. Their implausibility can be brought out in the following way. What is the source of our epistemic obligations? Or, to put it another way, is there any good reason to think that we have any distinctively epistemic obligations at all, in the absence of some overarching purpose for inquiry? The same problem arises for deontological moral theories, but here there are plausible, or at least fairly widely proffered, answers: our moral obligations derive from God, for example, or from the state. Again, it is possible that the very same sources yield our epistemic obligations. But to establish this, we would have to give good reasons to think that God does impose epistemic obligations, or to give an account of the "epistemic legitimacy" of the state. Furthermore, there no longer appears to be any distinction between moral and epistemic constraints on the generation of beliefs. There no longer appears to be any distinctively epistemological enterprise.

Util coheres with the fact that knowledge reduces to mere true belief.

Petersen 11

Steve Petersen (Niagara University). “Utilitarian Epistemology.” February 10th, 2011. http://stevepetersen.net/professional/petersen-utilitarian-epistemology.pdf

To ask “why is knowledge of more instrumental value than mere true belief?” is, on this picture, like asking “why are earned profits of more instrumental value than monetary windfalls?” The answer to the financial version of this question is clearly that the earnings are not more valuable. By analogy, then, neither is knowledge. The epistemic utilitarian embraces this conclusion and denies the intuition that knowledge is better than mere true belief, even on the instrumental version of the value question. The reason, illustrated by the analogy, is fairly simple: like anything but welfare, epistemic states are at best of instrumental value, and (as we noted earlier) generalizations about instrumental value only make sense under uncertainty. Generally charity is more valuable than murder, but to the classical utilitarian (and to the classical utilitarian alone) it is not sensible to ask “why is a charitable act more valuable than a murder that results in the same amount of utility?” To assume there is an answer here begs the question against the utilitarian. The same goes, one step down the instrumental chain, for the question “why are earnings more valuable than windfalls?” Under uncertainty, investments with high expected monetary value are in an important, instrumental sense more valuable than those with poor expected monetary value, but this question builds in the assumption that both result in the same monetary value (given of course that all else is equal). Finally, the same goes for knowledge and lucky-but-true belief; in the description of the case, both have gotten the relevant epistemic (instrumental) good. To stipulate that, despite the odds, luck-sensitive belief formation nonetheless resulted in a true belief is just like stipulating that the murder under consideration ended up net benefitting people, or that the stupid casino bet ended up paying off.