Why aren't you a utilitarian yet?
Materialism is the bluepill
i wasn't vaccinated as a child
cuz i aint no gay
Utilitarianism is the dumbest and gayest Ethical Theory.
It's fucking wild how popular utilitarianism still is as an ethical theory, considering how out of place it is outside the 19th century.
I'm not a utilitarian because I've yet to hear any version of it that was actually applicable to real-life decisions. When you take into account the fact that you have to either cultivate good habits or live by strict, exceptionless rules in order to act morally under most circumstances, because you aren't primarily a moral computer, then all utilitarianism just collapses into a kind of virtue ethics or deontology.
There is no secular argument against harvesting the organs of mentally impaired humans with intelligences less than or equal to livestock.
not a tween anymore
Intelligence = the potential for good or bad experiences?
Yes
Because it's an entire philosophy built on nothing, since we all know the obvious ways to avoid doing harm already, and none of the subtler questions ever get addressed.
because its fucking retarded
Because haha guys should fuck my wife if they enjoy it more than i do.
Because I’m not a subhuman Anglo
Because...
I hate..
NIGGERS
MUSLIMS
GAYS
TRANNYS
FEMINISTS
JANNIES
based
I am a utilitarian.
Everyone in this thread is just buttmad because an Englishman or mathematician fucked their crush. Not my fault us English mathematician Chads are so lusted after.
This is the closest any of you have come to making any sense. But my dear friend, it isn't utilitarianism that collapses into virtue ethics or deontology, but rather the other way around. I'd ask you to read On What Matters by Derek Parfit, but it's 2,000 pages, so just read a summary.
There is no real reason to be.
>Why aren't you a utilitarian yet?
I am, and it's why I'm a white supremacist
Then let us cut you open and give your organs to someone smarter.
It does not take seriously the distinction between persons.
everyone is secretly one
I am to a degree, but I only want good things for Slavs and other Orthodox people.
i am, and a fascist
spbp
>you aren't primarily a moral computer, then all utilitarianism just collapses into a kind of virtue ethics or deontology.
Oof brother. No it doesn't. You have to separate things based on the fundamentals. Utilitarians have never, repeat, NEVER, asserted that in all decisions one makes, we should go back to the greatest happiness principle and deduce how we optimize happiness. Bentham, Mill, and Sidgwick all recognized the impracticality of trying to refer back to the greatest happiness principle for every decision. However, they believed we could form guidelines of behavior informed by empirical analysis and the greatest happiness principle.

All geometry reduces back to a limited set of axioms. Am I not allowed to just use the Pythagorean theorem to calculate a distance in a coordinate system, or must I explain how the parallel postulate and everything else culminates in that conclusion? It is thus essential for a utilitarian to evaluate rules and virtues for practical reasons, yes, but if you evaluate whether we OUGHT to value or not value rules and virtues based on the happiness they statistically produce, you're a utilitarian. This is what utilitarians are doing when they write on applied ethics topics. Someone like Kant, on the other hand, would evaluate rules based on the categorical imperative and his concept of duty, while Aristotle would ask which rules lead to human flourishing via the perfection of the human function.
Superficially you may find similarities between moral systems. To differentiate you have to look at fundamentals.
But I am more intelligent than livestock.
That's simply not measurable. Intelligence is not some quantifiable value you can assign to this or that.
We always judge from the perspective of the human mind, but how intelligent are we really, seeing how we are literally just a few hundred years from total extinction?
Different user, but isn't it happiness that is not measurable?
How would you measure anything like that?
I have no clue. Isn't this what utilitarians purport to do? What are we talking about?
what is this philosophy called?
>How would you measure anything like that?
In sps - stoves per second
NazBol Gang
>That's simply not measurable
It most definitely is. Maybe not with extreme accuracy, but we understand most pigs are capable of empathy and other social behaviors dissimilar to things like reptiles or fish, some of which are said to not even have the brain power to feel pain. Pigs are also stated to have certain cognitive abilities on par with a 4 year old. There are definitely ways in which we could measure mental impairment as a comparison to livestock. Even the most vegetative states a human would be less intelligent than pigs, say a lobotomized human for instance. My thinking with the argument of human harvesting is that we justify the slaughter of animals based on our percieved intellectual differences. Think of the way we treat plants, or for some, bugs. We are of the understanding that these organisms do not feel pain, and thus taking their lives shouldn't bring us guilt. We do make intelligence judgements, if you think that we don't, or can't, you are not looking deep enough. My argument is that we should logically be slaughtering humans below the threshold of whatever the highest intelligent animal is that we slaughter already, which on mass is probably pigs. There is no secular argument against this. One could suggest that humans have souls, and animals do not. Even then, most modern nations do not acknowledge or judge by the existence of a soul within their system of laws, so why should a human with intelligence less than or equal to that of a pig ever be protected? One protection would be the same that some pigs possess today, and that is ownership and pet rights. Some countries allow citizens to hold pigs a pets, and thus those specific individuals are obviously exempt from slaughter. So, we could revise the logic to say that only unwanted mentally impaired individuals should be slaughtered and harvested. This brings up a very interesting tangent as well, which links to a point I made earlier about children at the age of 4 sharing certain intelligence functions with pigs. I would suppose, that an adult pig is probably all around more intelligent than a fetus, or an infant, or maybe even a year old child. Whatever the cutoff is, there should be an equal lack of protection for the children below it with intelligences less than pigs. The difference is that most parents want their children to live, and thus they fall under the same "wanted ownership" as the pet pigs do. But what if the child or baby is unwanted? He should be slaughtered and harvested, logically speaking. We already ARE making this judgement call with abortions. We deem fetuses at a certain point in the womb as being unintelligent, and thus "safe" to kill. I think, with our treatment of pigs, it would be most logical to see unintelligent humans treated in this way in the future. I don't think it's super likely, for the same reason it exists today, in that we pull curtains over our eyes and play doublethink with our relationship with slaughtering livestock.
>pigs are almost as smart as we are so intelligence is measurable
Who told you that intelligence is determined by humans? Who even told you that there is anything to this concept and that intelligence isn't just entirely made up?
Crime and Punishment murdered utilitarianism for all time.
Morality is for the people you care about, not the people you don't. There's no moral reason to do things for people you don't care about. Morality is just sympathy for people. If you don't have that, then it is just a spook.
Meanwhile, for the people you do care about, it is wrong to think of it as caring about their happiness. You care about their will, that is, their goals, wants, and values. Hypothetically, happiness should come if they achieve these. But happiness is not the goal itself.
I never claimed humans did or did not fabricate this perception of intelligence, only that we exercise this intelligence judgement with our treatment of life currently (and in law), and thus it should enable other avenues that it currently does not, like the harvesting of humans. I'm not saying that we are morally correct in any of this. I'm not, for instance, saying we should or should not harvest plants for fear of taking their lives. I am only stating that we do, and we do so because we have deemed them unintelligent. If A is true, then B should also be true by the same logic. That is all that I am stating. I would have hoped my lack of bias would have been clearer in the way I talked through this.
If you want to glean some sort of vindicative judgement from my angle, I think this user summarizes it pretty well. If we were forced to come to terms with our meat production process, we would not partake in it. Most people in developed countries would not slaughter a pig by hand to eat it; some would not even watch a slaughter to eat it. Meat would become more scarce, and the methods of harvesting would be more humane. Currently, meat is neither scarce nor humanely harvested, and we ignore the moral fallacy of it all for the benefit of a luxury in surplus. The vast majority of meat consumed on the planet is not consumed out of necessity. You need only look at livestock populations to prove this. The fact that an act exists that is deemed so intolerable even to observe by the same people who reap the benefits of the act is a pure moral contradiction.
Do paragraphs have utility value?
Mill would probably have some sympathy with this view, as he thought the only intrinsic good was happiness. However, he'd add that out of rational self-interest it becomes natural for us to reason about the interests of a collective. If one country attacks yours, that affects you. If the education system is terrible, that will eventually affect you even if you had good home schooling, say if you wanted to start a business but couldn't find intelligent enough workers. I could go on. Almost all individual problems have some root in the collective.

Where Mill goes from here is that if you continue to consider others as a rational extension of your self-interest, you will eventually care for the collective on an instinctual, emotional level rather than on a merely self-interested and rational one. An attack on society provokes a visceral reaction akin to an attack on yourself, even if you think you can avoid most of the negative impacts. There's an analogy he uses elsewhere that applies here: while we do not initially value money in itself but instead value what we can get out of it, we eventually learn to love merely having money by positive association, even if we do nothing with it. Similarly, continue to care about society as a rational extension of your self-interest, and you will eventually find yourself valuing society's well-being as a whole in itself.

The egoist who does not eventually adopt a utilitarian stance, then, is someone who is not actually looking after his own self-interest but instead erroneously sees himself as an island that stands alone, or he is someone who is abnormally unable to form the positive association through this regular exercise of extended concern, as with, say, a sociopath.
This is talking about practical and realistic egoism. It's not an argument for a morality of utilitarianism. I would just find some commonality with them.
It also jumps from saying that problems of others will have an effect on me to saying I should help them, which is true in some cases but not in others. If the tides are great enough, especially, it will often be better to prep yourself, dig a bunker, and hoard some things.
The argument is less that it will always be in your self-interest to help others and more that you will eventually just find yourself seeing the common interests of society as an extension of your own if you keep exercising concern over it, initially for selfish reasons. The argument is less convincing in individual cases, as with, say, whether you would care if you heard that some stranger got shot a city over from yours, and more convincing when you consider whether you'd care if the murder rate in your country were climbing, even if you had armed security that could guarantee your safety. Most people desire that positive metrics for society keep indicating improvement.
Also, on helping people. It's actually irrational from a utilitarian perspective to help people at all times as a rule, especially if doing so would be damaging to yourself. For if that were a societal trend, misery would go up. Utilitarianism shouldn't be confused with altruism.
I also don't see a sort of psychological egoism and utilitarianism as necessarily in conflict. It's common for philosophers to be both.
I thought that the question was more about the end goals. As in, a utilitarian will be for the ultimate greatest good, not the immediate one. And so preserving myself in the short run so that I could help others in the long run would be utilitarian. If I'm only protecting others to help myself, then it isn't.
I don't disagree that they could overlap, but it at least has to start with egoism and sort of trickle out onto others. But it might trickle out unevenly. And that would all be "altruism" nested within egoist morality, not egoism nested within altruism.