[quote]dhickey wrote:
orion wrote:
Well, if I look at that in an "economic" sense, you would want me to make a utilitarian argument.
Something like "if more good than harm comes from action X, we should do it". However, I know a thing or two about consequentialist ethics. Not only do you have no way of determining when we have done "more good than harm", because we cannot objectively measure it, but you leave yourself wide open to fallacies like the Nirvana fallacy or the ever-popular "well, had we not done that it might have been worse", which really is the Nirvana fallacy in reverse. What makes these fallacies so popular is that within the system of a consequentialist ethic you never know what might have happened: you are not weighing certain outcomes but probabilities of outcomes, something you, as a mortal being, cannot possibly do.
Ok, now we are getting somewhere. When it comes to a particular policy or intervention, I absolutely agree. I don't think that is what we are talking about here. We are picking up individuals that have done or would do us harm. If we gather information that leads to an actionable offensive that in turn thwarts an attack, the benefit could be weighed. If you are talking about broad policy like our foreign policy, wiretapping, concentration camps, firebombing, the dropping of an atomic bomb, etc., I would agree that there are entirely too many variables to calculate.
I am curious how you would apply this to a specific tactic like interrogation. Are you saying there are too many consequences in interrogating known enemy combatants and that we can't actually assess the results? I am not convinced of this.
So what my economic training tells me is that you have no way of knowing whether you actually helped or hurt your case by torturing someone.
I think you would agree that we can measure accurate information gathered? I am assuming you feel we cannot calculate the harm that may have been done? I don't agree, so I will ask what harm you think we cannot account for.
What my life experience tells me is that in every area where governments are allowed to make utilitarian arguments, especially in a democracy, everything goes to shit. Every time a politician wants to do anything "for the greater good" that would otherwise not be permissible, I am almost by instinct against it.
I would agree, but in this case what is the alternative? Optimal is not always an option. Sometimes you have to play the hand you were dealt. I don't want to get into an argument about whether or not we dealt the hand; it's our hand nonetheless.
Do we ignore attacks on our soil? Do we ignore the reality that other attacks are being planned?
But let us say I made a utilitarian argument:
In the long run, introducing torture to the US will hurt more Americans than it saves.
How so? And how about in relation to other tactics that opponents of torture seem to support?
So what have we gained?
You predict one outcome, I another, and the truth is that neither of us knows.
Well, as in economics, we have to apply logic. An argument must be constructed and attacked.
America's history is full of interventions that seemed like a good idea at the time and weren't.
Not arguing this. We are talking about a specific tactic that should have measurable outcomes, good or bad, much like the jailing, punishment, or any other coercion that takes place abroad or at home. I am starting to think that you are lumping the interrogation of enemy combatants in with the overall conflict. I would like to separate the two, as I think that was the original intent of the thread.
So, when we simply cannot know what will come out of this, why not behave like decent human beings?
What is a decent human being? Where is the line? "Interrogation" and "torture" are just words. Who decides what moral or "decent" interrogation techniques are? Do we decide unilaterally? Do we come to common ground with those we are in conflict with?
[/quote]
I do not think that you get the core of my argument.
What you are trying to do is utilitarian reasoning.
Even Bentham, who "invented" it, knew that it requires a "util", a basic unit with which to measure utility.
If such a thing existed it would all come down to a mathematical equation.
If action X produced an outcome of 70 utils with 30% probability and an outcome of 180 utils with 70% probability, its expected utility would be 70 x 0.3 + 180 x 0.7 = 147 utils. You could then, in theory, subtract 147 utils from another person, e.g. by hurting them quite a bit, and the result would be ethically neutral.
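To make that arithmetic concrete, here is a minimal sketch in Python. The utils and probabilities are just the made-up numbers from my example, not anything measurable, which is exactly the point:

[code]
# Minimal sketch of the expected-utility arithmetic above.
# All numbers are the hypothetical ones from the example; no real
# "util" exists, so nothing here corresponds to a measurement.

def expected_utils(outcomes):
    """Sum of utils * probability over all possible outcomes."""
    return sum(utils * prob for utils, prob in outcomes)

# Action X: 70 utils at 30% probability, 180 utils at 70% probability.
gain = expected_utils([(70, 0.30), (180, 0.70)])
print(gain)  # 147.0

# On a utilitarian ledger, inflicting 147 utils of harm on someone
# else would then net out to exactly zero, i.e. "ethically neutral".
print(gain - 147)  # 0.0
[/code]

The code runs just fine; the problem is that none of its inputs can ever actually be observed.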
But as basic welfare theory shows, it does not quite work that way, for three reasons.
First of all, we have no util. That should be quite clear: the very fact that marginal utility diminishes means that utility cannot be cardinal, and therefore cannot be compared between human beings.
Second, we do not know which outcome has which probability, and we simply cannot know.
Third, even if we knew, we are responsible for all outcomes, ad infinitum. That means you simply cannot know, because we are talking about infinite possibilities with no logical cut-off point.
To sum it up, utilitarianism is like Keynesianism: untenable, but it allows us to justify our most stupid ideas.