Oy, it seems as if you want to redefine things for your own purposes. I will attempt to explain this again, but I will respectfully ask you to please go and read some of the history and philosophy of science, including formal logic. I can tell you that no philosopher of science I have seen agrees with you that deduction is widely used in doing science.
Let's try this again.
BGoodForGoodSake wrote:I understand this. When making a hypothesis the deduction is made assuming that the theory is true. The logical process is the same the only difference is that it is acknowledged (after the deduction has been made) that the original assumptions are falsifiable.
I am not sure you do understand this. As you know, the form of deductive reasoning is the syllogism. In its simplest form:
1. Major Premise
2. Minor Premise
3. Conclusion
According to your reasoning, you take the general theory, assume that it is true, and that becomes your major premise. However, the whole reason for entering the argument in the first place is that the theory is uncertain; otherwise, why do it? You are trying to support your theory, but you cannot absolutely prove it, as described below. By definition, that process is inductive reasoning, not deductive reasoning, and you said it yourself in your last sentence: it contains uncertainty as to the validity of the premise, and therefore of the conclusion. Uncertainty of this kind belongs to induction, not deduction. And this is exactly the problem that Newton ran into with his hypothetico-deductive method. Read on:
The logical argument for this method is:
1. If A, then B.
2. B.
3. Therefore A.
Unfortunately, this is not a logically valid argument. It commits the fallacy of affirming the consequent.
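To make this concrete, here is a quick brute-force truth table (my own sketch in Python, not anything from your post): an argument form is invalid if some truth assignment makes every premise true while making the conclusion false.

```python
from itertools import product

def is_valid(premises, conclusion):
    """An argument form is valid iff every truth assignment that
    makes all premises true also makes the conclusion true."""
    return all(conclusion(A, B)
               for A, B in product([True, False], repeat=2)
               if all(p(A, B) for p in premises))

# Affirming the consequent: If A, then B; B; therefore A.
result = is_valid(
    premises=[lambda A, B: (not A) or B,  # 1. If A, then B
              lambda A, B: B],            # 2. B
    conclusion=lambda A, B: A)            # 3. Therefore A

print(result)  # False -- A=False, B=True satisfies both premises but not A
```

The counterexample is exactly the case where B happens for some reason other than A.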
No, this is incorrect. It is as follows:
1. If A, then B.
2. ~B.
3. Therefore ~A.
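The same brute-force truth-table check (again a sketch of my own, not from your post) confirms that this form, modus tollens, has no counterexample:

```python
from itertools import product

def is_valid(premises, conclusion):
    # Valid iff no assignment makes all premises true and the conclusion false.
    return all(conclusion(A, B)
               for A, B in product([True, False], repeat=2)
               if all(p(A, B) for p in premises))

# Modus tollens: If A, then B; ~B; therefore ~A.
result = is_valid(
    premises=[lambda A, B: (not A) or B,  # 1. If A, then B
              lambda A, B: not B],        # 2. ~B
    conclusion=lambda A, B: not A)        # 3. Therefore ~A

print(result)  # True -- the only assignment satisfying both premises is A=False, B=False
```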
Let's substitute: T= the theory and D=the data set that is expected if the theory is true. You would then have, according to your reasoning:
1. If T, then D. (If the theory T is true, then, we will see the data set D)
2. D (We measure/observe/confirm the data set D)
3. Therefore T (The theory is confirmed as true)
This is exactly what you have been describing all along: we take the theory, then we test it by measuring, and through that we confirm whether the theory holds true. This is not a valid logical argument; it commits the fallacy of affirming the consequent. The valid logical form is:
(This is the Modus Ponens form of the argument.)
1. If T, then D.
2. T.
3. Therefore D.
This is unusable in the world of science, since you have not proven either T or D to be true. T is the result of an inductive process, by your own definition, while you cannot ever conclusively arrive at D without measuring all the data related to the theory, everywhere. Also, there may be other explanations for D besides T.
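Putting the three T/D forms side by side (a sketch of my own in Python, not anything from your post), only affirming the consequent fails the truth-table test; the trouble for science, as argued above, lies in establishing the second premise empirically, not in the form itself:

```python
from itertools import product

def is_valid(premises, conclusion):
    # Valid iff every truth assignment satisfying all premises
    # also satisfies the conclusion.
    return all(conclusion(T, D)
               for T, D in product([True, False], repeat=2)
               if all(p(T, D) for p in premises))

if_t_then_d = lambda T, D: (not T) or D   # 1. If T, then D

results = {
    # 2. T / 3. Therefore D
    "modus_ponens": is_valid([if_t_then_d, lambda T, D: T], lambda T, D: D),
    # 2. D / 3. Therefore T  (the fallacious form)
    "affirming_consequent": is_valid([if_t_then_d, lambda T, D: D], lambda T, D: T),
    # 2. ~D / 3. Therefore ~T
    "modus_tollens": is_valid([if_t_then_d, lambda T, D: not D], lambda T, D: not T),
}
print(results)  # modus_ponens and modus_tollens valid; affirming_consequent not
```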
You correctly stated the Modus Tollens form of the argument:
1. If T, then D.
2. ~D.
3. Therefore ~T.
Similarly, what makes this problematic in the case of science is that you cannot empirically verify ~D unless you measure all the data relating to the theory, everywhere. You cannot logically and absolutely say that ~T is true, since you have not conclusively established ~D. Also, read below on the Duhem-Quine problem.
So, back to inductive and deductive reasoning. From the above, we can say that in science, in either form of the argument, we only ever see a subset of the dataset; it is never complete, so the best we can do is say that it is a good approximation of the data we expect to see under normal conditions everywhere. Then, since we cannot say with certainty that D is true, and we arrived at T through inductive reasoning as well, there is no way the argument constitutes a deductive one. It is inductive all the way; otherwise it is logically fallacious.
No the empirical consequences of the theory comes from the ability to test conclusions, reached by deduction, from the assumption, that the theory is correct. As you can see from the logical statement above a theory can be tested. We can only verify that a theory is false.
Nope, that is wrong, as demonstrated above. But I think you misunderstood what I said.
If what you say holds true, then no theory can ever be even approximately true, because one will never dismiss the theory; one will just dismiss one of the primary or auxiliary assumptions. Sure, a theory can be tested, but to test the theory you need to assume as true a whole lot of other things from outside the theory. This dependence on background assumptions is called the Duhem-Quine thesis:
"Any experiment taken to disprove an hypothesis can be rendered compatible with that hypothesis by denying instead an 'auxiliary assumption.' Thus, individual hypotheses cannot be disproven."
It follows that the empirical consequences of the theory depend on those outside assumptions as well, and you cannot conclusively falsify (or prove) the theory without proving or disproving the assumptions too. You can only test the whole package of theory plus auxiliary assumptions. If we have reason to believe that the background assumptions are acceptable, i.e. we assume them, then we can rationally, if inconclusively, regard the theory as confirmed or disconfirmed depending on the empirical test.
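The Duhem-Quine point can be sketched the same way (notation mine, not from your post: T is the theory, A1 and A2 are auxiliary assumptions). Modus tollens on a failed prediction only yields ~(T and A1 and A2), and among the truth assignments compatible with that, T itself can still be true:

```python
from itertools import product

# Assignments compatible with a failed prediction: ~(T and A1 and A2).
survivors = [(T, A1, A2)
             for T, A1, A2 in product([True, False], repeat=3)
             if not (T and A1 and A2)]

# The full package (T, A1, A2 all true) is ruled out...
print((True, True, True) in survivors)    # False
# ...but the theory alone is not: one can blame an auxiliary assumption instead.
print(any(T for T, A1, A2 in survivors))  # True
```

In other words, the failed test pins the blame on the conjunction, never on T in isolation.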
I understand this quite well. What I don't think you seem to be grasping is that by using deduction and then falsifying the conclusion it is possible to falsify the original assumption. In this way all assumptions are assumed to be just that and are subject to falsification.
Uh, ok, wait. I be lost. So what you are saying is that we are going to assume that all the assumptions are assumed, and then, through the process of deduction, we are going to confirm that we assumed our assumptions?
Also, you are not necessarily falsifying the original assumption: if you refer back to the Modus Tollens argument above, it is only "maybe" falsified, unless you measured all the data, everywhere... otherwise it is an inductive argument and not a deductive one.