Well, to an extent the statement is true. Experimentation begins with a simple statement, a statement of what a researcher believes will happen under certain conditions. The hypothesis is formed and is essentially an educated guess. An experiment is then done to either reject the null hypothesis or fail to reject it. Yet the original hypothesis is still a guess as to what will happen. The experiment is meant to provide evidence to support that hypothesis or belief. Time, money and effort go into this hypothesis, as others have faith in the researcher's belief. There is no actual evidence, though, until the experiment is performed. So at the core of science is a belief with a lack of evidence. The scientific method and methods of research are designed to gather evidence, to provide a framework that others can review and critique in regard to the evidence.
No, it doesn't quite work like this. A hypothesis (in effect, a speculation) is indeed formulated to account for what the researcher believes will happen under certain conditions - it may be consistent with what is already known and well reasoned, but it remains, before testing, as you say, an educated guess. However, it is a mistake to think that one experiment is all that is required to establish the veracity of a hypothesis; many, many other experiments need to be performed, in order to establish not only that the hypothesis accords with reality, but that it is also resistant to various methods of disproof. For example:
Hypothesis: 'Any card I turn over from the top of a deck will be of the suit of diamonds.' (presuming, for the sake of argument, that the researcher has observed facts that lead him or her to believe this)
Experiment: the researcher turns over the top card. It is the jack of diamonds.
Now, are you suggesting that, on the basis of this one experiment, the hypothesis is viable enough to be considered an acceptable model of reality?
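(As an aside, to put a rough number on how weak a single confirmation is, here is a minimal simulation sketch - my own illustration, not part of the original example; the deck construction, function names and trial counts are assumptions made purely for demonstration. It counts how often the false 'always a diamond' hypothesis survives a test run by sheer luck.)

```python
import random

# A standard 52-card deck: 13 ranks in each of 4 suits.
RANKS = list(range(1, 14))
SUITS = ["diamonds", "hearts", "clubs", "spades"]
DECK = [(rank, suit) for rank in RANKS for suit in SUITS]

def one_experiment():
    """Shuffle the deck and turn over the top card; report whether the
    result 'confirms' the hypothesis (i.e. the card is a diamond)."""
    deck = DECK[:]
    random.shuffle(deck)
    _, suit = deck[0]
    return suit == "diamonds"

def survival_rate(experiments_per_test, repetitions=100_000):
    """Fraction of test runs in which the (false) 'always a diamond'
    hypothesis survives every experiment in the run purely by chance."""
    survived = sum(
        all(one_experiment() for _ in range(experiments_per_test))
        for _ in range(repetitions)
    )
    return survived / repetitions

print(survival_rate(1))  # ~0.25: a single draw 'confirms' the hypothesis a quarter of the time
print(survival_rate(5))  # ~0.001: five independent draws almost always falsify it
```

One lucky draw, in other words, tells you almost nothing; it is the repeated attempts at disproof that do the work.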
You can see here the flaw in the idea that experiment exists only to prove any given hypothesis (please note that I accept that 'proof' is really only applicable in mathematics; I'm simply using the concept here as a more concise way of saying 'that which is supported by reality'); experiment is there to disprove a hypothesis, and if it resists all possible attempts at disproof, the resultant data is then passed on to other scientists for peer review. Here, the experiments are repeated again and again, by scientists who are not there to support their 'faith' in the initial experimenter's 'belief', but to again test the hypothesis against disproof - this is necessary to guard against any bias on the initial scientist's part. Again, if the hypothesis holds up against the tests, and the data these scientists produce accords with the initial data, then the hypothesis becomes a theory. (There's more to it than that, but that, in essence, is the process in a nutshell.)
Now, theory, it is true, has often been conflated with the concept of hypotheses or speculation, but a genuine
scientific theory must hold to the following principles:
1. That it is supported by whatever facts are available.
2. That any present facts do not falsify it.
3. That it is, however, potentially falsifiable (I've detailed why this is important in my earlier post).
4. That it remains testable.
5. That it has predictive power.
A theory, therefore, is an explanation that is constantly being tested against reality
- and the longer it survives these tests, the stronger it becomes, to the point where it reaches such a high state of probability as to be considered effectively true. It is not
simply about 'believing' in a hypothesis; it's about offering a model of reality which is constantly
tested against the facts as they are observed or emerge. For an example of a good, working theory with a high degree of probability, let's take Evolution by Natural Selection, and see how it accords with the principles outlined above:
1: It is supported by many lines of evidence, such as the fossil record, chromosomal fusion, mitochondrial DNA, and many other items too numerous to go into here.
2: There are none: despite what many creationists say (without testable evidence), the fossil record, for example, clearly delineates a process of simple organisms evolving into more complex ones. We do not have polar bears native to the Sahara desert; likewise, we do not have elephants native to the Arctic Circle.
3: As Haldane so bluntly put it: 'the discovery of rabbit fossils in the Precambrian.'
4: Every palaeontological dig is, in effect, a test of the theory of evolution. So, too, is Lenski's long-term E. coli evolution experiment (which I'll go into below).
5: Perhaps the trickiest condition to fulfil, in that an enormous number of processes contribute to evolution, and it occurs over a long timescale. However, it is possible to observe it when you have creatures that can span generations within a very short space of time, such as bacteria - as happened with the long-term E. coli experiment. To cut a long story short, a number of E. coli populations were separated from an initial culture, to see how they would fare in their own, exclusive micro-environments. Since then, we already have a number of evolutionary adaptations within certain populations. But the most astonishing thing about this experiment was that it disclosed the first seeds of speciation, as predicted by the theory: one population suddenly had a massive growth spurt. Why? Because it had evolved the ability to draw energy from a previously indigestible material (citrate).
You can find more about this experiment here: http://en.wikipedia.org/wiki/E._coli_long-term_evolution_experiment
I hope this demonstrates that by no means could one consider the core of science as being one of 'belief with lack of evidence'. It is, in fact, the opposite: understanding on the basis of evidence.
I do wonder if there is an understanding that there are forms of subjective measurement, such as surveys and questionnaires. Science does not disregard the subjective, but instead makes tools to measure subjective information. Subjective information is important and is taken into account in various ways, but it cannot truly be measured through direct observation. For instance, pain is a subjective form of information, and so a researcher must develop a tool to measure it. Science cannot eliminate the subjective, because often the subjective is what is being studied. My feeling is that you are attempting to discuss researcher bias, which goes far beyond simple subjective perception.
When science utilises tools to measure information that is subjective to someone, it always strives - where both necessary and possible - to do so by objective means. In the case of pain, for example, one could objectively measure the subject's physical response to it - perspiration levels, for instance, the release of stress hormones, rate of respiration, and so on. In most cases, however, it is fair to assume that someone is in pain when they say so, so no tests need to be done. But an objective means of testing pain is certainly there, simply by assessing the body's physical responses. Besides, if pain were indeed wholly subjective, then how come morphine is so effective at reducing it?
There are types of research that utilize subjective perception in another fashion. This might be gathering eyewitness reports of an event, such as a nuclear blast, in order to study the effects of the blast. Researchers might interview victims of rape to understand trauma, or interview veterans of a particular battle to understand the stress of combat. That information can then lead to a hypothesis - once more, an educated guess. From there, experimentation can begin.
Suffice it to say, the police and judiciary strive to attain as much forensic evidence - such as DNA fingerprinting - as possible in a court case; it is never, where at all possible, left solely to eyewitness reports (though it does, admittedly, depend upon the severity of the incident in question). In the case of a nuclear blast, there aren't likely to be any witnesses left. The subjective reports gathered in trauma situations certainly do help in understanding the objective circumstances that induce it; however, the science resulting from them must be as objective as possible in nature - and not what people may subjectively make of the information - otherwise it explains very little and helps very few.
Once more, I think there is some confusion about the idea of logic. Logic does not need experimentation to be shown to be valid. Logic is an exercise of thought. A = C, B = C, therefore A = B. That is a logical proof. There is no experimentation there, and no testing against reality. Two premises are proposed (A = C and B = C) and then a valid conclusion is reached (A = B). Socrates typically showed people how their logic led to poor judgments and to statements that they did not find to be true. Logic does not have to be true; logic just has to be valid. There is a big distinction here.
For instance: my car is a Honda. All Hondas are blue. Therefore, my car is blue.
We all know that determining the color of my car from a simple statement about the car company does not work. The argument is valid, though - just not true. Were I to turn that in to a professor in a course on logic, he would consent to the validity of my argument. Hopefully he would not consent to the conclusion being true. The truth of the conclusion relies on the premises being accepted. Therein lies the problem of living a 'logical' life. The premises must be accepted, and a person accepts or rejects a premise based on their beliefs, past experiences and observations. Nobody ever believes they are being illogical or irrational.
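(To make that validity-versus-truth distinction concrete, here is a small sketch of my own - the two-car 'worlds', makes and colors in it are invented purely for illustration. An argument is valid when its conclusion holds in every possible world in which its premises hold; it is sound only when, in addition, the premises actually hold in the real world.)

```python
from itertools import product

MAKES = ["Honda", "Ford"]
COLORS = ["blue", "red"]

def premise_1(world):   # "My car is a Honda."
    return world["my_car"][0] == "Honda"

def premise_2(world):   # "All Hondas are blue."
    return all(color == "blue" for make, color in world.values() if make == "Honda")

def conclusion(world):  # "My car is blue."
    return world["my_car"][1] == "blue"

# Enumerate every possible two-car world (my car plus a neighbor's car),
# where each car is a (make, color) pair.
worlds = [
    {"my_car": mine, "neighbors_car": theirs}
    for mine, theirs in product(product(MAKES, COLORS), repeat=2)
]

# Validity: in every world where both premises hold, the conclusion holds too.
print(all(conclusion(w) for w in worlds if premise_1(w) and premise_2(w)))  # True

# Soundness also requires the premises to be true of the actual world. Suppose,
# purely for illustration, that the actual world contains a red Honda:
actual_world = {"my_car": ("Honda", "red"), "neighbors_car": ("Honda", "blue")}
print(premise_2(actual_world))  # False: 'All Hondas are blue' fails, so the argument is unsound
```

In other words, the form of an argument can be impeccable while a single false premise stops it from telling you anything true about reality - exactly the gap between validity and truth described above.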
Well okay - we can all play with logical abstractions when there is no correspondence to reality, and come up with such nonsense as Zeno's paradox or *shudder* Anselm's ontological argument. However, if your premise is that logic is a poor tool to apply to reality, then I can only ask... what conceptual tool would you propose to replace it with, in order to help better our understanding of reality?