Philosophers of science have long puzzled over what they call “the” demarcation problem: how to distinguish science from pseudoscience. In the early 20th century the Logical Positivists proposed the verification principle, that a statement is meaningful and scientific only if it can be empirically verified. Karl Popper then proposed a related idea, that a scientific claim is one that can be falsified.
There is a lot of truth in both proposals, but neither works if interpreted too narrowly. The problem is that no statement can be verified or falsified in isolation. Science constructs whole webs of ideas, and it is the whole construct that is then compared to empirical data, to be adjusted and improved as necessary. Further, a statement such as Newton’s law of gravity can never be verified in a general sense; all we can say is that it worked well enough — as part of the wider web of ideas — in the particular instances we tested. Nor is it straightforward to falsify such a law. If our overall model is inconsistent with an observation then we could indeed alter one of the laws; but we might instead overcome the inconsistency by altering some other part of the overall model, or we might doubt the reliability of the observations.
For example, Newton’s law of gravity was indeed found to be inconsistent with the motions of stars at the edges of galaxies (and since gravity is very weak at the edges of galaxies, and Newtonian gravity is the weak-field limit of General Relativity, Newton’s law should work there). So either we must change the law of gravity (perhaps to Modified Newtonian Dynamics) or we hypothesize that there is additional “dark matter” that we have hitherto overlooked, and which then makes the gravity law consistent with the observations.
As another example, when observations at the Gran Sasso laboratory suggested that neutrinos were traveling faster than light, in violation of Einsteinian relativity, the resolution was not that relativity is false, but that the observations were faulty owing to a poorly connected cable.
The realisation that we compare whole webs of models to empirical reality — what philosophers call the “Quinean web” account of science — and that we have considerable latitude in how we adjust models to ensure a match, such that we can never subject isolated statements to strict verification or falsification, superseded the Logical Positivist and Popperian accounts.
This, however, then landed philosophers with “the” demarcation problem. After all, pseudosciences are adept at avoiding falsification. “It never works when a skeptic tests it!” is a blatant but typical get-out. And “Sometimes God says no” is part and parcel of the claim that God answers prayers. Further, the believer in homoeopathy or crystal healing can simply reject the entire concept of double-blind controlled testing and proceed on anecdotes and the placebo effect. Given sufficient special pleading and ad-hoc excuses, their world view can be satisfactorily reconciled with any facts about the world.
As a result some philosophers have concluded that there is no straightforward method of distinguishing between science and pseudoscience (while the post-modernists concluded that all such world views are merely social constructs, with no more or less validity than any other). Meanwhile, through all of this, scientists themselves proceeded largely oblivious to the discussion, having no problem in practice in distinguishing science from pseudoscience.
As I see it, the distinction can be stated straightforwardly, as being a matter of quality control. The principle is well stated by Richard Feynman, in a speech about distinguishing good science from bad science that is still worth reading over 40 years later. Feynman sums up science by saying:
The first principle is that you must not fool yourself — and you are the easiest person to fool.
All humans are prone to fooling themselves, and many methods of science (such as double-blind testing, or using statistics rather than gut feeling) are about trying to reduce the influence of human bias on the end result.
Good science is an aspiration: we try for the best quality control, with the fewest mistakes and the smallest influence of bias. Bad science is science done sloppily, lazily, or in a biased way. Pseudoscience results when wishful thinking has run so rampant that it dominates the enterprise.
“Quality control” provides a one-phrase demarcation between scientific and un-scientific approaches to advancing knowledge. Of course quality control is a matter of degree, and thus does not produce a sharp demarcation, but that is appropriate. All scientists know that there is plenty of sloppily or badly done science that still qualifies as “science” in the sense that it is published in reputable scientific journals.
It is equally obvious that much high-quality advancement of knowledge occurs in fields that are not usually regarded as “science”, such as history. But that division is merely an arbitrary human one, a tradition of regarding human concerns as somehow distinct from the natural world. If we’re discussing the basics of epistemology we should not make any distinction between high-quality work in history and high-quality work in historical sciences such as geology or paleontology or much of astronomy.
One could object that saying “quality control” doesn’t amount to much of a criterion unless one then specifies how the quality control is done. That’s a fair point, and there is no simple set of rules that amounts to quality control. Instead, science has developed a whole set of methods and attitudes for arriving at models that have the most explanatory and predictive power.
Many accounts of the methods of science are too simplistic. For example an emphasis on experiment and replication is good where it can be done, but is less applicable to historical sciences. Feynman’s instruction not to fool oneself governs everything, but from there the set of “scientific methods” appropriate to any situation is something that has been worked out by seeing what works, and thus is itself a product of science. The ultimate test is what has the most explanatory power and, in particular, what has the most predictive power. In the end, science is verified because it enables technology that works and because it enables us to predict things that we didn’t already know.
The above example of gravity is instructive. Modified Newtonian Dynamics does an OK job of explaining galaxy rotation curves (that’s what it was designed to explain), but if you then try to apply it to other areas, such as explaining the fluctuations in the Cosmic Microwave Background, it doesn’t work at all well. The dark-matter model, in contrast, has proven to be far more robust and useful in explaining and predicting new observations that have come along, to the extent that the existence of dark matter is now widely accepted.