What is Science?
By John McCormick
THAT MAY SEEM A STRANGE question, but it is an important one, because even a passing knowledge of the arguments raised in the philosophy of science reveals both the strengths and the severe limitations of today’s science.
Is science an open-ended quest for the most basic truths of nature, or a closed (mostly boys’) club that only listens to ideas from card-carrying Ph.D.s, and then only to ideas that fit comfortably within current mainstream thought?
With science and technology playing such a basic and ever-increasing role in our lives, and promising to be the only way civilization can hope to survive the challenges of population growth and resource depletion, a better understanding of just what science is, what it is supposed to do, and how it really works is quite possibly even more important than a basic knowledge of what science has discovered.
Bad Science Can Reach Out and Touch Anyone
Want an example of why you should care? “The Economist” of October 19, 2013, the “How Science Goes Wrong” issue, reported that only 11 percent of a set of landmark cancer studies could be confirmed when other researchers tried to replicate them.
So what? Well, clinical trials for everything from heart valves to anti-cancer drugs are conducted based on published research. The same issue of “The Economist” reported that in the last decade 80,000 patients joined clinical trials based on flawed research which was later retracted.
That is an example of poor or even falsified research, but what if even the most elementary laws of physics are based on mere circular reasoning, the sort of thing that gets you an F in any logic course?
Are we putting too much faith in the basic truths that scientists announce with great fanfare every few years?
Even more important, science may often be asking the wrong questions, conducting the wrong tests, and therefore producing the wrong results, narrowing rather than expanding our view of nature.
This leaves plenty of room for speculative fiction that doesn’t actually violate basic scientific concepts, and it suggests that the technological underpinnings of our entire civilization may be a lot shakier than many people realize.
One of the first great examples of circular reasoning explored in depth by philosophers and thoughtful scientists with a philosophical bent began with Sir Isaac Newton and his famous three laws.
The first law, which you may remember from high school science or the last time you perused Newton’s “Principia,” is a lot more complex than most people realize.
When viewed in an inertial (non-accelerating) reference frame, an object either remains at rest or moves at a constant velocity unless acted upon by an external force.
This is commonly known as the definition, or law, of inertia. It is also considered by some scientists to define the “inertial reference frame.”
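Stated symbolically (my shorthand, not Newton’s original wording), the first law simply says that when no net outside force acts, the velocity does not change:

\[ \mathbf{F}_{\mathrm{net}} = 0 \;\Longrightarrow\; \frac{d\mathbf{v}}{dt} = 0 \qquad \text{(measured in an inertial frame)} \]

Notice that the statement leans on two terms, “force” and “inertial frame,” that the law itself is also being used to define; that is exactly where the trouble described below begins.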
Ernst Mach, the physicist who studied shock waves and after whom the speed of sound (Mach 1) is named, was also a noted philosopher of science whose work strongly influenced the logical positivists. He took a serious look at what today we might initially call a “semantic” analysis of the first law, and he quickly expanded it into a question of exactly what the entire law says, how it was derived, and whether it can even be defended as a scientific law at all.
NOTE: Mach’s philosophical analysis of Newton’s laws is probably the first and certainly one of the most powerful examples of how important the philosophy of science is to the scientific community. The arguments made by Mach presage those developed by Einstein. Of course Mach is also famous for his refusal to accept the concept of atoms because they were too small to measure directly. He would probably change his mind today, now that we can actually produce images of individual atoms.
In short, Mach’s argument goes like this: the law of inertia says that, unless acted upon by outside forces, a body continues in its present state of motion (its speed and direction, if it is moving at all). Ever since Newton’s time, physicists and engineers have used this law to explain the motion of everything from cannonballs to planets. But, since the law itself was derived from observations of the motion of bodies, using it to demonstrate the existence of gravity, and especially to explain motion, smacks of circular reasoning.
A philosopher of science would say this law fails to provide any reference to empirical evidence of the force of gravity.
To sum up, a physicist or astrophysicist uses the first law to explain the effect of the Moon’s gravity on the Earth’s tides. But the law was based on mere observation of the acceleration of one body in relation to another, so what does that actually explain? Does it explain anything? Is the law valid? Is it even meaningful, as it only equates one observation to another and not to some basic theory?
Even worse, this law is used to explain the motion of distant galaxies, where no test is possible, and that is the only place the three laws might still be proven wrong, because all the close-up examples have already been tested and shown to hold true. In other words, the conditions under which the three laws could be proven wrong are not open to testing; hence, we are only guessing when we apply those laws to distant stars and galaxies.
Another example is the astronomical red shift. We say galaxies are moving away from us because their light is shifted toward the red end of the spectrum. “Tired light” hypotheses have been proposed suggesting that the shift is instead due to light somehow losing energy over vast distances. That idea is commonly rejected, but it can’t be disproven in any direct way.
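For reference, the red shift itself is just a comparison of wavelengths; the standard interpretation then reads it as a recession velocity, at least for modest shifts:

\[ z = \frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{emitted}}}{\lambda_{\mathrm{emitted}}}, \qquad v \approx cz \quad \text{for } z \ll 1 \]

The measurement itself is only the left-hand equation; the jump to a velocity is the interpretation the “tired light” ideas were questioning.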
The argument about Newton’s first law goes much deeper, but a bit of background will help you understand the problems.
The Meaning of Meaning
Because science consists mostly of asking questions about nature, it is important to know which questions are actually meaningful.
Some well-known questions are actually meaningless.
If a tree falls in a forest with no one around to hear it, does it make a noise/sound?
That is right up there with the question of how many angels can dance on the head of a pin, but with fewer variables.
In the tree question, if you tell me precisely how you define noise or sound, I’ll answer it instantly (actually, you’ll have answered it yourself).
The dancing angels question is only slightly harder to analyse, but if you define “angel,” “pin head,” and “dance,” I’ll undertake to give an answer and prove it. The question is often cited as evidence of the silly, time-wasting arguments of the Dark Ages, but my analysis wouldn’t have surprised the people who thought it up, even though the question causes vast confusion among lay people who think they understand it. In fact, serious theologians actually debated it because they were debating the nature of angels.
The problem caused by the inclusion of vague, undefined, perhaps undefinable terms in a question should be obvious.
Now apply this to Newton’s first law. Probably very few physicists would argue if I stated it this way: “An object will remain at rest or continue moving in the same direction and at a uniform speed unless acted on by an outside force.”
Gravity was proposed by Newton to be the outside force acting on everything from apples to planets, but what is the empirical (verifiable by observation and not mere logic) evidence for such a force? If the only evidence is just observations of changes of motion by planets (or apples), then isn’t the first law a tautology, a mere restatement in different terms, more suitable for the department of redundancy than for a great scientist?
But put that problem aside for a moment and consider undefined terms. Just what is a “uniform speed”? The usual definition refers to equal distances travelled in equal (and arbitrarily small) time periods. It can’t be an average because, as Galileo pointed out, over a long enough time period even an accelerating body can show the same average speed as one moving uniformly.
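Put in modern notation (my gloss, not Newton’s), uniform speed means the ratio of distance covered to time elapsed is the same for every interval you choose, however small:

\[ v = \frac{\Delta s}{\Delta t} = \text{the same constant for every time interval } \Delta t \]

The distance in the numerator is the easy part, as the next paragraph notes; the time interval in the denominator is where the argument gets interesting.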
Let us skip over the problem of defining length or distance because that can be solved easily using a solid rod to mark out distances.
But what are equal times? Do you measure them with a pendulum or the motion of a planet? If so, how do you prove different times are truly “equal”? I won’t belabor the point, but Newton addressed this by stating that “measurable” (as opposed to absolute) time is defined by uniform motion. So he defined uniform motion by referring to equal time periods, which are themselves measured by uniform motion. Absolute circular reasoning or, in less charitable terms, nonsense.
What Ernst Mach did was point out that equal (or absolute) time intervals are themselves defined only by uniform motion, and so on, which moves the first law out of empirical science and into the realm of metaphysics.
Although you probably never heard of this argument, it was taken very seriously by top scientists. You probably have heard of the next stage in the same philosophical analysis: Einstein was essentially extending Mach’s argument when he showed that the concept of absolute simultaneity is meaningless.
Scientists and philosophers have written volumes on this topic, but I believe I have explained enough for you to see that what you thought was a very basic and simple statement of a law of nature can easily be picked apart and shown to rest on little more than a circular argument. The mere fact that it has proven extremely useful isn’t the same as saying it is based in science. It is really based in metaphysics, which merely means “beyond physics,” a subject very familiar to Newton.
In fact, Newton would have said he studied Natural Philosophy which included physics, other science, and also metaphysics. Hence his seminal work, “Mathematical Principles of Natural Philosophy” or “Philosophiae Naturalis Principia Mathematica.”
If such a seemingly simple scientific law can result in decades or even centuries of argument over whether it is even meaningful or merely circular, consider how much work is needed (and seldom done) to show whether a really complex scientific theory actually says what it seems to say.
The Verifiability Principle of Meaning
Although religion deals with many questions that have no answer, many serious philosophers would say those questions are actually meaningless, and they prefer to spend their time on questions that have meaning. That isn’t the same thing as saying the religious questions aren’t important, just that they can’t be answered by any conceivable observation.
If you think about it for a minute, you will realize that scientists should also concentrate on questions that have actual “meaning,” in the sense of the verifiability principle of meaning.
That probably sounds like complex nonsense, but it is very straightforward and easy to understand. When applied to scientific research, it amounts to saying that results must be replicable by other scientists.
The Scientific Mentality
How does a scientist think? By that I don’t mean does he/she use math or behave like Mr. Spock, completely detached from real life.
For me, scientific mentality is simply the predilection to suspend belief in the reasons for or cause of some event until some appropriate evidence is provided. Just what evidence is “appropriate” depends on the nature of the proposition. But some widely agreed upon evidence must be available or potentially obtainable for propositions to be considered in the realm of science.
The same mentality is also at the heart of all philosophy, although the subject of enquiry is different. So philosophy and science are much more closely related than most people think. In fact, they can be viewed as two branches of the same discipline.
Definition of the Verifiability Principle of Meaning: a question is only meaningful if there is some conceivable way to determine the correct answer.
Note that the definition doesn’t say the answer can be determined, just that it may be possible to find proof. It also fails to address the truth of the statement. A false statement can be as meaningful as a true one.
For example, in science, the question “Did God create the world?” is meaningless unless you define in detail four terms: “God,” “create,” “world,” and, surprisingly, “did.” The last because if you don’t know how to define causality or time well enough to determine whether something happened before, after, or simultaneously (impossible according to Einstein), how can you begin to separate cause and effect?
In science, as in most philosophy, a question is only considered meaningful if you can describe some evidence which, if it were found, would show the answer to be true (or false).
Epistemologist and philosopher of science Arthur Pap famously pointed out that a belief is merely a prejudice unless you can describe some evidence you would accept as disproof.
It may come as a surprise to you to learn that most “laws” of nature are, at best, only partially confirmed hypotheses. Consider the “universal” law of gravitation, or the laws of thermodynamics, or most other scientific “laws.” A key element of all of them is that they are presumed to be universal. Hence we say distant galaxies are moving away from us because of the red shift, rather than sitting still with respect to the solar system while their light gets “tired” over long distances and the light quanta lose energy, which is another way of saying they shift toward the red, less energetic end of the spectrum.
We can’t know if those laws are truly universal because we can’t test every possible circumstance.
We can say they are highly probable—not, by the way, in the same sense that quantum mechanics deals in probabilities.
We can also say that statements about natural laws are “meaningful” because we can think of evidence that would disprove them. But we have never seen that evidence, even though we can easily describe it. For instance, an isolated energy system becoming more organized on its own would disprove the second law of thermodynamics. Finding an ordinary (non-antimatter) body that was gravitationally repelled rather than attracted would force us to redefine gravity.
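For the entropy example, the claim being tested is the second law of thermodynamics, which for an isolated system says the entropy S can never decrease:

\[ \Delta S \geq 0 \qquad \text{(isolated system)} \]

A single well-documented isolated system with a persistently negative change in S would be exactly the kind of disproving evidence that makes the law a meaningful, and so far unrefuted, statement.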
What is a Theory?
Nothing is more basic to science than the concept of theory, but most people have no real understanding of what it takes to be a theory.
For example, “It works in theory but not in practice” is a patently false statement. It’s a meaningless string of words to a scientist or philosopher of science, not because there is no possible evidence that would prove or disprove it, but because of the definition of a theory.
A real scientific theory must meet several criteria, including a highly relevant one: to remain accepted, a theory must hold for every observed event. Hence, if it doesn’t work in practice, even once, it is no longer accepted as a theory.
A major point of concern in scientific circles is whether any data are being excluded by researchers. Everyone does it when some wildly out-of-range result is seen. In gambling, that is known as stacking the deck. Many published studies can’t be replicated, despite the fact that other researchers are relying on their results. Another problem arises when you look for negative results, that is, studies which either disprove some piece of research or theory, or which fail to confirm it. Those studies aren’t “sexy” and are seldom published, perhaps leading other researchers to waste time and money repeating work that has already failed.
Another defining characteristic of any theory is that it must generate predictions that can be evaluated using some possible test that would prove it true or false. That is where a theory is closely related to the philosophical concept of meaningfulness.
Here is a concrete example. According to Einstein’s theory of gravity, a beam of light passing close to a massive body will bend.
It took a lot of work, but this was observed in photographs of stars near the edge of the sun taken during a total eclipse. That combination of prediction and verification turned Einstein’s hypothesis into a theory.
That was a valid and highly useful test because it lent weight to his other theories and greatly increased our understanding of nature.
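For the curious, the prediction itself can be written down compactly. In Einstein’s theory, a light ray passing a mass M at a closest distance b is deflected by an angle

\[ \theta = \frac{4GM}{c^{2}b} \]

which, for a ray grazing the edge of the sun, works out to roughly 1.75 seconds of arc, about twice what a naive Newtonian calculation gives. That factor of two is what the eclipse photographs were able to check.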
But sometimes an experiment is most useful if it doesn’t produce the expected results.
In fact, sometimes one of the worst things that can happen in science is to get exactly the results you expect when you do an experiment.
A very expensive example of this occurred recently when the scientists at CERN found the Higgs particle and it was almost exactly as predicted (mass between 120 and 130 GeV).
You might think that would be cause for great celebration, but after the initial congratulations came the realization that spending nearly $10 billion didn’t really tell us much we didn’t already know.
Unlike the light-bending confirmation of Einstein’s hypothesis, which was the first real test proving a new theory, finding that the Higgs Boson existed and was exactly as expected was merely one more confirmation of a widely accepted theory known as the Standard Model. The Higgs results mean that the sub-atomic particles I memorized in the late 1950s are still valid (although there are a lot more of them now).
The scientists at CERN are in the position of having spent a decade and billions of Euros building the world’s largest particle accelerator only to find what they expected. The result further confirms an already widely accepted theory but opens up no new avenues for research.
If they had failed to find the Higgs Boson or found it to have a significantly different mass than predicted, then a vast amount of new work would have begun to find a theory to replace The Standard Model.
Stringing People Along
Many scientists have resisted string theory, not because they have a better explanation of nature, but because, so far, no one has proposed a test that could either verify or disprove it.
As such, string “theory” isn’t a theory—which isn’t the same thing as saying it isn’t correct or true. Instead it is untestable, a cardinal sin in physics.
String “theory” would more properly be called string “hypothesis,” that being the correct term for a conjectured explanation of a natural event.
Back to Newton
In addition to testability and repeatability, the other major criterion for a hypothesis to be accepted as a theory is that it should be “elegant,” a difficult concept to quantify. The phrase “I’ll know it when I see it” fits here perfectly. In addition to being elegant, a theory needs to be simple, another term of art that is impossible to define completely. An example is Newton’s laws of motion which, despite all the philosophical considerations showing the first law to be a mere tautology, are still widely used. We also still refer to Newton’s “laws” despite knowing that they are only an approximation of Einstein’s theories. Why? Because they are elegant.
By now many readers may have noticed that there are a lot of very specific conditions comprising scientific method, theories, and experiments. But how do we know they are the right conditions? Because they work? That’s circular reasoning again.
One major problem with science as we know it today is that, in general, scientists only find what they are looking for. Like the police detective who decides early on that a certain person is guilty and focuses entirely on proving it, a scientist with a hypothesis greatly narrows the potential experiments that will be constructed and restricts the expected results. Experiments are designed to measure something. Any other data that the same experiment might generate are ignored because no attempt was made to collect them.
So even if a hypothesis is completely off base in its underlying reasoning (such as thinking gravity is a force rather than merely the shape of space), it can still lead to a “proven” theory (e.g., Newton’s laws of motion), because it defined a narrow range of predictions which were then tested and found to support the theory.
A different hypothesis could have led to entirely different predictions and tests.
Junk Science?
Some philosophers and scientists even question the entire basis of modern scientific research because of the social structure of the scientific community and because the research is too narrowly focused.
In 1962, physicist and historian Thomas Kuhn (“The Structure of Scientific Revolutions”) argued, among other things, that scientists form a self-regulating guild that excommunicates dissenters and that normal science is preoccupied with what he called “puzzle-solving.”
Dr. Kuhn emphasized the importance of paradigm shifts in scientific advancement. Essentially, he pointed out that science doesn’t progress by small increments but by giant leaps or changes of paradigm such as that which occurred when physics accepted relativity and again when it accepted quantum mechanics.
Because science is very much a closed club, people outside the establishment normally are ignored as are those members of the club who propose ideas outside of the currently accepted paradigm.
Viewed in hindsight, it seems incredible to me that a radical hypothesis presented by a patent office clerk was even given a cursory glance by mainstream scientists. It was probably only considered because of a decade or more of experimental evidence that the current theories about the ether were wrong.
Obviously this only brushes the edges of the questions surrounding the validity of the current scientific paradigm, but one thing it means to me is that there is plenty of room in science fiction for new ideas, even ideas which may seem to be based on nonsense science. A submarine like the Nautilus captained by Verne’s Nemo was once considered an outrageous idea, and who besides Arthur C. Clarke thought it was practical to use communication satellites?
Scientific paradigm shifts are often the basis of great science fiction stories.
It seems unlikely that using today’s scientific theories we will ever have practical teleportation, faster than light travel, cold fusion, or anti-gravity. What we need is a completely new paradigm. Fortunately the history of science shows that we are likely to have another new paradigm and probably fairly soon.
Further Reading
The Science of Mechanics, Ernst Mach, 1919
Stanford Encyclopedia of Philosophy
The Structure of Scientific Revolutions, Thomas S. Kuhn, 1962
John McCormick is a trained physicist, science/technology journalist, and widely-published author with more than 17,000 bylines to his credit. He is a member of The National Press Club and the AAAS. His full bibliography can be accessed online.