In my last post, on Richard Dawkins's fine collection of science writing, I wrote about hypothesis testing - a part of the scientific method that helps us build our body of knowledge about the world through disciplined comparison. This is how it works. Scientists pit a hypothesis that says what they think is not true (the null hypothesis) against an alternative hypothesis (their own). For example, if a scientist is testing the effect of a drug to cure the common cold, he might take two groups of people with nasty colds, give one the drug and the other a sugar pill. His claim might be that those who took his drug will sneeze fewer times a day than those who took the sugar pill. His null hypothesis would state that there will be no difference between the two groups - that both will sneeze the same number of times after taking the pill. This is the result he does not want. If his experiment is successful, he cannot prove his hypothesis per se, because no one can test every instance of something, but what he can do is reject the null hypothesis to some degree of certainty. It is up to not just this probability, but also to the experimental design, to say that the reason the sneezing decreased was the drug and not something else. The design would have to account for other explanations, such as the possibility that the cold simply ran its course on its own - something science calls controlling the experiment.
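For the statistically curious, the drug-trial logic above can be sketched in a few lines of Python. This is a minimal illustration using a permutation test - one simple way of putting a number on "rejecting the null hypothesis to some degree of certainty." The sneeze counts here are invented for illustration, not real data:

```python
import random
import statistics

random.seed(0)

# Hypothetical sneezes per day for each subject (invented numbers).
drug = [4, 3, 5, 2, 4, 3, 2, 4]       # group that took the drug
placebo = [6, 5, 7, 5, 6, 4, 7, 5]    # group that took the sugar pill

# The observed effect: how many fewer sneezes in the drug group, on average.
observed = statistics.mean(placebo) - statistics.mean(drug)

# Null hypothesis: the labels 'drug' and 'placebo' make no difference, so any
# relabeling of subjects is as likely as the one we saw. We shuffle the pooled
# data many times and count how often chance alone produces a gap this large.
pooled = drug + placebo
n = len(drug)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"observed difference: {observed:.2f} sneezes/day, p = {p_value:.4f}")
# A small p-value lets the scientist reject the null hypothesis with some
# confidence - it never proves the drug works, exactly as described above.
```

Note that even a tiny p-value says nothing about *why* the groups differ; ruling out rival explanations is the job of the experimental design, not the arithmetic.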
Why would scientists work so hard to address explanations other than the one we want? There are two reasons. The first protects against our influencing the outcome of an experiment in subtle ways because we want success so badly (scientists fall prey to the same psychological pitfalls as any other human). The second is one of logic and safety. Say you wanted to reach some books on a very high shelf and you had no ladder, but there was some wood, a hammer, and a few nails. You build a 4-step step-stool, but you have no aluminum braces and only a single nail for each join. Would you a) proudly throw your finished stool down in front of the right spot on the shelf and jump up to the top step, standing on your toes to reach that book, or would you b) put some weight on it with your hands first, testing the stool before climbing on it? If you are like a scientist, you will opt for the second choice, preferring to break the stool with your hands rather than impulsively leap to the top step and risk breaking your neck. It is more logical to attempt to break the stool safely than to assume it is functional because you would like that book on the top shelf so badly. It is not that we want the stool to break; it is that we want to know with a great deal of certainty that it will not. So we try to break it by putting at least as much pressure on it as it would likely bear under typical use.
That long explanation of hypothesis testing was a preamble to a point I wanted to make inspired by another couple of excerpts Dawkins chose in The Oxford Book of Modern Science Writing, these by biologist Francis Crick and zoologist Matt Ridley. That is: scientists are good at doing other things too. One strength of a great creative-scientific mind is the way it can use its body of accumulated knowledge, together with its unique perspective, to see patterns in the world that others have not seen. The first excerpt begins with a sentence from Watson and Crick's 1953 paper on DNA:
'It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.'

So creative scientists are also good at something that might be called informed musing. In fact, most creative people of all kinds are. I might add to Dawkins's cautionary concluding sentence that a responsible reporter of science will always let you know which they are doing - testing a hypothesis or musing.
The quality of mind that enabled Watson and Crick to race ahead of their laboratory-based rival Rosalind Franklin is well demonstrated by that sentence, and it is shown again in the extract I have chosen from Crick's book Life Itself: Its Origin and Nature (1981). Watson and Crick were not only concerned with finding out how things actually are - although that was of course their ultimate goal. They also kept in mind the way DNA ought to be if it was to do its job as the genetic molecule, and this gave them a short cut, which was ignored by the painstaking Rosalind Franklin...it is in general true that deep cogitation on the way nature ought to be constitutes a good prelude to the eventual investigation of the way it actually is. Only a prelude, however: the ultimate test of an idea is not its elegance but how well it explains reality.
I have not read Matt Ridley's 1999 book Genome, but Dawkins describes its creative structure - 23 chapters, one for each of our chromosomes, each chapter extrapolating upon a theme inspired by what is known about the functions of that chromosome. If the excerpt is any indication, this makes for some rich cross-fertilization among genres - in the case of this chapter, biology, physics, and information processing. The short chapter careens from Erasmus Darwin's 1794 prediction that living filaments may be the shared precursor to all organic lifeforms - an amazing notion for its day! - to the idea that lifeforms, unlike closed systems, which proceed from states of order (requiring more energy) to states of chaos (requiring less),
...build packets of order and complexity called bodies but at the cost of expending large amounts of energy. In Erwin Schrodinger's phrase, living creatures 'drink orderliness' from the environment.

Isn't that fantastic? He goes on to connect the building of a body with the need for information. That information must be storable (the DNA of our genome) and must be both readable and replicable. Less than two pages later, Ridley speaks of the year 1943 as the time when diverse ingredients began to coalesce in a way that would eventually transform how we know our world. Here is what I mean by a unique mind being able, through its own peculiar fund of knowledge and its uniquely creative disposition, to see what no one else can. Ridley writes of what Watson and Crick were up to, the tortures Josef Mengele was enacting at Auschwitz, the work of Oswald Avery that prefigured the connection of DNA to heredity, and the brilliant Alan Turing, who was creating a computing machine that could also store information, modify it, read it to enact functions, and replicate it. What Ridley sees and writes of with panache is the excitement in disparate disciplines that prefigured the connecting of biological code to heredity. It is a visionary discourse that exemplifies what I wrote of in the last post inspired by Dawkins's collection: that strong feeling precedes scientific inquiry.
I have also continued my travels with another unique mind - that of Richard Bausch - in his latest story collection, Something Is Out There. "Son and Heir" is a devastating portrait of the dissolution of the aimless son of a college president. Lyndhurst, the son, is a pitiable character whose combination of pride and inherent mistrust leads him to make a series of poor judgments. This mistrust is predicated on his having witnessed his respected father's infidelity toward his mother alongside the public continuance of their marriage - a hypocritical facade - in response to which Lyndhurst has developed over time his own facade of not caring. Here too we see information stored and replicated, this time in a maladaptive pattern, no doubt initially developed to protect Lyndhurst from hurt; but in the long run he is never able to modify his mistrustful picture of the world so that he might experience it as a place where effect can predictably follow cause. Bausch's portrait of Lyndhurst's anxiety and hurt is set in the context of the great power failure during the intense heatwave the Eastern half of the U.S. experienced several summers ago. He cooks his characters to the boil.