Corrupted Science: Fraud, Ideology and Politics in Science (Revised & Expanded)
by John Grant

Ebook · 717 pages · 10 hours

About this ebook

A searing exposé of the misuses and misrepresentations of science from the time of Galileo continuing through to the present day, this new edition includes updates on the asbestos industry, the chemicals industry, the sugar industry, the agriculture industry (the abuse of antibiotics), and the automobile industry (lead in gasoline). The final chapter has been expanded to include the full-blooded assault on science mounted by the Trump administration.
Language: English
Release date: May 1, 2018
ISBN: 9781947071032


Book preview

Corrupted Science - John Grant


1

FRAUDULENT SCIENTISTS

The prominent UK surgeon Paul Vickers remarked in a 1978 lecture:

What the public and we [doctors] are inclined to forget is that doctors are different. We establish standards of professional conduct. This is where we differ from the rag, tag and bobtail crew who like to think of themselves as professionals in the health field.

High words, and arrogant ones, but colored in hindsight by the fact that just a few years later, in 1983, Vickers, in association with his mistress, was charged with—and later convicted of—murdering his wife Margaret, using professional conduct to do so: he poisoned her with an anti-cancer drug so that her death was initially attributed to natural causes.

Arrogance and hypocrisy are recurrent themes, although far from the only ones, in considering fraudulent scientists: it is as if some get so engaged in their own world that the external one becomes a sort of secondary reality, one in which events have an almost fictional status and where consequences need not be considered.

But there are plenty of other motivations for fraudulence in science.

In 1993 the physicist John Hagelin—three times US presidential candidate for the Natural Law Party—gathered together in Washington, DC, some 4,000 transcendental meditators with the aim of reducing the violent-crime rate in that city by 25 percent. Unfortunately, the police records showed that Washington’s murder rate for that year actually rose. Or did it? In 1994 Hagelin announced he’d done a proper analysis of the figures and, sure enough, they showed a 25 percent decline.

On the face of it, this might seem like an instance of the fraudulent abuse of statistics, but was that really the case? It seems unlikely. Far more probable is that Hagelin, unable to believe the results of his experiment, quite unconsciously read into the statistics what he wanted to see there.

The borderlines between fraud, self-deception, gullible acceptance of the fake, and the ideological corruption of science can be very blurred. In theory none of them should happen; in practice, all too often, they all do, sometimes in combination. Where does one draw the line between, say, self-deception and ideological corruption? The latter may be deliberate and self-serving, but it can equally well be a product of the same desire to have reality obey one’s wishes that seems to have driven Hagelin to derive an alternative message from those crime statistics.

All of the categories of scientific falsification are dangerous: people can die of them—either directly, as with fraudulent cancer cures, or indirectly, as when distortion of science about supposed racial differences reinforces irrational prejudices and leads to violence between peoples. The Nazis had their scientific proof that the Jews were subhuman, and so murdered them by the million. In eighteenth- and nineteenth-century North America the white man had proof of the subhuman nature of black and red races, so enslaved one and waged a campaign of near-genocide on the other. The pseudoscience of eugenics not only helped fuel the Nazis’ murderous spree but also, before that spree shocked sense into people, looked well set to initiate a culling in North America of the mentally ill, the socially inadequate, and, of course, those of lesser races.

We live in an age when the falsification of science, in particular the ideological corruption of science, has reached a new level of importance—the very survival of our species may be threatened by it.

Not all falsifications seem of such significance—although one could make the case that false belief in itself does considerable damage by way of a sort of intellectual pollution, a brain rot that hampers all our other efforts at progress, or by generating enough noise that genuine knowledge becomes obscured. What may start as a relatively harmless hoax or fraud can be compounded through human gullibility or self-deception to the point where it assumes an importance far beyond anything the original perpetrator could have conceived.

On a small scale, this is what happened in the late sixteenth century in one of the oddest spats in the history of science, the infamous case of the Silesian Boy. This child was born on December 22, 1585, and was discovered, when his teeth grew in, to be possessed of a golden molar. How could science explain this?

The best-regarded hypothesis of the time was produced by Professor Jacob Horstius of Helmstädt University, author of the definitive De Aureo Dente Maxillari Pueri Silessii (1595). Horstius’s hypothesis managed to conflate astrology with a medical belief widely current at the time: the notion that, if a pregnant woman became covetous of something she saw, the next time she touched herself on her body she would generate an appropriate birthmark on the corresponding bodily part of her as-yet-unborn infant. Horstius claimed the astrological conditions at the time of the Silesian Boy’s birth (under the sign of Aries with the planet Saturn in conjunction) were so favorable that the boy’s body began to produce not bone (as teeth were thought to be) but gold. The gold had manifested as a tooth because the boy’s mother, while carrying him, must have coveted something golden she’d seen and not long afterward stuck her finger in her mouth. Horstius went on to theorize, obscurely, that the tooth was a sign from Heaven that the Turks would be kicked out of Europe.

Another book, this time by Regensburg physician and alchemist Martin Ruland, emphatically seconded Horstius’s hypothesis. Others felt driven to write volumes in rebuttal, among them Duncan Liddel, who pointed out an elementary flaw in the astrology of Horstius’s proposal: the sign of Aries falls in spring, not in December. The Coburg chemist Andreas Libavius produced a book about not the phenomenon or the theory but the controversy itself.

It finally occurred to someone that it might be a good idea to examine the boy and his tooth. It was at this point that it was discovered the whole affair was a hoax: the parents had jammed gold leaf over the child’s molar.

Not many years later, precisely in order to avoid such embarrassments, Francis Bacon set forth his version of the scientific method, advocating the collection of empirical evidence before you advance hypotheses. For centuries people accepted the principle … but carried on much as before, believing what they wanted to believe and finding the proof where they could. It’s something we still do.

The idea that at least some blind people can distinguish different colors by touch dates back at least to the eighteenth century; there’s mention of it in Boswell’s Life of Johnson (1791). At that time it was believed minuscule differences in surface texture were responsible for the appearance of different colors; although these variations were too fine to be detected by most people, the nonvisual senses of the blind were known to be often more acute than those of the unimpaired. In different form, the idea reemerged in the nineteenth century, this time the underpinning being the notion that the different colors generated slightly different temperatures. One practitioner of the art in England was teenager Margaret M’Avoy, who was all the rage in 1816. A peculiarity of her uncanny ability which puzzled people was that it would function only in the light; in darkness her fingers were just as blind as anyone else’s. Her devotees helpfully explained that this was because everything looks equally black in darkness.

We wouldn’t be so easily fooled today, would we?

In the 1960s, when the fad for parapsychology was at a peak in the west, reports emerged from the USSR of various women who could see with their fingertips or other extremities. Several of the practitioners were soon exposed as frauds, and the credulous reports in western media tapered off rather abruptly when something very obvious was pointed out: stage magicians had been performing precisely similar feats for generations. Yet, where those reports were retracted at all, it was with relatively little fanfare, and so it’s still widely believed the women’s abilities were genuine.

The notion of the emotional life of plants can be traced to the polygraph expert Cleve Backster, the founder in New York, after a career with the CIA, of the Cleve Backster School of Lie Detection. In 1968 Backster published a paper in the International Journal of Parapsychology, “Evidence of a Primary Perception in Plant Life,” which claimed that, by in essence hooking plants up to a polygraph, he’d been able to show they possessed a primitive form of ESP: they displayed a reaction to the destruction of living cells nearby. Money poured in to help him establish the Backster Research Foundation, whose express purpose was to investigate further the ESP abilities of plants. The initial results from these researches were very positive, and it seemed Backster and his team had made a great breakthrough. However, other researchers couldn’t replicate the experimental results, and soon the claims were debunked. This did not stop the publication of several bestselling books on the subject of plant psychology—notably The Secret Life of Plants (1989) by Peter Tompkins and Christopher Bird. It seems Backster’s claims may have been the product of the fairly common phenomenon whereby perfectly rational researchers can unwittingly, and despite all self-imposed safeguards against bias, skew their results to favor their preconceptions. Yet some of the relevant books are still in print.

The two latter examples lead to another important tile in the falsification-of-science mosaic: the role of the print and broadcast media, which are almost always eager to trumpet sensational claims of the extraordinary and then, when the claims are shown to be bunkum, are near-criminally negligent in acknowledging the fact. This is frequently compounded in the modern era by the perversion that has grown up of the old idea of journalistic balance: the new faux-balance seeks to find a midway point not between two reality-based viewpoints but between a reality-based viewpoint (right or wrong) and one that is demonstrably false. The result in the minds of the audience—and, who knows, perhaps in the minds of the journalists too—is a fallacious perception that facts are somehow subject to debate. The attitude that everyone’s opinion is equally valid, no matter their level of ignorance or expertise—and certainly no matter what the reality actually is—is lethally dangerous in some areas of science.

BABBAGE’S REFLECTIONS

In 1830 Charles Babbage, best remembered today for his early work on the computer, published Reflections on the Decline of Science in England, and On Some of Its Causes. In this little book’s Chapter V there’s a subsection on the genesis of fraudulent science that’s as valid now as it was then—it’s almost a textbook-in-miniature of how fraud, deliberate or unconscious, can arise within the sciences.

There are several species of impositions that have been practiced in science, which are but little known, except to the initiated, and which it may perhaps be possible to render quite intelligible to ordinary understandings. These may be classed under the heads of hoaxing, forging, trimming, and cooking.

OF HOAXING. This, perhaps, will be better explained by an example. In the year 1788, M. Gioeni, a knight of Malta, published at Naples an account of a new family of Testacea, of which he described, with great minuteness, one species, the specific name of which has been taken from its habitat, and the generic he took from his own family, calling it Gioenia Sicula…. He gave figures of the animal, and of its parts; described its structure, its mode of advancing along the sand …

The editors of the Encyclopedie Methodique, have copied this description, and have given figures of the Gioenia Sicula. The fact, however, is, that no such animal exists, but that the knight of Malta, finding on the Sicilian shores the three internal bones of one of the species of Bulla [Bulla lignia] … described and figured these bones most accurately, and drew the whole of the rest of the description from the stores of his own imagination.

Such frauds are far from justifiable; the only excuse which has been made for them is, when they have been practiced on scientific academies which had reached the period of dotage….

FORGING differs from hoaxing, inasmuch as in the latter the deceit is intended to last for a time, and then be discovered, to the ridicule of those who have credited it; whereas the forger is one who, wishing to acquire a reputation for science, records observations which he has never made. This is sometimes accomplished in astronomical observations by calculating the time and circumstances of the phenomenon from tables. The observations of the second comet of 1784, which was only seen by the Chevalier D’Angos, were long suspected to be a forgery, and were at length proved to be so by the calculations and reasonings of Encke. The pretended observations did not accord amongst each other in giving any possible orbit….

Fortunately instances of the occurrence of forging are rare.

TRIMMING consists in clipping off little bits here and there from those observations which differ most in excess from the mean, and in sticking them on to those which are too small; a species of equitable adjustment, as a radical would term it, which cannot be admitted in science.

This fraud is not perhaps so injurious (except to the character of the trimmer) as cooking, which the next paragraph will teach. The reason of this is, that the average given by the observations of the trimmer is the same, whether they are trimmed or untrimmed. His object is to gain a reputation for extreme accuracy in making observations; but from respect for truth, or from a prudent foresight, he does not distort the position of the fact he gets from nature, and it is usually difficult to detect him. He has more sense or less adventure than the Cook.

OF COOKING. This is an art of various forms, the object of which is to give to ordinary observations the appearance and character of those of the highest degree of accuracy.

One of its numerous processes is to make multitudes of observations, and out of these to select those only which agree, or very nearly agree. If a hundred observations are made, the cook must be very unlucky if he cannot pick out fifteen or twenty which will do for serving up.

Another approved [recipe], when the observations to be used will not come within the limit of accuracy, which it has been resolved they shall possess, is to calculate them by two different formulae. The difference in the constants employed in those formulae has sometimes a most happy effect in promoting unanimity amongst discordant measures. If still greater accuracy is required, three or more formulae can be used….

In all these, and in numerous other cases, it would most probably happen that the cook would procure a temporary reputation for unrivalled accuracy at the expense of his permanent fame. It might also have the effect of rendering even all his crude observations of no value; for that part of the scientific world whose opinion is of most weight, is generally so unreasonable, as to neglect altogether the observations of those in whom they have, on any occasion, discovered traces of the artist. In fact, the character of an observer, as of a woman, if doubted is destroyed….

That last observation is pretty ripe. Babbage’s long-term colleague in his unsuccessful attempts to create a computer was Ada Lovelace. She wrote what can be regarded as the first computer program, and is thus generally accepted as a more significant figure in the history of computing than Babbage himself. There were plenty who doubted her character.
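Babbage’s “trimming” and “cooking” translate readily into modern terms. The toy sketch below is mine, not anything from Babbage or from this book: it fakes a run of observations and applies both treatments, showing that trimming leaves the average exactly where it was (which is why Babbage says the trimmer is hard to detect), while cooking (keeping only the fifteen or twenty readings that “agree”) can move the reported value to wherever the cook has resolved it shall be.

```python
# A toy illustration (not Babbage's own example) of "trimming" versus "cooking".
import random
import statistics

random.seed(1)
observations = [random.gauss(10.0, 1.0) for _ in range(100)]  # honest, noisy data
mean = statistics.mean(observations)

# Trimming: nudge the most extreme low and high readings toward the mean in
# matched pairs, so the average is untouched but the scatter looks smaller.
trimmed = sorted(observations)
for i in range(10):
    low, high = trimmed[i], trimmed[-1 - i]
    shift = 0.8 * min(mean - low, high - mean)
    trimmed[i], trimmed[-1 - i] = low + shift, high - shift

# Cooking: out of a hundred observations, keep only the handful that happen to
# sit near the value the cook has decided the result "shall possess".
desired = 10.3
cooked = [x for x in observations if abs(x - desired) < 0.2]

print(statistics.mean(observations), statistics.stdev(observations))
print(statistics.mean(trimmed), statistics.stdev(trimmed))  # same mean, tighter spread
print(statistics.mean(cooked), len(cooked))                 # shifted mean, few points kept
```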

CHEATING

Even very distinguished scientists are not immune to the temptations of fraud. For some 1,500 years the western world regarded the geocentric cosmology of Ptolemy as the last word on the subject, and he was greatly admired for the way in which he had confirmed his theories by experiment. Even long after the Copernican Revolution made his cosmology outmoded, Ptolemy was still held in high regard as a scientist. It was only in the twentieth century that astronomers began to become more skeptical about some of his results: they seemed almost too good to be true … and in fact, on closer examination, they were. Further, it seemed improbable that he could have made some of the claimed observations at all. Once his stated results were fully analyzed, it was evident that a lot of his observations would make better sense had they been performed from about the latitude of the island of Rhodes. Ptolemy worked in Alexandria, but the great observational astronomer Hipparchus had worked in Rhodes a few centuries before him. Rather than make observations of his own, it seems Ptolemy spent his time in Alexandria’s great Library cribbing many of Hipparchus’s results and claiming them as his own.

The clincher came when modern researchers calculated the exact time of the autumnal equinox in the year 132 CE. Ptolemy recorded that he had observed it very carefully at 2:00 pm on September 25; in fact the equinox occurred that year at 9:54 am on September 24. Ptolemy was attempting to prove the accuracy of the determination Hipparchus had made of the length of the year; using as his base a record Hipparchus had made of observing the moment of equinox in 146 BCE, 278 years earlier, Ptolemy simply multiplied Hipparchus’s figure for the year’s length by 278. Unfortunately for Ptolemy’s credibility, Hipparchus’s figure was slightly off—hence the 28-hour disparity in 132 CE. And it clearly wasn’t just that Ptolemy adjusted the time of the observation to suit his theory; it was that he never bothered to make the observation. Had he found the equinox arrived over a day ahead of schedule he’d have been in a position to make an even better calculation of the year’s length than Hipparchus had—and, Ptolemy being Ptolemy, this would have been an achievement he’d have crowed about.
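The size of that disparity is easy to check with rough numbers. Here is a back-of-the-envelope sketch, assuming the tropical year usually credited to Hipparchus (365 1/4 days less 1/300 of a day), a figure that is not given in this excerpt:

```python
# Rough check of the equinox discrepancy; the year lengths are standard modern
# figures used for illustration, not values quoted in the book.
hipparchus_year = 365 + 1/4 - 1/300   # ~365.2467 days, the length usually credited to Hipparchus
modern_year = 365.2422                # modern mean tropical year, in days

drift_per_year = hipparchus_year - modern_year   # ~0.0045 days too long per year
total_drift_hours = drift_per_year * 278 * 24    # accumulated over Ptolemy's 278 years

print(round(total_drift_hours))   # ~30 hours: the same order as the ~28-hour gap
                                  # between Ptolemy's claimed observation and the
                                  # true moment of the 132 CE equinox
```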

Ptolemy wasn’t the only cosmologist to cheat in this fashion. Galileo Galilei certainly never conducted some of the most important experiments he reported having done when investigating gravity; those experiments would not have worked using the materials available at the time, as was evidenced by the fact that many of his contemporaries were puzzled when they couldn’t replicate his results.*

John Dalton, famed as the propounder of the atomic theory of matter, is known to have fudged his experimental data in order to support that theory,** and even Sir Isaac Newton fiddled some of the mathematics in his Principia to present a more convincing case for his theory of universal gravitation. As Richard S. Westfall put it in Never at Rest: A Biography of Isaac Newton (1980), “Having proposed exact correlation as the criterion of truth, he took care to see that exact correlation was presented, whether or not it was properly achieved…. [N]o one can manipulate the fudge factor so effectively as the master mathematician himself.”

Another significant figure who may have faked what he’s best known for is Marco Polo, whose ghostwritten account of his travels in the orient, Description of the World (c1298), contains some curious lacunae: there’s no mention of chopsticks, foot-binding or even the Great Wall of China. At the same time, Polo did mention certain other Chinese innovations unknown to Europeans of his day, such as paper money and the burning of coal. What seems very possible is that he heard about these and other items on the grapevine via the traders with whom his family did business, and constructed the rest of his account around them. We may never know the truth of the matter. Another possibility is that his ghostwriter, Rustichello da Pisa—to whom Polo supposedly dictated his memoir while they were in prison together in Genoa—invented most of the Description out of whole cloth, tossing in, for the sake of verisimilitude, bits and pieces of genuine information he’d heard from Polo. Or maybe Rustichello did his best based on a somewhat incoherent narrative from Polo?***

The German physiologist Ernst Heinrich Haeckel is best known today for his long-defunct Biogenetic Law—the notion that ontogeny recapitulates phylogeny (i.e., that the physiology of the developing vertebrate embryo mimics the evolutionary development of its species). The importance of the Biogenetic Law in scientific history is frequently overstated; it has often been claimed, for example, that it played a major part in the evolutionary thinking of Charles Darwin, although a closer look reveals that Darwin carefully made no mention of the Biogenetic Law even while acknowledging Haeckel’s other, valid work in vertebrate physiology. Long after the hypothesis had lost all credibility, in the mid-1990s the UK embryologist Michael Richardson happened to notice that a particular diagram by Haeckel seemed curiously, well, wrong. This led Richardson and colleagues to conduct a more thorough examination, at the end of which they concluded Haeckel had extensively doctored his drawings of developing embryos, the main evidence supporting his Biogenetic Law, so as to fit the theory.

Louis Pasteur deceived in at least two of his famous demonstrations, those concerning the vaccination of sheep against anthrax and the inoculation of humans against rabies, in 1881 and 1885 respectively. In the first instance he pretended the vaccines had been prepared by a method of his own devising when in fact he’d had to fall back on a rival method devised by a colleague, Charles Chamberland. The fabrication in the second instance involved the experimental work Pasteur did, or claimed he did, with dogs before his first, successful inoculation of a human against rabies. That the rabies sufferer, Joseph Meister, was cured was strictly thanks to luck, largely unsupported by any canine experiments of Pasteur’s. On the other hand, without the treatment Meister would almost certainly have died anyway, so presumably Pasteur convinced himself he was taking a legitimate risk. In both instances the motive for Pasteur’s dishonesty appears to have had its roots in his desire always to present himself as the master-scientist.

TAKING CREDIT

Sir Charles Wheatstone was famed for his experiments in electricity; his name remains well known because of the Wheatstone bridge, that staple of the school science laboratory. One has to wonder if Wheatstone himself was actually responsible for it, going by an incident that occurred in the 1840s. The Scottish inventor Alexander Bain is little remembered today, but he seems to have been possessed of the most astonishingly fertile brain, most of whose formidable powers he turned toward the then exciting new field of electricity. Among his countless inventions were various electric clocks, telegraphy systems and railway safety systems, a way of synchronizing remote clocks, the insulating of electric cables, the earth battery, and even the fax machine; he constructed a functional version of the last of these, the electrochemical telegraph, to transmit images between the stations of Glasgow and Edinburgh.*

Developing such devices cost money, and Bain didn’t have much of it. The editor of Mechanics Magazine, hugely impressed by Bain’s work and sympathetic to his financial lack, arranged an introduction to Wheatstone. Bain came to London in 1840 and visited the distinguished scientist, demonstrating various items, including his electric clock. The older man was dismissive: these were wonderful inventions indeed, but no more than toys; the future of electricity lay elsewhere. Very fortunately, Bain ignored the advice and applied for patents anyway, because just a few months later Wheatstone demonstrated to the Royal Society … the electric clock, which he claimed to have invented himself!

Wheatstone then tried to have Bain’s various patents struck down; he was unsuccessful, but set up the Electric Telegraph Company, making pirated use of several of Bain’s inventions. His downfall came when he sought government funding for the company. The House of Lords held an inquiry at which Bain appeared as a witness. Wheatstone’s theft was revealed, and the company was forced to make financial restitution to the Scot and acknowledge him, not Wheatstone, as the true inventor of several of the devices upon which its business relied.

Of course, in the US there was no House of Lords to see justice done, and Bain’s inventions were stolen wholesale. He spent the latter years of his life embroiled in interminable legal actions trying to get some measure of compensation.

Edward Jenner, the Father of Vaccination, similarly claimed credit that was not his due—but got away with it. According to all the standard histories, Jenner realized that immunity to the killer disease smallpox might be conferred by inoculating people with the milder strain of the disease, cowpox, which commonly infected, but did not seriously discommode, farmers and milkmaids, who were well known to be less susceptible to smallpox than the general population. In 1796 Jenner inoculated an unwitting eight-year-old, James Phipps, with cowpox and then a few weeks later with smallpox, revealing that the cowpox had indeed made Phipps immune to the more serious disease. This was and is generally accepted as a magnificent medical achievement. In truth it was also an astonishing lapse of medical ethics: for all Jenner knew, the experiment could have killed the lad.

Or is that so? Jenner seems certainly to have been aware that over twenty years earlier, in 1774, the Dorset farmer Benjamin Jesty had successfully performed an exactly similar action on his family—in his case not as a scientific experiment but as an act of desperation, because the county was in the midst of a smallpox epidemic. Yet Jenner never acknowledged Jesty’s precedence, even while accepting a large reward from a grateful Parliament (in 1802); neither did his medical contemporaries, and almost without exception neither has history.

More recently, Alexander Fleming became famed for his discovery of penicillin, and thus as the opener of the door for the host of antibiotics that have saved so many lives around the world. In fact, although Fleming discovered penicillin—in 1928, through accidental contamination of one of his culture dishes—and although he did name it and performed some desultory experiments with it as a possible bacteria-killer, he decided it was of little therapeutic value and moved on to other concerns.

A sample of penicillin was sent to the pathology department at Oxford University for use in an experiment; the stuff proved useless for that particular experiment, but was kept in case it might be handy later. A new professor took over the department, Howard Florey; he knew about penicillin but, like Fleming, assumed it had little medical potential. Not so Ernst Chain, a young Jewish refugee from Hitler’s Germany. A biochemist, Chain was fascinated when he came across a reference to Fleming’s discovery and investigations.

Fleming had discarded penicillin because of its instability: it was effective in killing bacteria, but only briefly. Assisted by Florey, his professor, Chain extracted a stable version, and the age of antibiotics was born. It was only then that Fleming realized the value of the discovery he had made years earlier. That discovery had been made at St. Mary’s Hospital, London, where he worked. A governor of St. Mary’s, the prominent physician Lord Moran, was a close friend of the newspaper magnate Lord Beaverbrook. It would be in the hospital’s interest to be associated with the antibiotics revolution, and so Beaverbrook focused the spotlight exclusively on Fleming—who, to his immense discredit, played along, treating Chain’s and Florey’s contribution dismissively.* This naturally infuriated Florey, who protested to the Royal Society and the Medical Research Council; but it was not in the interests of either organization to expose Fleming. Some restitution came in 1945 when the Nobel Physiology or Medicine Prize went to Chain, Florey and Fleming, but even then there was an injustice: anyone can make an accidental discovery; the real distinction for which the prize should be awarded is the realization of the discovery’s importance and the experimental genius in making it exploitable as a life-saver.

There have been several celebrated instances in which Nobel Prizes have been awarded to one scientist for what has been mainly the work of another. Perhaps the best-known example is that of Antony Hewish, who received the 1974 Nobel Physics Prize for the discovery and early investigation of pulsars—work that had largely been done by his student Jocelyn Bell (now Jocelyn Bell Burnell). Hewish certainly supervised and guided her research, and played a significant part in the interpretation of her results, so there’s no question of his not deserving at least a share in the Nobel accolade, and he himself has never sought to eclipse her glory; the fault in this instance lies with the awarders.

In two other cases of the mentor-student relationship, however, the behavior of the mentor seems more dubious. The US physicist Robert Millikan received the 1923 Physics Nobel for his work establishing the electrical charge of the electron. In essence, the relevant experiment involved measuring the charge on tiny oil droplets as they fell through an electric field, and calculating from the results to reach a figure one could deduce as being the charge an individual electron must bear. An experiment to attempt this using water droplets had been devised at Cambridge University but had given no good results (the water droplets tended to evaporate almost immediately); Millikan’s laboratory at the University of Chicago was trying in 1909 to refine this in hopes of getting more usable results. After months in which not much progress was made, a new arrival, graduate student Harvey Fletcher, had the idea of using oil droplets instead of water droplets. In Millikan’s absence on other business, Fletcher set up an appropriate experiment and it worked beautifully. Thereafter the two men collaborated to achieve the goal of establishing the electron’s charge, and they jointly prepared a paper to this effect that was published in 1910. However, to Fletcher’s dismay, Millikan cited university protocol regulations to insist the paper be published as by Millikan alone. That paper brought Millikan a Nobel Prize; it is to Fletcher’s great honor that he never expressed any ill will over the matter.

A similar controversy surrounded the award of the 1952 Physiology or Medicine Nobel to the US biochemist Selman Waksman for the discovery of streptomycin, the antibiotic that effectively conquered tuberculosis. In this instance the discovery had been made, albeit under Waksman’s overall direction, by his doctoral student Albert Schatz, and the two not only worked together thereafter on streptomycin but were listed as coauthors of the pertinent paper. Even so, the Nobel went to Waksman alone.

So far, so much like the later experience of Hewish and Bell. However, Waksman then felt entitled to patent streptomycin in his name only, charging the pharmaceutical companies royalties for its manufacture. These royalties were very substantial indeed, and, even though much of the money was plowed by Waksman back into science—establishing the Waksman Institute of Microbiology at Rutgers University—Schatz not unnaturally felt he should have a share, and sued. The justice of his case was obvious: streptomycin had been a joint venture, and so the rewards likewise should be joint. This wasn’t, however, how the US scientific community saw it: they were horrified any student should have the temerity to sue his mentor. All US scientific doors were firmly slammed in Schatz’s face, and eventually he was forced to emigrate to South America to find work. To this day Waksman is almost always given sole credit for what is more correctly termed the Waksman–Schatz discovery of streptomycin.

Far more complicated in motivation was the public humiliation heaped by Sir Arthur Eddington—during the first one-third of the twentieth century the titan of astrophysics, a discipline he had almost singlehandedly created—on the young mathematical physicist Subrahmanyan Chandrasekhar. In 1935 Chandrasekhar, working under Eddington, presented a paper which proved—so far as any theoretical prediction can prove—that stars of mass above about 1.4 times that of our sun will at the end of their lives collapse infinitely. This was the first scientific prediction of the existence of black holes. What Chandrasekhar did not know was that for some years Eddington had been working on a (largely nonsensical) Grand Universal Theory, which theory would be obviated entirely if stars could collapse into singularities. At the presentation of Chandrasekhar’s paper Eddington behaved abominably, making his young protege a laughingstock in the secure knowledge that none present would dare challenge the Father of Astrophysics.

Luckily Chandrasekhar refused to take this lying down, and persisted over subsequent years in maintaining the veracity of his calculations—which were verified independently, but covertly, by such pillars of the physics community as Niels Bohr and Paul Dirac. Meanwhile Eddington used his prominence to lambast Chandrasekhar at every opportunity, framing Chandrasekhar’s math as incompetent while himself using numerous fudges—cheating, in other words—in his doomed attempt to prop up his own hypothesis. The result was that Chandrasekhar’s prediction of complete stellar collapse was lost to astrophysics for something like four decades. Only in 1983 was he awarded his thoroughly merited Nobel Prize for this work. Leaving his personal affront aside, the disadvantage to astrophysics through the long delay in recognizing black holes was immense—and all because of Eddington’s stubbornness and the sycophancy of the rest of the astrophysics community.*

OF DUBIOUS HEREDITY

In 1865, just six years after the publication of Darwin’s On the Origin of Species (1859), the Austrian monk Gregor Mendel published the results of painstaking long-term experiments he had done with generations of garden peas. His concern was with the way in which characteristics were passed down from parent peas to offspring peas, and it soon became evident to him that these characteristics were embodied in the form of units that were essentially unchanging from one generation to the next: the offspring of (say) a tall and a short organism would not be all of medium height but, rather, some would be tall and some would be short. What made the offspring different from their parents was the shuffling of these various trait-determining units. What Mendel had discovered was the idea of the gene, and hence the whole foundation of the modern science of heredity.

The importance of Mendel’s paper was not understood until 1900, when Carl Correns and Erich von Tschermak-Seysenegg realized that, if this was the way inheritance worked in peas, surely so must it work in all organisms, humans included. There was a revolution in the biological sciences. Even then, biologists didn’t quite grasp the tiger they had by the tail: for decades it was assumed genes were somewhat abstract entities, codings rather than actual pieces of matter that could be examined and isolated. Only with the unraveling of the structure of DNA did it come to be appreciated what a gene actually is. Today we talk blithely of splicing and engineering genes; just a few decades ago such notions would have seemed science fiction.

Much earlier than this, in the mid-1930s, the UK statistical geneticist Sir Ronald Fisher analyzed Mendel’s published data afresh. He found nothing wrong with Mendel’s conclusions, and that many of his experiments could be easily replicated to give the expected results, but that in some instances Mendel’s figures were so precisely aligned to the theoretical ideal that they represented a statistical near-impossibility. There were too many spot-on results in Mendel’s experiments for credibility.
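Fisher’s reasoning can be illustrated without Mendel’s actual counts. The sketch below uses round, invented numbers (they are not Mendel’s data) to show how rarely an honest 3:1 experiment lands almost exactly on the theoretical ratio, and why a long run of such near-perfect results looks suspicious.

```python
# Illustrative only: how often does an honestly random 3:1 experiment come out
# almost exactly on the theoretical ratio? (Round numbers, not Mendel's data.)
import math

n = 8000                              # peas counted in one hypothetical experiment
p = 0.75                              # expected share of the dominant trait
sigma = math.sqrt(n * p * (1 - p))    # ~39 peas of expected random scatter

# Chance that the observed count falls within 0.2 standard deviations (about
# 8 peas here) of the exact 3:1 expectation, using the normal approximation
# to the binomial distribution.
prob_close = math.erf(0.2 / math.sqrt(2))   # ~0.16

# Fisher's point in miniature: one near-perfect result is unremarkable, but the
# chance of, say, eight independent experiments all coming out this close is tiny.
print(round(sigma), round(prob_close, 2), round(prob_close ** 8, 7))
```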

What exactly happened? Did Mendel, knowing he’d uncovered a truth about the way characteristics are inherited, fudge some of his data or even invent a few extra sets of experimental results to bolster his case? Was he unconsciously biased when recording his observations, seeing what he expected to see? Another possibility is that he did not in fact perform every part of every experiment himself, but had one or more assistants at work in the monastery gardens: it would be far from the first time an assistant skewed experimental results in order to make the boss happy.

This desire to please is frequently regarded, thanks to a book on the subject by Arthur Koestler, as the most probable explanation for the celebrated case of the midwife toad. The Austrian biologist Paul Kammerer performed a number of experiments in the first part of the twentieth century that appeared to confirm the Lamarckian idea of evolution through inheritance by offspring of characteristics acquired by the parents during their lifetime, rather than, or at the very least alongside, the theory of evolution through natural selection. The most dramatic of his results showed the apparent inheritance in normally land-breeding midwife toads, when forced to breed in water, of a callosity of the male palm that’s found only in water-breeding toads. In water-breeding toads this callosity helps the male hold onto the female during mating; it hasn’t developed in the land-breeding midwife toad because, basically, mating is easier on land. For it to appear as a congenital characteristic after just a few generations of enforced water-breeding implied that offspring were inheriting their parents’ acquired characteristics.

In 1926 G.K. Noble examined Kammerer’s specimen toad and found the dark patch which Kammerer had claimed as the relevant callosity was in fact due to an injection of India ink. Disgraced, Kammerer soon afterward committed suicide, and for a long while it was assumed he was a faker. It’s now thought at least possible that his assistant helped the experiments along.

So, were the other monks obligingly serving up to Mendel the results he wanted? Or, feasibly, were they bored by the tedious pea-counting assignments he’d doled out to them? Did they—not realizing the importance of the experiment—invent results so they could bunk off? Whatever the truth, it’s a remarkable instance of a breakthrough of paramount importance being made at least partly on the basis of faked experiments.

# # #

It’s a lot harder to blame an anonymous assistant in a subsequent affair that has some echoes of the Kammerer fiasco. This time the scientist at the center of it all was William Summerlin, working in the early 1970s at, successively, Palo Alto Veterans Hospital, the University of Minnesota, and New York’s Sloan–Kettering Institute. Summerlin’s field was dermatology, specifically the problem of skin grafting. At the time a major problem in any attempt at grafting and transplantation was the natural inclination of the recipient’s immune system to reject the grafted or implanted tissue. Summerlin was working on this using black mice and white mice as donors and hosts so the grafted skin would show up clearly. It was while he was at Sloan–Kettering that he was caught faking by an alert assistant, James Martin. Martin noticed something odd about the black skin transplant one of the white mice had received, and discovered it wasn’t a skin transplant at all but had been drawn on with a felt-tipped pen! In the ensuing investigation it was discovered Summerlin’s entire career of transplantation was full of dubious results, some seemingly the product of self-deception but most actually fraudulent.

In 1981–82 a similar tale emerged at the Department of Physiology at Harvard Medical School, where a young doctor called John Darsee had been researching into the efficacy of various drugs in the immediate aftermath of heart attacks. Caught faking one experiment, Darsee was permitted to continue his researches but now under close supervision. But not close enough. In the end almost all of the papers he’d published during a seven-year career had to be retracted, quite a few on the grounds that they were total fictions—complete with fictitious collaborators. The Darsee case became a cause célèbre in not just the scientific but the political world, primarily because of its poor—and slow—handling by Darsee’s superiors and the scientific institutions concerned. This matter of poor, slow reaction to scientific fraud has been, alas, typical of many cases that have emerged over the past few decades.

PILTDOWN MAN

Probably the greatest paleontological embarrassment of all has concerned a fossil assemblage that appeared to be the remains of a Neanderthal-style hominid. The assemblage was discovered in early 1912 in a gravel pit near Piltdown Common, Fletching, Sussex, UK, and was named Piltdown Man, Eoanthropus dawsoni.

The historical stage had been set for the discovery. British paleontology seemed to have been in the doldrums for decades. While workers on the Continent were unearthing exciting fossil hominids, their UK counterparts were lagging sadly behind. Patriots were desperate to convince themselves that, even in the field of prehistoric hominids, British was best.

The solidity of this mindset among establishment scientists can be assessed by the reception given to Galley Hill Man. In 1888, at Galley Hill in Kent, a very modern-looking human skull was discovered by chalk workers and excavated by two local amateurs, Robert Elliott and Matthew Heys. After their initial examination of it, the skull found its way to Sir Arthur Keith, head of the Royal College of Surgeons. Basing his assessment on the stratum in which the skull had been found, Keith was swift to go out on a limb and say it was of Neanderthal vintage, if not older. This seemed to confirm to him the presapiens hypothesis of Pierre Marcellin Boule and others that humans indistinguishable from moderns had existed in eras of enormous antiquity. In fact, the skeleton in due course proved to be that of a Bronze Age individual shoved down unwittingly into lower strata by later chalk digging. Those who’d shyly suggested the fossil skeleton might be comparatively recent had been, at least at first, generally dismissed—if not ridiculed—as overcautious.

So, when amateur geologist Charles Dawson made his find near Piltdown Common, he was assured of at least a tolerant reception. His further searches were assisted by some quite eminent figures of the day, notably the distinguished geologist Sir Arthur Smith Woodward, who examined the skull and proclaimed it that of a hominid, and the French polymath and Jesuit priest Pierre Teilhard de Chardin. Their haul included a fossil hominid jaw and various bits of animals otherwise unknown to the English south coast.

Despite occasional skeptical comment, Piltdown Man was for over forty years accepted as the UK’s great paleontological discovery, although it was difficult to fit it into the accepted map of hominid evolution—indeed, that was the very reason Piltdown Man was so interesting.* Not until 1953 was the fossil shown—by J.S. Weiner, K.P. Oakley and W.E. Le Gros Clark—to be part of the skull of a relatively recent human being plus the jaw of a 500-years-dead orangutan, the remains having been stained to give the illusion of antiquity and to make them better match each other visually. The success of the deception owed less to the rudimentary nature of scientific dating techniques in the early twentieth century and more to the fact that no one really wanted to believe Piltdown Man wasn’t genuine.

The identity of the hoaxer is still not known for certain. In The Piltdown Men (1972), Ronald Millar points to Sir Grafton Elliot Smith, the distinguished Australian anatomist and ethnologist, who was in the UK at the time and who was well known for his sense of fun. In a famous essay, “The Piltdown Conspiracy” (1980), Stephen Jay Gould turns the spotlight on Teilhard de Chardin. However, Charles Dawson—who on occasion had actually been seen experimenting on the staining of bones—has generally been regarded as the prime suspect.

There is, though, yet a further wrinkle to the story. In the late 1970s, a box was discovered in the British Museum’s attic containing bones stained in the same way as those of Piltdown Man. The box had belonged to Martin A.C. Hinton, later the Museum’s Keeper of Zoology, but at the time a volunteer worker there. In 1910, he had been denied a research grant application by Smith Woodward; just two years later the Piltdown remains were unearthed. Revenge? Today, Hinton is often cited as a likely culprit.

However, this may be a misjudgment. A more probable hypothesis, backed by anecdotal evidence, is that Hinton was initially just an observer of the hoax, and suspected Dawson as the perpetrator. As he watched Smith Woodward’s claims for Piltdown Man escalate, though, Hinton decided to prick the bubble while at the same time making Smith Woodward look a fool; he therefore planted some extra remains that were all too obviously bogus: a reported example was a cricket bat hewn from an elephant’s leg-bone. To Hinton’s horror, however, when the cricket bat was unearthed Smith Woodward proudly announced it to the world as a prehistoric artifact of hitherto unknown type. At that point Hinton threw his hands in the air and gave up.

A decade after the Hinton discovery, yet another suspect emerged: William Sollas. He was fingered in a posthumously revealed tape recording made ten years earlier by J.A. Douglas. Both men had been professors of geology at Oxford University, Sollas at the time of the Piltdown Man sensation and Douglas succeeding him in the chair. Douglas could recall instances of Sollas purloining bones, teeth and similar items from the departmental stores for no specified purpose, and it’s a matter of public record that Sollas (like many others in the field) thought Smith Woodward an overweening buffoon, the sort of man it might be a pleasure to expose through a prank. At this late stage it’s impossible for us to evaluate the importance of Douglas’s circumstantial evidence.

A further possibly relevant fact is that we now know Dawson had a secret career as a plagiarist. At least half of his critically slammed but popularly well received two-volume History of Hastings Castle (1910) was discovered in 1952 to have been copied from an unpublished manuscript by one William Herbert, who had been in charge of some excavations done at the castle in 1824. And nearly half of a paper Dawson published in 1903 in Sussex Archaeological Collections is reputed to have been lifted verbatim from an earlier piece by a writer called Topley, although in this instance the details are vague. But how could any evidence defame him when he had the supportive testimony of one Margaret Morse Boycott?

Mr. Dawson and I were members of the Piltdown Golf Club. Let me tell you this. He was an insignificant little fellow who wore spectacles and a bowler hat. Certainly not the sort who would put over a fast one.

ARCHAEORAPTOR

The November 1999 issue of the magazine National Geographic caused a sensation when it announced the discovery of a fossil that represented the missing link between dinosaurs and birds. In this article Christopher Sloan, the magazine’s art editor, proposed the name Archaeoraptor liaoningensis for the fossil, whose stated history was that it had been discovered in China and smuggled thence to the USA, where it was bought at the annual gem and mineral show (the world’s largest) in Tucson, Arizona, by Stephen Czerkas, owner of the Dinosaur Museum in Blanding, Utah. What Sloan didn’t mention in his article was that his own scientific paper on the fossil had been rejected by Nature and Science.

Czerkas did the decent thing and sent the fossil back to China, where it was examined at the Institute of Vertebrate Paleontology and Paleoanthropology, Beijing. A researcher there, Xu Xing, soon spotted a strong resemblance between the rear half of the supposed fossil and that of an as yet unnamed dinosaur he’d studied; the front half, however, was quite different. Eventually the fraud was traced to a Chinese farmer, who had created the chimera by fixing bits of two different fossils together with very strong glue. The dinosaur whose rear Xu had recognized is now called Microraptor zhaoianus; the front part of Archaeoraptor was identified in 2002 as belonging to the ancestral bird species Yanornis martini.

Occasionally creationists pick on the Archaeoraptor fiasco as an example of how far evolutionists will go to bolster their theory; it is, so to speak, the Piltdown bird. The creationist reasoning seems to be that, if Archaeoraptor were a convincing transitional fossil between dinosaurs and birds, its existence would undermine their own claims that evolution between kinds is impossible. What they conveniently ignore is that there are other fossils that fill the bill; in fact, Yanornis martini, used for the front part of the fake, is itself a good example of a transitional dinosaur/bird fossil.

THE IQ FRAUDS

In the field of human intelligence, two major arguments have been going on for decades. The first of these concerns IQ tests, and the extent to which they offer—or can offer—an accurate assessment of an individual’s intelligence. The problem is that no one is certain exactly what it is that an IQ test is measuring, beyond the greater or lesser ability to do well in IQ tests. The first IQ tests were designed purely to determine whether or not certain borderline-retarded children would be able to cope with school. At this the tests seemed successful. Accordingly, enthusiasts seized upon them, expanded their scope far beyond the originators’ conception, and declared they were intelligence tests applicable to anyone. What was forgotten during all this was that no one had—or has—been able to decide exactly what intelligence is: putative definitions are countless, and none enjoys universal support.

Moreover, any IQ test will inevitably be colored by the cultural perceptions of the person(s) who devised the test. IQ tests are superficially not tests of knowledge (or, at least, shouldn’t be), but instead tests of reasoning facility; in fact, however, they rely upon elements that, while they may be absolutely basic knowledge in one culture, may not be at all so in another. The idea that the tests can be applied with geographic abandon has slowly withered; the idea of their applicability across cultural and/or social divides within the same geographical area has proved somewhat more resistant.

The second controversy concerns the matter of what governs the intelligence of an individual: is it a matter of heredity, or is it largely controlled by upbringing? The scientific debate over nature vs. nurture has overwhelmingly swung toward the nurture side of the argument, but controversy persists among media pundits and the public. The issue is of importance because among those insistent that the major component of intelligence is inherited is a small but vociferous racist coterie intent on finding scientific support for their prejudices. The irony is that xenophobia, which is what many of them are displaying, is generally a trait of the least educated and least intelligent.

An extreme example of an argument in favor of the nature side of the nature/nurture equation is The Fallacy of Environmentalism (1979) by H.D. Purcell.¹ Purcell warns against the danger of treating all people as equal. Rather than dispassionately presenting the defensible case that heredity as well as environment plays a part in determining an individual’s ability, he attacks everyone who, he thinks, disagrees with him: they are liberals or even, horror of horrors, Marxists. The views of the latter aren’t important because, of course, all Marxists believe in Lysenkoism! (see pp. 343–351)

His real concern is, apparently, that researchers who come up with statistics which show that, say, one race does better than another at IQ tests are subject to critical attack. What Purcell cheerily ignores is that, since IQ tests are themselves of debatable validity, the statistics he’s so keen to defend are based on at least questionable data. As A.A. Abbie pointed out forcefully in The Original Australians (1964) on the subject of Australian Aborigines’ poor results in such tests, life itself is an intelligence test for desert nomads: pass it or die.

The degree to which IQ tests could be put to the service of racist conclusions was exemplified by The Bell Curve: Intelligence and Class Structure in American Life (1994) by Richard Herrnstein and Charles Murray. The authors believed IQ was at least to a certain extent inherited, and pointed out a strong correlation between an individual’s IQ and their eventual position in life: US society, they claimed, has to a large degree become stratified on the basis of IQ. A major flaw in their argument is again that IQ is certainly not a universally reliable indicator of intelligence. However, if we substitute intelligence for IQ into the authors’ argument, then their point that the strata of the US’s class-structured society correlate with intelligence has some limited truth (although Donald Trump, Jr. and Paris Hilton provide instructive counterexamples). But it is an inescapable fact that US society is also stratified along racial lines. The authors, ingenuously or disingenuously, pointed to certain results that showed African Americans on average doing worse at IQ tests than Asian Americans, with white Americans somewhere in between. There are very obvious cultural reasons for this (not least the matter of inherited wealth), which the authors failed to stress. The book created a cause célèbre, with people on one side attacking it as racist while those on the other seized on what they saw as scientific justification for their own racism: it was only right and proper that African Americans should be at the bottom of the heap. They were, oddly, less keen to infer that Asian Americans should be at the top of the heap.

As an aside, beauty is at least as significant a determinant as intelligence on where the individual ends up in the social hierarchy, yet there are few who would (publicly) promote the notion that the plain are inferior to the beautiful.

For decades the nature vs nurture debate was vastly complicated by the fraudulence of one man, a
