
The Malaria Project: The U.S. Government's Secret Mission to Find a Miracle Cure

  • Product code: 0451467329
  • (42 reviews)
  • Publisher: NAL; 1st edition (October 7, 2014)
  • Language: English
  • Hardcover: 416 pages
  • ISBN-10: 0451467329
  • ISBN-13: 978-0451467324
  • Item Weight: 1.42 pounds
  • Dimensions: 6.25 x 1.35 x 9.25 inches
  • Best Sellers Rank: #1,377,074 in Books; #354 in Scientific Experiments & Projects; #867 in Communicable Diseases (Books); #1,672 in History of Medicine (Books)
  • Customer Reviews: 4.5 out of 5 stars (42 reviews)
1,458,000 VND
Product details

Product Description

A fascinating and shocking historical exposé, The Malaria Project is the story of America's secret mission to combat malaria during World War II—a campaign modeled after a German project which tested experimental drugs on men gone mad from syphilis.

American war planners, foreseeing the tactical need for a malaria drug, recreated the German model, then grew it tenfold. Quickly becoming the biggest and most important medical initiative of the war, the project tasked dozens of the country’s top research scientists and university labs to find a treatment to remedy half a million U.S. troops incapacitated by malaria.

Spearheading the new U.S. effort was Dr. Lowell T. Coggeshall, the son of a poor Indiana farmer whose persistent drive and curiosity led him to become one of the most innovative thinkers in solving the malaria problem. He recruited private corporations, such as today's Squibb and Eli Lilly, and the nation’s best chemists out of Harvard and Johns Hopkins to make novel compounds that skilled technicians tested on birds. Giants in the field of clinical research, including the future NIH director James Shannon, then tested the drugs on mental health patients and convicted criminals—including infamous murderer Nathan Leopold.

By 1943, a dozen strains of malaria brought home in the veins of sick soldiers were injected into these human guinea pigs for drug studies. After hundreds of trials and many deaths, they found their “magic bullet,” but not in a U.S. laboratory. America’s best weapon against malaria, still used today, was captured in battle from the Nazis. Called chloroquine, it went on to save more lives than any other drug in history.

Karen M. Masterson, a journalist turned malaria researcher, uncovers the complete story behind this dark tale of science, medicine and war. Illuminating, riveting and surprising, The Malaria Project captures the ethical perils of seeking treatments for disease while ignoring the human condition.

Review

“The outbreak of World War II pushed malaria up the American agenda. Troops found themselves in many highly infected areas, including Africa, the southern Mediterranean and, above all, Asia. [Karen] Masterson’s analysis of the havoc caused by the disease and of the research effort by federally funded scientists and clinicians makes a compelling read. Her book is brimming with colorful characters—some admirable, some less so…. Masterson’s gripping tale unfolds seamlessly.” —The Wall Street Journal

About the Author

Karen M. Masterson is a former political reporter for the Washington Bureau of the Houston Chronicle who left newspapers to pursue her interests in microbiology. On a teaching fellowship at Johns Hopkins University, she stumbled upon the story in The Malaria Project while researching at the National Archives. In 2005, she won a Knight journalism fellowship to study malaria at the U.S. Centers for Disease Control and Prevention in Atlanta and in rural Tanzania. She has a master’s degree in journalism from the University of Maryland and an MA in science writing from Johns Hopkins University’s acclaimed Writing Seminars. She lives with her husband and twin daughters outside Washington, D.C.

Excerpt. © Reprinted by permission. All rights reserved.

PROLOGUE

A decade ago I knew as much about malaria as I did about professional football—that is to say almost nothing.

This killer disease occupied little of my thinking as I chased down U.S. senators and congressional leaders and wrote daily political stories from Capitol Hill for the Houston Chronicle. Little did I know that I would soon cross the globe, go back to school, and spend hours in reading rooms flipping through archived boxes of moldy records. (One box at the National Archives smelled so strongly of ammonia I closed it and returned it to the counter unread.) I did all this to understand why malaria is still around. It is probably the most studied disease of all time, and yet it persists, even in the face of the hottest new science in Western medicine. It’s both preventable and curable. We’ve even deciphered its genetic codes. Still, it remains among the top killers of African children—at a rate of two per minute.

By a fluke, malaria crept into my intellectual pursuits. I had quit my job as a national reporter to explore my interests in science and medical writing, both of which I had done before coming to Washington. I accepted a teaching fellowship at Johns Hopkins University, where I studied the history of medicine, and I took a course in mining the National Archives to learn how to find its buried historical treasures.

There, in a hushed reading room of the agency’s annex in College Park, Maryland, I learned how to call up records from the building’s two million cubic feet of storage space. As an exercise, I searched for archived records on World War II blood plasma replacement studies involving Linus Pauling—two-time Nobel laureate for chemistry and peace activism. My call slips came back with a box marked with the correct record group number and bearing the letter “P,” for Pauling. But the contents were all wrong. Thinking his papers were mixed in under other headings, I thumbed through the entire box, reading through random letters.

One turned my blood cold.

The 1943 letter was to the Massachusetts surgeon general from a physician named George Carden. In it, Carden laid out what sounded like a sinister plan: Federal researchers would use blood transfusions and lab-raised mosquitoes to give malaria to brain-damaged syphilitics and schizophrenics held at Boston Psychopathic Hospital so new drugs could be tested against the resulting infections. No less than the war’s outcome was at stake, he wrote, explaining that the military desperately needed a new drug to counter malaria’s devastating attacks on U.S. forces in the South Pacific.

I reread the letter a dozen times before my blood warmed to the possibility that I had just stumbled onto a fascinating, if horrifying story. That night I went home and ran Google searches, which, back in 2004, turned up very little. The next day I returned to the archives and met with a specialist in World War II medical research. She helped me navigate the Byzantine reference catalogs used to pinpoint exact call numbers for relevant records. Over the next three days, I retrieved a dozen boxes from the bowels of the archives, each filled with letters, reports, and data sheets on the war’s antimalaria program—information that had been classified during and after the war, and, as far as I could tell, hadn’t been touched in decades. I also tracked down historian of medicine Leo Slater, who, at the time, was working at the National Institutes of Health. He had an unpublished manuscript on the war’s malaria-related work that tracked the involvement of American pharmaceutical companies—a history that has since been published by Rutgers University Press.

With Leo’s help, I slowly pieced together a fairly clear picture of what had gone on. The War Department and White House had launched a Manhattan Project–style program to find a cure for malaria, born out of wartime necessity and run by a small army of well-intentioned scientists, many of whom knew precious little about the tricky parasites they studied. All they knew for sure was that U.S. military leaders feared this one disease would force them to surrender to the Japanese—an unacceptable outcome in a war destined to determine the fate of the world.

I hunted for a published history or popular narrative on the subject, and found nothing. The more I searched the more I realized I was in uncharted territory. But to know whether this was something worth pursuing, I needed context and content. When I got to Hopkins that fall, I wrote my schedule to include public health courses that covered the epidemiology and medical history of malaria. I continued my treks to the National Archives—unearthing more and more documents—and took a microbiology course to understand the nature of microbial diseases.

Needing more background, I sought and was awarded a Knight Foundation public health journalism fellowship that funded me to work for three months in the Malaria Branch of the U.S. Centers for Disease Control and Prevention. In CDC’s labs just outside of Atlanta, I dissected mosquitoes, studied their salivary glands, and watched under a microscope the sticklike germs that enter the human body when a female anopheles mosquito bites for blood.

My teacher was Bill Collins, a seventy-six-year-old icon in malaria research who, at the time, was the only federally employed malariologist who had been at it long enough to remember what it was like to infect madmen with the disease. He did it in the 1950s and early 1960s for the Public Health Service at the South Carolina State Hospital, where drug experiments begun during World War II were continued.

He still used many of the techniques he learned from his state asylum days. The main difference was that at CDC he infected monkeys instead of humans. He recalled this with remorse. The “good old days,” as he called them, allowed scientists to observe malarial parasites in the blood of sick people. He and his colleagues gleaned data that helped the world better understand the microbes’ behavior and complex life cycle, ultimately resolving many mysteries that had shrouded the parasites in a type of protective secrecy. The intelligence gave scientists needed insights to develop ideas for a possible vaccine. That he gave a painful and potentially deadly disease to witless syphilitics posed no moral dilemma for Bill. Up until penicillin became available in the 1940s, the madness that came with untreated syphilis could be reversed by malarial fevers—as the fevers ramped up the immune system to kill syphilis spirochetes interfering with brain function. Two decades after penicillin, Bill still used this outmoded malaria treatment—which worked on only a fraction of cases, the rest having no benefit—in South Carolina. He was doing God’s work, trying to find a final solution to one of the world’s most menacing diseases, until an absence of syphilis patients and ethical concerns shut him down in 1962.

Four decades later, his mission hadn’t changed; it just got a lot more difficult. Even though monkeys are our cousins, they are hard to infect with human malaria. So Bill and his colleagues raised parasites to infect primates, and then ran experiments that attempted to extrapolate the extent to which a given drug or vaccine might also work against human malaria.

At CDC’s run-down campus on Buford Highway—miles from the glitzy labs that characterize its cutting-edge work on AIDS, Ebola, childhood obesity, and other hot topics—I was trained in the intricacies of manhandling these dangerous microbes. My education started in the insectary, where I helped a Kenyan researcher named Atieli breed colonies of the world’s most efficient malaria-carrying mosquitoes. We separated pupae from larvae in a screened-in room set at eighty-four degrees and 80 percent humidity. Just before the pupae molted into mosquitoes, we caught them in netting, put them in small cages with gauze lids, carried them to the primate house, and turned them over to technicians, who pressed the cages against the shaved bellies of monkeys sick with malaria.

After the mosquitoes gorged on infected blood, the insects went to Bill’s lab to be placed in refrigerator-size incubators. There the insects stayed while the parasites in their guts sexually reproduced, leaving behind tiny egg sacs. In less than two weeks’ time, the sacs burst with offspring that were circulated into the insects’ salivary glands. There they awaited an opportunity to escape—which happened whenever the mosquitoes sank their needlelike noses into the flesh of warm-blooded mammals.

I sat next to Bill every morning, watching him handle spherical cardboard cages that held hundreds of highly infectious mosquitoes. They hung inside like razor stubble, bursting into black clouds every time he tried to catch them. This involved pushing a rubber hose through a small opening in the cardboard, with the other end of the hose in his mouth. With a single breath, he’d suck in five or six mosquitoes and then blow them into a small glass jar filled with chloroform—which knocked them out cold. That’s when I’d pick up one with tweezers, place it under a magnifying machine, chop off its head, and pull away the gut. Then, under a stronger microscope, he counted the egg sacs to determine the “parasite load” of each batch of mosquitoes. Knowing the load helped Bill determine which were the most likely to induce infections in monkeys primed with experimental vaccines. In this way, CDC ran the first line of tests on potential vaccines developed by the U.S. military and private pharmaceutical companies.

This process was anything but safe—for man or beast.

The monkeys’ lives were obviously at risk. They got hit with unnaturally high doses of parasites. If the experimental vaccine or drug being tested was bad or ineffectual, the poor creatures often died. This agitated Bill, but not out of compassion. He was a scientist, which by definition meant he controlled his emotions toward his animals. But an avoidable death angered him. The monkeys were expensive—as much as $5,000 each—and increasingly hard to buy. To Bill, even the most tattered and overused primate could be good for one more experiment. Losing even one to an ill-conceived vaccine preparation concocted by dim-witted drug companies bugged him to no end.

But the process was also dangerous for people. One day in early 2005, while Bill captured mosquitoes infected with a type of human malaria called Plasmodium vivax—the same one that crippled World War II forces in the South Pacific—one escaped and bit his lab assistant Doug. He fell sick two weeks later with a high fever and screaming headache. When I asked Doug about it, he shrugged. Sure, vivax malaria is horrible. He was seriously ill for more than a week. And he had to report the incident to disease investigators at the CDC. But contracting the disease is a rite of passage worn like a badge of honor, with a history as old as malaria research. Turn-of-the-century photos show professors with cages of mosquitoes strapped to their bare chests, hoping to induce an infection. In this tradition, Doug didn’t think of this mishap as a problem. CDC’s administrators, however, did. They immediately ordered another layer of screening between Bill and the rest of the world.

Shortly thereafter I joined Bill in the lab. One morning we were dissecting mosquitoes infected with a malarial parasite called Plasmodium inui. As Bill sucked the insects into his tube, one flew off over my head. I already knew P. inui was a type of monkey malaria capable of jumping species and infecting humans. As I nervously watched the escapee make a U-turn in my direction, I tapped Bill on the shoulder to show him we had a problem. In true Bill Collins form, he looked up from his microscope, spotted the mosquito, and said, “Gosh, ya hate to see that.” He then hunched back over his machine, as if enough were said.

I’d been at CDC about a month and tried to assimilate with the culture there, something I’ve always been good at. But this was different. I couldn’t be like the fellow researchers; I couldn’t be casual about malaria. I had no interest in living through chills intense enough to rattle my teeth and fevers high enough to damage my organs. I sat paralyzed, scanning the air, knowing that a featherweight bug could at any moment inject devilish microbes into my blood. Next to me, Bill went about his business, clicking slides of mosquito gut under the long arm of his microscope. When I finally locked onto the mosquito’s flight pattern and lunged forward to slap the life out of it, I saw the corners of Bill’s mouth turn up in a grin—which I took as tacit approval. No matter how skittish my actions looked to seasoned malariologists, I assumed Bill breathed relief: His reporter friend would walk away from his lab without the drama of contracting malaria.

Years later I learned the true meaning of Bill’s grin. P. inui, I discovered while researching this book, could indeed jump species, but only in rare circumstances, usually requiring an artificial concentration of the microbes injected directly into the bloodstream. Mosquitoes usually are incapable of transmitting a high enough dose of P. inui to infect people. The day I read that enlightening journal article I smiled, and remembered the privilege of having spent time sitting next to Bill.

Bill was a round man with bad hips and white hair. His acerbic humor, when he was tickled, would send his voice into a pitch high enough to pass for a girl’s. But he also was funny and happy and easy to spend time with. He was extremely generous to anyone who cared enough about his disease to ask questions, and always gave good thought to even the most basic and simplistic inquiries. He wanted everyone to understand his disease, especially the history of it. He hated that malaria in the 1990s got so little respect from global funding outlets, like U.S. AID and the World Health Organization. He remembered malaria’s heyday, back when developed nations still had malaria, and, after the war, when U.S. AID and WHO made malaria eradication their top priority. In 1970, when the effort failed and funding dried up, malaria experts were forced to switch fields. There just weren’t enough jobs for all the people trained during the eradication era.

Back when Bill was forced to stop his work at the South Carolina State Hospital and move it to CDC, the disease control agency was still flush with cash for malaria research, so this was a good move for him. After 1970, however, malaria jobs there were halved and then halved again. As the agency grew and expanded into stunning new glass buildings with high-tech machines for researching other diseases, malaria moved into the dumpy old buildings. There was no room for the insectary, so it was housed in a trailer on-site.

For anyone researching this disease, Bill was a treasure. He taught me how to induce a malaria infection using methods that had hardly changed since the 1940s. He displayed an attitude toward human experiments that gave me insights into the U.S. Public Health Service’s action during World War II. My own anachronistic view of the time was quite critical: The very agency dedicated to the public’s good health had reached into state hospitals and penitentiaries to infect the insane and imprisoned with a disease that, at the time, had no cure. The nuances, however, are far more interesting than the headlines. Bill lived those nuances until the early 1960s when he was forced to stop giving malaria to syphilis patients in South Carolina. Bill’s personal experiences with and deep knowledge of the war’s program—backed by a roomful of his own archived materials—neutralized my biases. And I hope it has allowed me to fairly and accurately retell this seventy-year-ago story, for which there are few remaining witnesses.

*   *   *

FOR a more current view of the disease, I talked my way into a CDC trip to Tanzania with the head of CDC’s Division of Parasitic Diseases and one of the Malaria Branch’s top investigators. Within days of leaving the United States, we were hours from the nearest pavement, bouncing over deeply rutted dirt roads and creeping slowly toward hard-to-reach villages.

In Tanzania, the perils of trying to impose Western ideals on the African bush were obvious, especially with respect to malaria. Scientists, PhD candidates, and public health graduate students devised elaborate drug-delivery schemes, with the single hope of reaching people in straw shacks elevated above mosquito-infested floodplains, miles from roadways and clinics. On one outing, a Tanzanian researcher and I abandoned our Land Cruiser to bushwhack through tall grasses, looking for three families. We were out where the occasional life was still lost to a hungry lion or rampaging elephant. Waters were low, so the mosquitoes weren’t bad. But it was still malaria season, and our survey told us that all three families had young children probably in need of treatment.

Out there, I thought about the Herculean effort we made to reach these families, which included manpower for an entire day, a full tank of gas, wear and tear on an expensive vehicle, and thousands of dollars in survey data designed to instruct us on where to go and whom to look for. Was this the legacy of the World War II malaria program? I wondered. Did the world’s first and largest effort to produce a synthetic antimalaria drug lead to this?

The answer, as it turns out, is yes. The World War II program set up a new infrastructure for malaria drug development, creating a colossal shift in the way the world perceived the problem. The United States, for example, went from attacking malaria as a public health challenge that involved lifting people from poverty and getting them off mosquito-infested lands, to seeing it as a problem of drug treatment and delivery. A pill came to replace decades of hard work and coordinated programming, resulting in a paradigm shift that has endured years of skepticism and serious scientific setbacks—including the inevitable resistance that malaria parasites develop against every new drug brought to market.

Out in the bush I stayed in an area with some of the highest malaria transmission rates in the world. An affable and well-known Irish entomologist there—Gerry Killeen—measured exposure rates in surrounding villages as high as two thousand infectious mosquito bites per year, per person. In other words, infectious mosquitoes occupied every part of life. They swarmed the outhouses, guesthouses, meeting rooms, shebeens, restaurants, and even the hospitals and clinics. They darted at your face, attacked your ankles, and stung the thin skin on your fingers and temples. On the walls and drapes of hospital malaria wards, hundreds of anopheles mosquitoes rested after feeding on the sick—soon to flit off and infect others.

While I was there, mosquitoes in my nightmares stung at my eyes until I awoke in a fit. I’d calm down after realizing I was in my small guesthouse, tucked safely under a pristine mosquito net. I figured out soon enough that these subconscious fears were a luxury of Western living. People in the bush survive every day under a blanket of mosquitoes, barely noticing hundreds of daily bites. Far bigger problems haunted their nightmares, like whether they’d have enough water and food for their children. I saw people use bed nets—a frontline defense against malaria—to fish, carry laundry, and cover windows. When I asked individuals why they abused these lifesaving devices, why they would risk causing tears and rips and compromising the nets’ usefulness, most told me what they knew a mzungu—a white person—wanted to hear: They loved their nets and slept under them every night. But the reality, as explained to me by educated villagers working for Western-funded antimalaria projects, was that the nets were more of a curiosity. Many were never used for their intended purpose.

The attitude toward antimalaria drugs posed an even bigger conundrum. Where I visited, multiple disease loads routinely stole life from families. Parents said saving their children would require an accumulation of wealth sufficient enough to buy their way into the cities—where better housing, water quality, and health care are available. Their only option was to try to earn cash through surplus farming. But too often crops failed from bad weather, or illnesses made family members too sick to bring in the harvest. The parents I spoke to, who survived in mud huts with no hope of employment, were reluctant to spend precious cash on malaria drugs, especially when they knew the disease would return every time the waters ran high. Better to get sick, establish enough resistance to fend off malaria’s deadliest symptoms, and save precious cash for bigger problems.

For a better perspective on what I considered an irreconcilable difference between drug delivery programs and the needs as expressed by African people, I interviewed scientists working in the bush. I talked to entomologists, parasitologists, epidemiologists, biostatisticians, and public health experts. While each had theories on how best to fight malaria—combination drug therapy along with insecticide-treated bed nets seemed to be the favorite, with many advocating for a reintroduction of DDT (the banned insecticide made famous by Rachel Carson’s Silent Spring)—without exception they conceded that malaria is a disease of the poor and can’t be beaten without economic development, job creation, higher standards of living, and improved health care delivery.

They said getting people out of malaria-ridden floodplains would save more lives than drugs, bed nets, and DDT combined. But this would require new infrastructure, good governance, better resources, and outside investment—all of which are hard to come by in Africa. Public health advocates, by necessity, focus on what’s within their power, especially the development and distribution of new drugs. And perhaps one day they’ll even deliver a workable and efficacious vaccine. The goal, as they see it, is not to eradicate the disease, or even treat everyone who gets it. Their job, as defined by international funders, is to save young children. This is a perfectly reasonable design, since the first malaria infections are the most intense. Without having lived long enough to establish a low level of immunity from constant exposure, children under five are the most likely to die. Saving them is the best Western antimalaria programs can do, given the circumstances. But these strategies will never eradicate the disease—they will never pull this scourge from its roots. And as long as mosquitoes are present to pick up the parasites from sick adults, children will remain vulnerable.

*   *   *

THE Malaria Project is an intensely human story, one about individuals who dedicated their lives to beating a disease that is older than mankind and likely to survive longer than the human race. It has touched most of us one way or another. My personal story dates back to 1993, after I left my job as a legislative aide on Capitol Hill (first for U.S. Senator John Glenn and then U.S. Representative Tony Hall), and before I went to journalism school. That year I accepted a scholarship to study political science at the University of Cape Town in South Africa. After writing exams, I left the city to backpack through ten countries of southern and eastern Africa in a life-changing trip that reached its pinnacle while I sat with gorillas in the jungles of what was then Zaire.

To fend off malaria, my travel partners and I took a drug called Lariam, and endured side effects that included paranoia and hallucinations. While in Malawi we decided the drug was worse than malaria could ever be, so we stopped taking it—a naive and dangerous decision, as it turned out. I was lucky. Apparently my daily dousing of mosquito repellent protected me. But one friend wasn’t so fortunate. She suffered all the classic symptoms—headache, chills, and intense spikes in fever. But her dilated pupils, heavy breathing, and utter listlessness were the most worrisome. A clinician in Nairobi told us we got her treatment just in time; if she had developed cerebral malaria, she could have easily slipped into a coma and died. She was cured by way of an intense round of chloroquine, the “miracle drug” to emerge from the World War II malaria project and deployed soon thereafter in the global eradication effort. Its use, and overuse, triggered resistance to it in many parts of the world, but chloroquine still worked in certain parts of Africa when I was there.

That adventure changed my life and is at least partly responsible for my taking on this book, which put me on so many unexpected paths. This book could have been written from any number of perspectives. I chose to build the narrative from archived records, and draw context from journal articles and academic histories of World War II and of malaria. What is clear from these sources is that this drug program begun during the war created a legacy that explains why our present antimalaria efforts are so heavily weighted toward drug and vaccine development, instead of mosquito abatement, poverty alleviation, and jobs programs—the approaches used to eradicate the disease in the malaria-plagued U.S. South during the first half of the twentieth century. This story is designed to help readers understand the links between past and present, and demonstrate how public health as a discipline needs more than drugs and other magic bullets to succeed—something of which public health professionals are painfully aware.

INTRODUCTION

Many before me have written the natural history of malaria, from prehistoric time to today. So I won’t do that here. Suffice it to say that malaria has been around since the dinosaurs and remains entrenched in many parts of the world where the right combination of poverty, mosquito species, and climate persist—which, incidentally, include nearly half the planet. Each year it kills a half million African toddlers; infects up to a half billion people; and costs already impoverished countries billions of dollars in lost workdays and health care.1 WHO is now shouldering its second major attempt to eradicate it with tools not much different from those used in the first attempt, which began right after World War II when the organization used two breakthrough weapons discovered by wartime researchers: the insecticide DDT and the antimalaria drug chloroquine.

This is the story of how these weapons came to be. To get at it, certain facts about malaria need restating. The fun facts aren’t so relevant but are immensely interesting. For example, malaria has inspired fear at least since men started writing—as evidenced by ancient Babylonian tablets that depict a mosquitolike god of pestilence and death, and Hippocrates’s twenty-four-hundred-year-old medical teachings, which include a description of malaria’s crippling fevers and his hypothesis that they were caused by vapors rising up from polluted waters.2 Malaria also killed many famous men, including King Tut and Alexander the Great; it scared Attila the Hun out of attacking fever-ridden Rome; it weakened Genghis Khan and probably contributed to his death; it caused great ancient armies to lose wars, annihilated early civilizations, forced medieval farmers to leave crops rotting in fields, and, right up to World War II, presented a force so uncontrollable that military strategists planned to have a second division to replace each one taken down by fever.

The more relevant facts for this story are linked to the different species of parasites that cause the unmistakable symptoms of malaria. Those symptoms are reliable, hideous, and memorable. They start with violent chills, followed by furnace-hot fevers and a burning thirst, then exploding headaches, nausea, stomach pains, and vomiting, and finally sheet-soaking sweats that end with a disquieting delirium. In the kind of malaria that dominates in Africa, the symptoms are worse and often end in death. These include retinal damage, severe anemia, convulsions, coma, and renal failure (nicknamed blackwater fever because the urine turns black right before death). The parasites accumulate so quickly that they block the narrow blood vessels to the brain. The effect is visible as the lack of oxygen to the eyes turns the retina white, and death soon follows.3

Malaria is unforgiving and, until the twentieth century, was just about everywhere.

All the different species of parasites that cause malaria belong to the genus Plasmodia. They infect many different animals, including primates, birds, bats, and lizards. The four that infect man are called P. (for Plasmodium) falciparum, P. vivax, P. ovale and P. malariae. Another one, a monkey malaria called P. knowlesi, may soon join the human four, as it is showing signs of jumping species. Of these human malarias, two cause almost all infections in man: falciparum and vivax.

Less than thirty minutes after a mosquito spits Plasmodia parasites into a person, the microbes can disappear into the liver, where they stay for a week or so, protected from the body’s immune system. They feast on cells and multiply profusely. Each stick-shaped microbe becomes thirty thousand more liver-stage germs, gathering strength in numbers, hidden, giving little sign of their presence. Then, when they are ready, they burst from the liver to hunt for hemoglobin—this is called the blood stage. P. vivax, the most widespread of the human malarias, cozy up to our red blood cells disguised as friendly molecules and trick a receptor protein called Duffy into letting them inside. Once there, they devour our hemoglobin and expel the iron as waste. Each turns into another thirty or forty. Their bulk ruptures the cells, spilling out new armies of this blood-stage germ, plus thick clouds of iron-based toxic waste that our busy white blood cells (the body’s vacuum cleaners) try to suck up. But the parasites keep coming; every other day an exponential number burst out of used-up red cells to march on new ones. They repeat the process again and again until billions of them overwhelm our phagocytes, triggering the immune system’s last resort: high fevers.

A small fraction of these marauding microbes change, yet again, into the so-called sexual stage. They are now reproductively distinct females and males, and they must get into a mosquito to procreate and produce offspring. These “shrewdly manipulative parasites” swim to our surface tissue and alter our blood chemistry to actually attract mosquitoes; that is, they send a “bite me” signal that increases by 100 percent their odds of being drunk in by nearby mosquitoes.4 Those taken are the chosen few; they escape their human host for the mosquito’s gut, which is the only place they can sexually fuse. Their offspring are later delivered to new human hosts when the mosquito bites for blood. This innovative survival strategy makes them among the most successful germs on the planet.

Falciparum is the most dangerous of the four human malarias, responsible for more than 90 percent of the world’s malaria-related deaths. But vivax is the more enduring, causing relentless relapses, sometimes for years. Vivax does this by launching attacks from the liver in different groups, like regiments. Some stay behind, dormant and protected from the immune system. Just as the host begins to feel better, as his or her red blood cell supply recovers and the malaise subsides, parasites still hidden in the liver awaken and launch another attack. Or maybe they wait a few months, or years—depending on that strain’s survival strategy (which is linked to mosquito breeding seasons). Eventually the process starts anew with chills, fevers, burning thirst, head and muscle aches, severe vomiting, and malaise. It happens again and again and again—sometimes over many years—until the infection finally burns out.

*   *   *

MALARIA crippled U.S. soldiers, marines, and sailors during World War II. Nearly half a million of them got sick, with many, many thousands presumed to be over their infections, only to be sent back to the front lines to fall again and again and again from relapses and recrudescence. The early battles in the Pacific boiled down to which side had the means to replace fever-stricken troops. In Bataan, Japan had a clear advantage; in Guadalcanal, after months of uncertainty, the Americans finally gained that upper hand.

Recent genome work on vivax and falciparum produced pretty good road maps that help explain why they are so different, why they are so menacing during times of war, and why they affect different races of people differently—a feature that both stumped and fascinated malaria researchers during World War II.

The genome work suggests that some two hundred thousand years ago—about the time humans appeared on the scene—vivax found its way to tropical Africa to become the greatest killer of early man.5 It created enormous selection pressure that strongly favored a random mutation in the Duffy receptor—that surface protein vivax uses to gain entry into red cells. People who had this changed lock, which vivax’s key no longer fit, survived malaria and passed the trait to offspring, who were now also better equipped to survive vivax. Eventually a large enough percentage of the population had this mutation that vivax dried up. This was a direct genetic hit that made many African people completely immune to vivax, a biological advantage that continues to prevent it from infecting a vast majority of Africans and people of African descent.6

Vivax had been forced out of Africa. So the parasites adapted. In the great human diaspora, they found hospitable hosts in Caucasians, Persians, and Asians. These races weren’t Duffy negative; their red blood cells still had Duffy receptor keyholes that vivax easily unlocked. To survive in cooler climates, they “learned” to hide in the human liver for longer and longer stretches, launching their attacks on red blood cells in time for summer mosquitoes to drink them in—somehow sensing that mosquitoes had begun biting.

The exodus of vivax might have ended Africa’s malaria woes but for the appearance of falciparum. Some scientists believe this parasite originated as a bird malaria—and probably a dinosaur malaria—that jumped species millions of years ago to infect African primates. Then, some six to ten thousand years ago, as Africans cut down rain forests to build large settled communities, it jumped to humans. This long lineage makes falciparum the most ancient of the human malarias—as far as we know.7

What this shows is that vivax and falciparum adapted well to animal species through our evolutionary processes. This rapid adaptability is seen today as a signature feature of Plasmodia, and is the main reason it continues to be one of the most successful disease-causing protozoa on Earth.

Falciparum is so deadly because the parasites can invade most red blood cells, compared to vivax, which infects only the young ones (reticulocytes with the Duffy receptor). Untreated falciparum can cause severe anemia because it’s so destructive to blood. The infected red cells can also clump together. So while vivax-infected cells easily pass into the spleen to be destroyed, clumps of falciparum-infected cells do not. They remain in the blood, taking up residence in the heart, lungs, liver, brain, kidney, and, in pregnant women, the placenta.8 They clog capillaries, which prevents blood from getting from tissue to the veins. The brain and cerebral fluid go from pale pink to dirty brown. Blood flow and oxygen to the brain cut off so quickly that a child playing games before lunch can be dead by dinner, with almost no warning.

Survival pressure from falciparum favored another random mutation. This one was linked to red-cell protein production. But unlike the direct hit of the Duffy mutation, this was only a partial hit. The mutation shifted the shape of red blood cells, from nice doughnuts to a sickle shape. Children inheriting the trait from both parents were protected from falciparum, but their red blood cells were poor carriers of oxygen and they often died from sickle-cell anemia. Those inheriting it from just one parent didn’t develop anemia and were protected from falciparum’s deadliest infections. Those people lived longer and passed the trait to their children.

Today, 10 to 40 percent of sub-Saharan Africans—depending on the region—have this single sickle-cell trait, giving them partial protection. Between this trait and a natural immunity achieved by constant exposure to falciparum, most adult Africans are able to survive routine attacks of their local strains. The disease is quite deadly, however, in pregnant women, who lose their acquired immunity during gestation, and young children, who have not achieved it yet. In some areas of Africa, the childhood mortality rate from falciparum is as high as 30 percent, while adults with acquired immunity experience it as if it were a routine flu—that over time permanently damages organs, causes anemia and malaise, and contributes to lower life expectancies.

*   *   *

MALARIA’S success has made it one of, if not the, most important diseases of the modern age. Through the centuries it struck seasonally, causing far more illness than death, so it never conjured the hysteria—or headlines in history books—of the other big killer microbes. It was no Black Death, which wiped out half of Europe’s population in the 1300s (and may have been a type of plague or a relative of Ebola—the organ-liquefying, hemorrhagic disease of Richard Preston’s bestseller The Hot Zone). The Black Death, bubonic plague, smallpox, Spanish flu, and other rapid killers were not sustainable over time; they burned through populations then disappeared, sometimes for centuries. By contrast, Plasmodia didn’t kill as readily, which gave man time to adapt. This won the microbes long-lasting, worldwide success.

Over many tens of thousands of years, these microbes and man formed a “truce” that was more beneficial to the microbes. “What the pathogen wants is a very long-term relationship with the host; it’s a smarter approach,” explained Wolfgang Leitner, a malaria expert with the U.S. National Institute of Allergy and Infectious Diseases. Malarial parasites create crystallized hemoglobin that is ingested by macrophages—our vacuum cleaner–like white blood cells. This process cleverly paralyzes these cells, instead of killing them. So no signal goes to our immune system to produce more, and we end up with crippled macrophages, less able to fight off the infection.

No one understood any of this until recently. Throughout most of human history, people didn’t even know mosquitoes carried malaria; they thought it was carried in foul air rising up from swamps, as Hippocrates had suggested. People felt helpless against the fevers that swept over them every time spring turned to summer. They shuttered windows to block the vapors, not knowing that the still air made it easier for malarial mosquitoes to sneak in late at night for a blood meal.

As the millennia passed, people continued to respond to malaria at a genetic level, selected for conditions that include a blood disorder called thalassemia and deficiency of an enzyme called G6PD. These immunological innovations—caused by random mutations—helped populations survive endemic malaria. Scientists today can often determine whether our ancestors lived in highly malarious regions, and sometimes even pinpoint exact locations, simply by examining our blood.

*   *   *

THE mosquitoes that carry malaria are as relevant to this story as the microbes themselves. This is because fighting the disease in our blood has proved nearly impossible, so many strategies take aim at the vectors, which are a specific type of mosquito called anopheles. Of the more than three thousand mosquito species, about 375 are anopheles. Only about seventy of them carry malaria. And only about forty carry the microbes well. But those forty are just about everywhere. They complete the microbes’ life cycle by giving them somewhere to breed.

And breed they do, inside the mosquito’s gut. If they never make it to the gut, they die off in our blood, end of story. But to be sucked into a mosquito is to be given the opportunity to procreate and spawn progeny that, with enough luck, find their way back into humans to feed on red cells and make their way back into mosquitoes.

As these microbes complete their life cycle in the mosquito, they produce new, genetically distinct strains that slip into our blood by way of mosquito saliva. In this way, mosquitoes have spawned countless strains of unique malarias. Each settled—like microscopic homesteaders—in its own geographic region of people and mosquitoes.

Falciparum strains, without the ability to hibernate in the liver, needed constant access to mosquitoes to keep from burning out. This geographically confined them to places with year-round mosquito breeding—mainly regions nearest the equator. They were particularly successful in sub-Saharan Africa because the mosquito vectors there lived long and could carry heavy loads of parasites.

*   *   *

MANY biological historians believe that trade, and especially the slave trade, allowed these microbes to spread across the globe. With each shipment of enslaved Africans came trillions of falciparum parasites. If they were introduced to a region during the mosquito season, they’d launch brutal though short-lived outbreaks. One season of falciparum was deadlier than a dozen seasons of vivax. No one knew the cause of the fevers. The anopheline carriers were night feeders with a gentle touch. Gyrating females (male mosquitoes don’t bite; they drink nectar) quietly fed while people slept. They injected parasites with their syringelike noses while extracting the blood they needed for reproduction. Sleeping victims were none the wiser until a little more than a week later, when they’d suddenly feel bone-deep cold, then violent shakes, and finally signature fevers.

Throughout history, humans did the best they could to prevent these cyclical, often deadly fevers. From the Middle Ages to the Industrial Revolution, malaria rode on waves of trade, slavery, and warfare, planting roots across Europe and Asia, intensifying around the Mediterranean basin. A hundred miles of Pontine Marshes plagued Roman life, producing robust outbreaks that killed many popes, cardinals, emperors, explorers, conquerors, and poets—including Dante in 1321 and Lord Byron in 1824. The dreaded fevers eventually made their way into the medical vernacular under many names, including the ague, putrid fever, intermittent fever, the shakes, and paludisme—French for marsh fever. But the name that stuck came from the Italian words mala aria, meaning “bad air.”

Malarial parasites set sail for the New World in the blood of European explorers—mostly vivax but some falciparum in the people of southern Europe. They created intermittent outbreaks, but no lasting presence. That didn’t occur until the Portuguese established sugar plantations in Brazil and began the first large-scale movement of African slaves onto the continent. Africans’ full immunity to vivax and partial immunity to falciparum kept them healthy on mosquito-infested plantations. While native laborers fell with fever, Africans stayed relatively strong and produced huge profits for European overlords. Malaria helped draft what historian James L. A. Webb Jr. calls an “economic and social template” that spread throughout the Americas.9 The fact that Africans could work harder and longer than indigenous peoples boosted the cross-Atlantic slave trade.

English settlers seeking religious refuge founded Jamestown in the mid–seventeenth century with vivax in their blood. The New World mosquitoes picked up the parasites and passed them back and forth between pilgrims and new colonists, establishing a malaria zone up and down the East Coast, from Massachusetts to Florida. When slaves were ripped from their homes in West Africa to work Southern cotton fields, they carried falciparum, which, because of the climate, launched only intermittent infections. Still, this added heavy death tolls to seasonal outbreaks, making malaria “the most important killer in the North American colonies,” according to Webb.

No one understood why these fevers spread so fast. Everyone simply shuttered the windows in a futile attempt to stop the ague from wafting in with the night air. An ominous dread hung over encampments and settlements whenever the rains ended and temperatures rose.

The microbes traveled deep into America’s heartland in the blood of mercenaries, making their way across the U.S. South and up every major waterway, all the way to Canada. Plantation owners and white laborers caught fevers from mosquitoes that bred in the cotton fields, while Africans stayed healthy. The proslavery movement used this as evidence that God had created black people for slave labor.10 The fact that Africans had genetic protection, evolved over thousands of years, would not be understood for another two centuries.

Native Americans died so rapidly of measles, smallpox, and other European diseases that historians have a hard time measuring the impact of malaria, though they assume it was significant.11

*   *   *

IN the eighteenth and nineteenth centuries, improved microscopes allowed clever researchers to see tiny creatures—which they called animalcules. This launched the so-called germ theory of disease. Alchemy became chemistry. Sorcery became science. The hunt to find microscopic monsters led to great discoveries by men like John Snow, Joseph Lister, Robert Koch, Louis Pasteur, and many, many more. Stains to color germs made them easier to see, and more discoveries were made. Public health campaigns reduced the reach of terrible killers like cholera, tuberculosis, and anthrax poisoning. The world’s first vaccines wiped out smallpox. New methods of pasteurization reduced microbial contamination of milk and wine, and sterilization reduced bacterial infections in hospital operating rooms. Science scored major victories against these enemies, and scientists became superheroes.

Progress against malaria, however, stood still. Discoveries of other parasites had to lead the way so that researchers could comprehend this particular monster. In 1878, Sir Patrick Manson, a prominent Scottish physician studying tropical diseases in China, did just that. He spied tiny worms in the blood of patients who had developed hideous deformities, then saw the same worms in dissected mosquitoes. Clearly, mosquitoes delivered them to man. But scientists at the time still believed that female mosquitoes took a blood meal, released eggs in waterways, and died. Manson assumed his newly discovered worm, which he called filariae, was released into water that people drank. Once inside the bloodstream, the worms grew in size to gum up the lymphatic system and cause severe swelling of the extremities—especially the legs, arms, and scrota. The disease was called filariasis, and these extreme symptoms elephantiasis.

This got Manson thinking that malaria might also be spread by way of dead mosquitoes in drinking water. But no one had spotted a parasite in the blood of malaria-infected people. He encouraged his protégé, Ronald Ross, to look for evidence in support of his malaria hypothesis. Meanwhile, other prominent scientists from France, Italy, Spain, and the United States also attempted to explain malaria, including a claim out of Germany in 1878 that the cause was a newly discovered bacterium, Bacillus malariae, located in dirt near swamps. This, of course, was wrong.

Then, as often happens in science, one person stumbled onto an important and defining clue. That was French army doctor Alphonse Laveran, who was sent to Algeria to examine the tissue of soldiers killed by malaria. The men were in North Africa to enforce French rule on the people of Constantine and other nearby cities. Mortality rates as high as 30 percent weakened the army. At that rate, the French feared they would lose hold of the region.

Through two years of devastating death tolls, Laveran performed many autopsies, looking for the killer malaria microbes. He peered down the arm of his microscope, seeing what others had already seen: tiny black specks forming clouds around ruptured red blood cells. He was as stumped as the others, straining to figure out what they were. Then, in 1880, he tried a new approach. He took fresh blood by way of a standard finger prick from a soldier who was sick but still alive. In this blood smear Laveran saw the unexpected. There swam large microbes the size of the red blood cells themselves. Some looked like slugs, while others wriggled with whiplike tails. Not believing his eyes, he pricked another feverish soldier’s finger, and another, and another, until he found one with the same gyrating germs.

He had unwittingly captured the sexual phase of falciparum. He didn’t know that the slug-shaped ones were the females and those with spermlike flagella were the males. He didn’t know—nor would anyone figure out for another eighteen years—that these sexually charged parasites had swum to the soldier’s surface tissue to be drunk in by biting mosquitoes. But Laveran didn’t need to understand their whole story. He identified them as the cause of malaria and threw popular theories into a tailspin.

This didn’t happen overnight. Laveran, slow in speech and motion, lumbered through his report, carefully sketching images of his parasites. He couldn’t explain how they got into the human bloodstream. Nor could he identify a nucleus, which would have confirmed that they were indeed parasites, not just misshapen blood cells. And he couldn’t explain what the organisms meant. The report he submitted to the stodgy elders of the prestigious French Academy of Medicine was disregarded by some as fraudulent or, at best, misguided.

But the drawings and techniques intrigued enough influential people to eventually convince Manson, Ross, and a few others that Laveran had found something important. Another decade and a half passed before Giovanni Battista Grassi, a well-known Italian entomologist and scientific giant on all matters of germ-related medicine, finally accepted the value of Laveran’s findings. By 1895 the race was on to figure out what these germs were and how they entered the human body.

An important clue came to light in 1897, when a Canadian enrolled in the first year of Johns Hopkins University’s medical school made a discovery. The student, W. G. MacCallum, observed parasites in blood he took from a sick crow fusing into what looked like an egg sac. He presented his paper to the British Association for the Advancement of Science, suggesting that this might be how malaria parasites procreated once they were sucked in by mosquitoes. Manson forwarded the paper to Ross in India, who had just found similar parasites in the gut of certain brown mosquitoes.

This helped Ross hone his work. He raised mosquitoes in a lab and fed them on malaria-infected birds, then dissected the insects to see what the microbes did inside the mosquitoes’ guts. Hundreds of dissections later he had pieced together the mystery. Extrapolating from bird malaria, he announced the path of human malaria. In 1902 his efforts won him the Nobel Prize in Medicine. As for Laveran’s part, he also received a Nobel Prize in Medicine in 1907, but it was for work on several types of disease-causing protozoa, not just malaria.

Other great scientists also contributed, especially Grassi from Italy, whose work was so pivotal he probably should have shared the prize with Ross. But that didn’t happen, and Ross’s name is the one everyone remembers. He went on to publish a small how-to manual called Mosquito Brigades, in which he advocated the spilling of oils and kerosene on swamps and marshes to wipe out mosquito larvae. He helped rid England of the disease. Then he headed south to colonial Africa, where he failed, because he underestimated the durability and sheer volume of mosquitoes there.

In this extreme setting, however, he saw the calculus that is malaria; that it’s not just a disease but a societal condition based on key factors that include a) the number of infected people; b) the volume of biting mosquitoes; c) the extent to which those mosquitoes have access to infected people during nightly feedings; and d) the extent to which the mosquitoes prefer human blood over, say, cow or pig blood. Today we call this calculus the BCRR—basic case reproduction rate. It measures whether enough unprotected people live among so-called anthropophilic mosquitoes (people biters, not cow or pig biters). Regions measuring 1 or below don’t meet the threshold for an outbreak—there aren’t enough mosquitoes biting unprotected people (North America, most of Europe, parts of Asia, and a few other regions). Places measuring between 1 and 5 have outbreaks but no permanent malaria (parts of Latin America, Asia and Eastern Europe, parts of the Middle East, and southern Africa). Regions measuring above 5 have endemic, year-round malaria. Parts of India and Southeast Asia, for example, measure between 5 and 10, while some areas of sub-Saharan Africa exceed 1,000—making their malaria the most durable in the world.

By the early 1900s, many well-off regions in northern climates eliminated malaria through mosquito abatement and treatment with quinine—the only known drug for malaria at the time. Socioeconomic changes in cooler areas with shorter mosquito seasons made breaking the infection cycle easier than in southern regions. The upper stretches of the United States and Europe were cleared of endemic malaria, with BCRRs hovering around 1. These regions had occasional outbreaks, when the right combination of mosquito breeding and infected people accidentally reached explosive proportions—especially during war. But for the most part, the days of worrying about malaria were over. That left southern Europe and Asia, the U.S. South, and the tropics. In these regions in the early twentieth century, malaria-carrying mosquitoes had to be tamed. And that is where this story begins.

CHAPTER 1

Lowell T. Coggeshall arrived in 1901, born to poor farmers in Saratoga, Indiana, where everyone bartered to survive.1 His parents sold eggs and butter to earn cash for medicine, fabrics, and a few other staples. Everything else they grew. Each season brought the same worries about weather and soil conditions, crop yields and pests, animal husbandry and diseases.2

This was to be Lowell’s future, but for the fact that he wanted no part of it.

Nor did he think he was any good at it.

For starters, he could never get the family’s ornery old workhorse, Bird, to pull a plow, so the two exchanged “whacks almost daily.” Prone to bouts of recalcitrance, he also did stupid things that earned a thrashing from his stern father, like the time he tossed sunflower seeds into a tree stump near the cornfield instead of planting them in the proper place. His father figured it out when the flowers sprouted out of the stump.

Even summer highlights—the July Fourth parade with decorated vets from the Spanish-American and Civil wars, and the annual twenty-five-mile horse-and-buggy ride to the Jay County Fair—barely held Lowell’s interest. And when the community banded together to thresh fields of wheat and oats, and butcher animals for the long winter ahead, he observed like an anthropologist, not a boy learning his trade. He was good-humored and friendly. And he did the work. But he didn’t enjoy it.

Nor did he like to study. Had the schoolhouse’s three teachers not been his first cousins, he might have skipped school altogether. But as it happened, in 1913—the year before the Panama Canal opened—he moved without fanfare from the first-floor elementary school to the second-floor high school. There he sat day after day, marginally interested in the offerings of rudimentary math, science, and grammar. He had no idea how he would escape this predetermined life; he just knew he would.

*   *   *

HE also had no idea that the Panama Canal had anything to do with his future. He might have followed what the newspapers said: that the French had abandoned the project in 1889 after twenty-two thousand workers died of malaria and yellow fever, and that the U.S. Army took control of it in 1904 and used mosquito-control strategies to finish the work.

But the story was so much bigger than that, and would one day change Lowell’s trajectory. For the Americans, at first, suffered the same devastating disease rates as the French, with malaria being by far their biggest problem. Initial campaigns to protect workers were sloppy and inefficient. Screens were improperly installed; loose floorboards and open eaves let in mosquitoes; and antilarvae campaigns were poorly targeted. Army physician William Gorgas had to struggle for the authority and funding to study and fix the problem. So he hired a key person, Dr. Samuel T. Darling—who would soon be a big figure in Lowell’s life. Darling trained an army of laborers to build screened-in “isolation cages” for the sick, construct new mosquito-proof buildings, and spray kerosene on breeding grounds of the region’s most efficient malaria-carrying mosquitoes.3 This had the added advantage of also wiping out yellow fever’s less robust mosquitoes. Workers stayed healthy long enough to finish the gargantuan engineering feat, delivering the United States an internationally important colonial gem—one that linked two great oceans and substantially shrank the world’s shipping lanes.

When the canal officially opened, newspapers splashed Gorgas’s face across the front pages. His grandfatherly good looks—thick white hair and mustache, perfect nose and cheekbones, and sharp uniform—made sanitation work sexy. The public saw him as a kind and wise national elder. Men looked up to him and grandmothers swooned over him. His celebrity brought attention and focus to the importance of sanitation work in keeping people healthy. And for that, just weeks after the canal’s opening, the army awarded him the coveted position of U.S. Army surgeon general.

From this pulpit he preached his public health religion with his most emphatic sermon on the virtues of mosquito sanitation—how that alone would wipe malaria from the planet in a few short years.

*   *   *

LOWELL showed no sign of being the least bit interested in or inspired by the great Gorgas, but the elders of the heavily endowed Rockefeller Foundation were. They decided to commit $100 million to sanitation projects designed to improve the general health of the world’s poorest people, creating the foundation’s International Health Commission—which grew to be the largest private-sector investor in health and medical research (on the scale of today’s Bill and Melinda Gates Foundation). Rockefeller had already wiped out hookworm in many places by breaking the cycle of infection, which could be done with outhouses and shoes: outhouses to stop infected people from defecating in the streets, and shoes to stop hookworms in the feces from boring into bare feet. Rockefeller partnered with local officials to bring these low-tech, low-cost technologies to poor communities. With the new $100 million infusion, and Gorgas’s great success in Panama, the IHC decided in 1913 to take on the U.S. South’s entrenched malaria problem.4

For there, every summer, severe malaria swept up the eastern seaboard from Jacksonville to Baltimore; reached across Virginia, the Carolinas, Georgia, and the Florida panhandle; almost completely subsumed Alabama, Mississippi, Louisiana, Arkansas, and Tennessee; struck north into southern Kentucky, Missouri, and Illinois, and west into Texas and Oklahoma. It even hit stretches of central New Mexico and California.

Quinine, the only known treatment, had been so overused it stopped working in many regions. And its use often led to side effects, like unbearable ringing in the ears and partial blindness. This was because the antifebrile property in quinine, which came from an exotic tree called cinchona, varied in intensity, depending on a given crop of the trees and the method of preparation. Doctors often misunderstood the concentrations with which they worked. Moreover, the drug worked only against acute attacks and did nothing to prevent or cure infections. This meant people treated with quinine often suffered relapses, as if they had never been treated. And they still carried malaria in their blood, which meant mosquitoes picked up the disease from healthy-looking people and spread it to others. No one fully understood how quinine worked, which created uncertainties that were a nightmare for public health officials to think through, and were a big disappointment for paying customers who took the drug, suffered the side effects, and still weren’t cured. But the sick had no other option. A Dutch-run cartel ran sprawling cinchona tree plantations on Java in the East Indies, producing, controlling and fixing prices on 90 percent of supplies. The world was their captive as long as quinine remained the only drug available.

Germans at the Bayer Company tried to change this. In the 1910s, they launched a major effort to synthesize quinine using man-made chemicals. Their dream outcome was to cure malaria with a pill, a “magic bullet” made in their lab, bottled in their factories, and then sold over-the-counter worldwide—like their blockbuster cure-all, Bayer Aspirin. Bayer would supplant the need for slow-growing, hard-to-maintain cinchona trees and put the Dutch out of business.

Bayer advertised that it was working on a miracle cure for malaria, one that was affordable and easier on the body than quinine. This added heft to an assumption that an inevitable drug or vaccine would wipe out or at least reduce the reach of this disease—as was the case for cholera, rabies, tetanus, typhoid fever, and bubonic plague. Edward Jenner’s smallpox vaccine by now was more than a hundred years old. Surely a true remedy for malaria would come next.

“Within a decade some biochemical product will be evolved, an antitoxin in the form of a vaccine or a serum, that will effectually dispose of the toxins for malaria,” wrote The New York Times on July 13, 1913.5 A month later Paul Ehrlich—the wildly eccentric father of German medicinal chemistry—fed this hope by making an outlandish announcement at the International Medical Congress in London: He said his arsenic-based, magic-bullet pill for syphilis, called Salvarsan, also cured quinine-resistant malaria. Even better, he said, Salvarsan could reverse the resistance. Malaria had met its match, was the message he delivered. The heady participants at the London lecture hall erupted with great applause. “He wore a dusty frock coat,” wrote The New York Times’ London correspondent, “and his voice was shrill rather than powerful. Such was the savant whom thousands . . . greeted with the sort of cheering that men grant a hero.”6

Ehrlich’s assessment of his own drug proved wrong, terribly wrong. Salvarsan had no such powers. It wasn’t even good against syphilis—the dreaded corkscrew bacteria that many called the “reward of sin” and that often infected the brain, turning regular people into madmen. But before Salvarsan there was only mercury, which everyone knew was a poison. So finding something better turned Ehrlich into a legend. Before making it, he was just a batty, made-fun-of lab assistant to the great Robert Koch. He arrived at the Koch Institute with a PhD he had earned using dyes to color tissue cells, and broke into microbe hunting by using those dyes to color germs. Then he tested a crazy idea by adding poisonous chemical side chains to germ-seeking dyes and shot them, like “magic bullets,” into the sick.7 Everyone snickered at his tousled appearance and boisterous lab manner. But when he came up with Salvarsan, his reputation was transformed; he became the Koch Institute’s genius of chemistry.

His strong opinion that a magic bullet could be made against malaria inspired chemists at Bayer Works’ laboratory in Elberfeld, Germany. They started with a drug originally conceived years earlier by Ehrlich, made of a blue dye. He knew the dye stuck to malarial germs because he used it to stain them for microscopy. With that in mind, he added a methyl group as a “side chain,” hoping it would kill the germs. Like “magic,” he concocted the first chemically derived poison pill against malaria. The concept was ingenious, elegant, and simple—a sign of how cleverly and originally Ehrlich’s mind worked when he thought through an attack strategy for germs.

But his drug, called methylene blue, turned urine green, made the whites of eyes blue, and was highly toxic. It couldn’t be used by anyone. The concept, however, proved valuable and became an obsession of Bayer’s. Chemists tried different versions of this concoction, rotating molecules around the central nucleus. To test each new rotation against infections, they injected canaries with a type of bird malaria, hoping that at least one of their new combinations would work. Hundreds, maybe even thousands of songbirds died in their cages, their little bodies slumped in piles.

*   *   *

THE Rockefeller Foundation’s health commission listened to the talk of magic bullets with skepticism. They were all medical doctors and entomologists, not chemists. Their labs were in the swamps, focused on anopheles mosquitoes—those ubiquitous malaria carriers—and the desperately poor people living in shacks, exposed to nightly bites. Many at Rockefeller believed they could end the South’s staggering malaria problem by using pyrethrum to kill mosquitoes and kerosene to kill mosquito larvae. This fit with Rockefeller tradition. It made sense to their physicians and insect experts, who were infused with a passion for “public health,” which meant serving communities by improving sanitation and curbing disease exposures—the philosophical alternative to “individual health,” which focused on magic bullets for the already sick. Rockefeller disciples saw disease prevention as holistic, not to mention cheaper. They were health officers, patrolling for methodologies and opportunities to improve housing, provide safe drinking water, ensure access to healthy foods, put shoes on children, build outhouses to keep feces off the streets, locate wells in appropriate places, and so on. Through this lens they saw an opportunity to prevent malaria by separating man from mosquito.8

It helped that the business side of the Rockefeller family—driven by bottom-line decision making—wanted the foundation’s public health investments to be results-oriented and cost-effective, while also contributing to better health care delivery systems (like the establishment of clinics with medical staff close to populations too poor or small to have a hospital nearby). Investing huge resources in chemical labs and drug manufacturing made no sense to them when sanitation could prevent diseases.9

Sanitation by way of killing mosquitoes worked, they believed. This conclusion evolved over several years, beginning with side-by-side studies during horrendous malaria outbreaks in the Mississippi Delta and next door in Arkansas. In the delta they used quinine; in Arkansas they sprayed forests with pyrethrum, spilled kerosene into swamps, did minor ditching, and nailed fine-mesh screens to windows and doors. The studies were deeply flawed. Doctors prescribed extreme doses of quinine to black sharecroppers in the delta, then claimed they were cured, when in reality many had fled to the North and were replaced temporarily by transient black families also heading north, and not yet infected with malaria. Meanwhile, malaria rates in Arkansas were said to have plummeted based on doctors’ calls, which were equally unreliable as a measure of infection rates. Then, in 1921, a sanitation officer with the U.S. Public Health Service, M. A. Barber, created an antilarvae cocktail called Paris Green by combining copper arsenite and copper acetate, “mixed with road dust and spread by a hand blower.” This worked extremely well against anopheles larvae, measurably devastating mosquito populations and indirectly reducing malaria rates. Rockefeller’s scientists thus concluded that prevention worked better than quinine.10 These studies helped local officials see the value of mosquito abatement—just like in Panama during canal construction.

In the American context this was an either/or proposition, because very little public money was available for antimalaria work. This differed from Italy, where officials bridged mosquito control with treatment plans. The central government there made malaria eradication a national campaign; federal programs paid for mass quinine treatment and major land reforms that transformed the Pontine Marshes outside of Rome into farmland. Huge irrigation channels drained water from wetlands and kept it moving, eliminating stagnant pools that had produced the country’s abundant and vicious malarial mosquitoes. By the time the Great War broke out in Europe, in 1914, Italy had reduced its malaria problem—nearly eliminating malaria-related child mortality that had previously reached a staggering 60 percent in some areas.11

The United States’ central government learned from the Italians, and used their dual-edged strategy around military installations. Lumber companies and the railroads also applied a dual strategy to keep workers well enough to put in a long day. For these efforts, the U.S. Public Health Service partnered with Rockefeller officials. They “sanitized” mosquitoes from the woods and swamps surrounding these industries’ work sites, and around federal installations. Demand for screens created a new market for steel. And soon municipalities that could afford the antimosquito work hired Rockefeller-trained experts to run the projects.

So while foundation doctors ran surveys to count the sick—and measure the extent of the problem—Rockefeller recruited young men from university biology labs to be trained in the art of killing mosquitoes. These chemical-spilling brigades grew in size as Rockefeller’s programs expanded. Soon the foundation had created a selective vacuum that sucked in the nation’s most promising scientists.

Lowell Coggeshall would someday be among them.

*   *   *

LOWELL left high school in 1917 in a graduating class of two, of which he ranked second. While the smarter kid dived into farming, Lowell tried everything but. He worked as a railroad assistant, but got bored. Then he worked as a bank teller, but made too many mistakes. He eventually settled into a winter of trapping muskrats in subzero temperatures just over the state line in Ohio. Barely surviving there, he decided to enlist in the navy. The Great War raged in Europe, and Lowell wanted in on the action.

He made his way to the navy’s recruitment center and queued up with other boys like him, trying to be men. At six feet tall he felt himself sturdy enough to serve his country. But he still had a pubescent scrawniness. His skinny neck failed to fill a shirt collar. His clothes hung like a bag on bones. At just sixteen and not much more than a hundred pounds, he was told to go home and gain weight.12 He’d play no part in this war.

*   *   *

MALARIA drove many outcomes of the Great War, as witnessed by public health officers stationed in Europe. They recorded, for the first time in history, a stunningly rapid spread of malaria in battlefield conditions. Millions of soldiers dug in around local civilians in Macedonia seeded with their own strains of malaria. Then hordes of refugees streamed in from other war-torn countries, bringing in new strains. All were quarantined to camps not far from the fighting, where mosquitoes bred rapidly in the blown-apart landscapes. They rose up to feed on civilians trying to find shelter in barns and under trees, and delivered mouthfuls of malaria to soldiers trapped for months in the trenches. Mosquitoes carried the disease from one army to the next.

Medical corpsmen saw the math: Armies of exposed soldiers mixed with millions of anopheline mosquitoes and thousands of infected villagers equaled major outbreaks. Both Central Powers and Allied commanders stopped fighting while whole regiments shook with chills and burned with fever. The British hospitalized 162,512 soldiers for malaria, compared to 23,762 killed, wounded, and missing in action. In one famous cable, a French commanding general reported his entire army too sick to fight—as 80 percent of his 120,000 men had malaria and were hospitalized.13 Battles commenced when mosquito season ended.

When the Allies finally won, soldiers marched home with the emotional scars of war—and malaria still in their blood. Public health officers predicted the outbreaks to come, which were the worst ever recorded. In Russia, for example, social and political chaos left people homeless and destitute. That, combined with perfect weather for mosquito breeding, sent the basic case reproduction rate—the BCRR—through the roof. The disease easily moved from one insect colony to the next, burning through villages of people, reaching farther and farther across the continent. Within three years, Central Asia hosted the world’s worst malaria outbreak on record. It hit six million people, claimed six hundred thousand lives, and spread as far north as Archangel near the Arctic Circle. Anyone who knew anything about malaria watched in stunned fascination. “For the first time, as far as we know, the King of Tropical Diseases set foot within the Arctic Circle,” uttered Rockefeller’s world-renowned malaria expert, Lewis W. Hackett.14

Meanwhile, public health officers in Italy saw their own nightmare unfold. For them, the clock had turned backward. War undid the country’s progressive land reforms and quinine-treatment campaigns. Irrigation networks broke down, refilling swamps and marshes. Factories stood still, with countless pots and puddles collecting water for mosquitoes to lay eggs in. Farm animals, an alternative blood source for Italy’s species of anopheline mosquitoes, were confiscated for the war and now gone from the hillsides. Millions of mosquitoes rose from the waters and voraciously fed on more than fifty thousand malaria-infected soldiers returning from war. Then the insects delivered malaria to the population at large. This opened an “epidemic highway” that, in 1918, triggered the worst malaria outbreaks in Italian history.15

*   *   *

AS nearly a million Europeans and Central Asians died as a consequence of postwar malaria, Lowell found a way out of farming: college. This was how his uncle Don on his mother’s side got out of it, and Uncle Don had done well for himself.

Don C. Warren had broken the mold of Saratoga by turning wedding gifts from his parents—$500 and a cow—into tuition at Indiana University’s biology department. He later went to Columbia University and became a specialist in a new field called genetics. Don applied his skills to chicken farming, made a bundle mass-producing high-quality chickens and eggs, and traveled to faraway places like California.16

The people of Saratoga disliked Uncle Don. His education, elaborate life, travel, and good pay made him stuck-up, if not lazy. Because he rejected the tight world of small-scale farming, he was a role model for no one—except Lowell. Lowell admired how his uncle used education to escape Saratoga’s confining standards. So he followed his uncle’s steps into Indiana University’s biology department.

At age seventeen, Lowell borrowed $100 from his father and left for Bloomington. He brought a small trunk of essentials, including bedding, one suit, and a bag to bring his dirty clothes home in every week. The English Gothic structures organized around IU’s stately clock tower felt otherworldly. As did the student body, which was seven times larger than the population of his hometown, and seemed filled with exceptionally smart people. Lowell believed himself to be among the most unsophisticated there, and he might have been. His English professor berated him for “atrocious” grammar and word choice. He couldn’t spell. And he couldn’t keep up with the rigors of his other courses. Discouraged and feeling outclassed, he played billiards and worked out in the gym. He joined a social fraternity, partied a lot, and followed IU’s championship basketball team around the region, by either hitching a ride or jumping the freight train. Once, a conductor dragged him and his buddies into the engine car and forced them to shovel coal. They didn’t mind the dirty, backbreaking work as much as getting caught. It meant they couldn’t jump the train to get home without risk of jail—which the conductor promised, if they got caught again. So they hitchhiked in freezing sleet.

That first year his dean put him on probation with a threat of expulsion.

If everyone has just one defining moment, this was Lowell’s. He could either work harder at academics or go home. That he threw himself into the former is a testament to his fear of the latter. He had to improve his grades or return to the farm and drag a plow. So he nailed himself to his desk, worked around the clock, almost never went home, and took classes through the summer. In doing so, he found he had a knack for science. He used books to bring context and content to his love of nature. The harder he tried, the better he did. He slowly proved to his professors that he could do the work, that he was a budding naturalist with a particular talent for entomology. No more wasting time. He found his calling, and he would take it seriously.

CHAPTER 2

The same year Lowell snapped from adolescence into a man of biology, a “brilliant but extremely difficult” psychiatrist by the name of Julius Wagner von Jauregg tried a memorable experiment that would soon transform the study of malaria.1

The experiment unfolded across the Atlantic Ocean, in the foothills of the Austrian Alps, while the Great War still raged. This difficult psychiatrist drew blood from a malaria-infected soldier just home from the Balkans and immediately injected the blood into the shoulder blade of another man. Jauregg’s stony face and fixed eyes remained unchanged as he pushed the entire contents into the bloodstream of the second man—a thirty-seven-year-old stage actor suffering from the shameful corkscrew bacteria of syphilis.2 The actor’s infection had reached his brain and turned him mad, so his family put him in Jauregg’s care. And Jauregg, by filling him with infected blood, hoped beyond all hope that this mental cripple—this babbling idiot prone to obscene acts and violent rants—would soon burn with fever. Then perhaps the madness would end.

The event took place back before Jauregg became the renowned grandfather of psychiatry. Before this tall, statuelike doctor became a Nobel laureate. And before he gave the world an excuse to use many thousands of syphilis patients in malaria experiments. This was back when he was just another man of medicine tied to psychiatry because he had failed to obtain the appointments necessary for internal medicine.3

The son of an Austro-Hungarian knight, Jauregg towered over his peers in personality and stature. All fell silent when he entered a room, “as if in the shadow of a titan.”4 His biographer traced this stern, authoritative persona to a once happy childhood gone bad after the untimely death of his mother from tuberculosis. When he was just ten, his two sisters were sent to convents and he and his brother moved with their father to Vienna, where they attended the city’s most renowned school for boys, with strict academic standards. Young Julius studied with sons of nobility and earned excellent grades in everything from Latin to the natural sciences, winning him entry into medical school, where he spent five years earning grades with distinction and studying under arrogant and undistinguished professors. He found his place in that sweet spot between intense competence and hard work, with an eye for brilliance in others. He resided where reason took precedence over sentiment, which made him somewhat caustic and short, but never overbearing and rarely wrong.

The famous scientist-turned-writer Paul de Kruif remembered Jauregg as a “piece of granite rock”; his biographer called him “gruff” but “generous,” with an “indestructible calmness”; assistants said he was “straight as a candle and as if chiseled in stone,” but kind and fatherly, with a “golden humor” and deep humanity.5 While his students called him a “wooden statue,” they attended his lectures in droves, and loved to see his eyes twinkle through that famously dour expression whenever he had cause to tell one of his many off-color jokes, which were always at the expense of Jews or degenerates.6

He wore his thick black hair neatly cropped and his handlebar mustache tightly curled and combed. He dressed conservatively in a pressed suit and crossed tie. High chiseled cheekbones framed a long, thin face that, with age, fell southward, creating a jowly, hound-dog effect. His striking appearance made a bold mark on the many medical meetings he attended, even among the famous, at large international conferences, which he called “church fairs.” He by far preferred small, productive gatherings of the Vienna Medical Society and other local medical organizations, where he often took charge of a discussion gone astray or gave sage insights during disputes.7 Everything about him showed off a man in control, but for the outsize mustache and bushy eyebrows, which hung like a shag rug over deep-set eyes.

In the thankless world of psychiatry he saw himself as a hybrid, a gentleman in the trenches—maybe like a pearl amid swine. He could have easily been like other psychiatrists: a man of medicine in title only, unable to practice his trade because scarcely any treatments improved mental illnesses. Psychiatry often meant life as an asylum superintendent, overseeing palliative care—a manager watching over the mentally afflicted until they finally passed away or hanged themselves. In the case of syphilis, death came soon after the bacteria entered the brain. No treatment could slow the progress, which made medical care perfunctory and meaningless.

But Jauregg, in his granite resolve, found this unacceptable. If psychiatry would be his profession, he would find a way to treat mental conditions, which he believed were caused by physiological functions gone awry—not emotional disturbances that could be cured by therapy.

He was among the first in his field to believe that infectious diseases caused, and could cure, different forms of mental illness. At night he went to the Vienna medical library to pore over international periodicals and study biological functions he suspected might be associated with cognitive failings. He conducted autopsies on the deceased to study their spinal fluid and nervous systems, looking for clues that might explain their conditions. He studied and wrote a paper on resuscitating patients who had attempted suicide by hanging, surmising that the convulsions and memory loss that followed probably stemmed from asphyxiation (not from emotional hysterics, as his peers believed). He also observed that the physical shock of being near death and then brought back to life appeared to have a positive effect on a patient’s mental state. He described one woman who had been melancholy and paranoid before she was cut down from the rope. After a few convulsions and some minor amnesia she was cured.8 To him, this was a physiological disturbance correcting another physiological disturbance. His patients weren’t crazy; they suffered from biological hiccups that led to mental catastrophes that were probably treatable, if enough study were dedicated to the effort.

For all these reasons, he took good care of his patients, and he even married one—a morphine addict.9 She was a degenerate by his definition, but still marriageable. This kind of hypocrisy seemed to make him whole. While he believed the insane should be sterilized so as not to procreate, he treated them well. While he told terrible jokes about Jews, he had many Jewish friends, assistants, and students. And while he made fun of the new psychoanalytical theories advanced by his Jewish friend Sigmund Freud, he respected and liked Freud and defended his theories, even as he disagreed with them. This may have been because Freud also showed uncommon respect for the mentally ill—something Jauregg valued above professional partisanship.

Only those who would disrespect patients felt his icy judgment.

One visitor to his clinic, according to his biographer, walked up to him as he towered over his ward wearing the usual white lab coat over a vest and cross-tie, and asked: “Excuse me, sir, where are the madmen?”

With his usual stone face, he pointed to the outside and said: “The madmen are out there,” then pointed to his ward and added, “In there are the sick men.”10

Their ailments humbled his sense of duty and challenged him to find novel treatments—including the use of malaria. He wasn’t just testing hunches on the insane because they were easy prey for medical experiments. He hoped to cure insanity. And he believed high fevers could do it.

This hypothesis was broadly shared. He and others had seen outbreaks of typhus, cholera, smallpox, and other fever-causing contagions burn through patients at their respective asylums, always leaving behind death and despair.11 But they also left some hope by turning one or two lunatics sane, sometimes for short periods, sometimes for good. Jauregg combined his own observations at the Asylum of Lower Austria in Vienna with those made by others at other asylums to write a “landmark” paper in 1887 that advanced the idea of inducing fevers as a possible cure for mental illness.12

In 1889, Jauregg tested this observed phenomenon by giving his patients streptococcal erysipelas, which caused skin eruptions and high fevers. The results were unclear and he lacked adequate time to flesh out the possibilities. So he encouraged others to take up the cause. He lectured on it and implored colleagues to pitch in—hoping someone would find a reliable way to induce fevers. Then he grew discouraged when he learned that one psychiatrist had used a poisonous ointment on the skin of patients’ shaven heads, and left it there for weeks, until red-hot inflammation and oozing pus immunologically forced the onset of fevers. Jauregg openly objected to the method. Others implanted horsehairs under the skin to “provoke abscesses,” or applied mustard plasters or Spanish fly, which he called cruel.13 Eventually he gave up on his fellow psychiatrists and drafted a new study design to try for himself.

But his next attempt didn’t set much of an example. He started it in 1890, when a research associate gave him a vial of tuberculin bacteria from Berlin.

Jauregg injected it into brain-damaged syphilis patients—a condition with many names but that was broadly referred to as general paresis, and the sufferers were called paretics. This type of neurosyphilis attacked the central nervous system and usually appeared ten to twenty years after exposure. Symptoms included decreased language and motor abilities, impaired judgment, hallucinations, delusions, violent mood swings, dementia, seizures, obscene behaviors, and muscle weakness that led to a telltale gait. Once these symptoms appeared, patients had two, maybe three years to live. Jauregg chose them as test subjects because of their dreaded condition. Why not use fevers to try to save them?

He infected sixty-nine so-called paretics with his vial of German TB and compared them to sixty-nine untreated paretics. Mental health improvements were observable in only those patients who reacted strongly—with particularly high fevers that raged for days. This suggested that intense fevers worked better than regular fevers.14

But Jauregg abandoned the project after he learned that patients he had sent home cured of syphilis and dementia later died of TB.15 Meanwhile, the ethical objections rolled in as “the entire Viennese press printed editorials caustically criticizing his work and holding him to be a potential murderer.”16 It didn’t matter that several of his colleagues repeated his experiments and produced the same promising results. If the scientific community saw his data as contaminated—because he mistreated patients—no one would publish his papers or advance his theories.

So he bided his time again. The TB work had narrowed the concept. He’d found his target group. Syphilitics became his cause, and he lectured cautiously about his discovery: “We cannot be reproached for using a procedure which is irrational. We have listened to nature; we have attempted to imitate the method by which nature itself produces cures.”17 He was sure of the concept, but the TB experiments instructed him to move forward gingerly, trying only dead bacillus and staphylococci, neither of which worked well because they failed to produce furnace-hot fevers for days on end.

For that, he theorized that malaria would work beautifully. And the infections could be easily controlled by quinine. His main problem was that Austria had no malaria and he didn’t know where to get it. Parasites that cause malaria couldn’t be grown or kept alive in petri dishes, so scientists couldn’t simply share vials of them.

He eventually moved to the Vienna General Hospital to run its Outpatient Department for Nervous Diseases—the first stop for patients with signs of late-stage syphilis, and usually a few short months before being committed to an asylum. There he continued testing his other theories, which included the use of iodine to prevent goiters and cretinism—work that led Austrian authorities to add iodine to salt.18 He started treating battle-traumatized soldiers with electric shock therapy that involved strapping them down and sending a jolt of electricity through their brains—with the hope that it would snap them out of their emotional hysterics.

Jauregg was developing a mixed reputation. On the one hand, he showed flashes of brilliance in thinking through physiological aspects of mental conditions—as evidenced by his iodine work. Even his fever therapy, while dangerous, was innovative and thorough. But the shock treatment was different. Jauregg treated soldiers as he was expected to. It mattered little whether they saw horrifying deaths and bloody dismemberments in grenadelike flashbacks that shattered their sense of safety. They belonged to a war-focused culture that needed men to be strong for the war effort.19 Jauregg’s job was to straighten them out so they could return to the front lines and fight for the Fatherland. This mind-set was Austrian; it was nationalistic; and it put the country before the individual.20 It also foreshadowed Jauregg’s political leanings, which eventually led him to the Nazi party in its early years, before Hitler occupied Austria.21

One soldier sued Jauregg’s clinic because of his assistant’s actions. The suit brought Jauregg’s operation and his competence into question—an unbearable burden for a brilliant man who saw himself as cautious and fair-minded. At his trial, his old friend Sigmund Freud came to his defense.22 Freud testified that all medical psychiatrists had been pressured to act like “machine guns behind the front lines, driving back those who fled.” He argued that while Jauregg may have hurt soldiers, the torture was unintended; Jauregg had actually been trying to treat them. And with that, the charges were dropped.

But it left Jauregg shaken. He was no monster. He tried to do no harm.

*   *   *

AMID the controversy, one of his medical colleagues reported that a soldier just admitted for minor nerve damage also shivered with vivax malaria.23 The doctor asked: “Shall I give him quinine?”

To which Jauregg replied, “No!”

Finally, he had a source of malaria.

On June 14, 1917, he drew the soldier’s blood and immediately injected it into his demented patient, the former actor.24 When the actor came down with malaria, Jauregg extracted his blood and injected it into another eight patients also with late-stage syphilis. Then he watched over them as they sweated through days of extreme fevers and shook with bone-cold chills.

One patient died; two worsened and were admitted to the asylum; four regained cognitive function but later relapsed. Only two appeared cured of dementia. One was a thirty-four-year-old man whose cognitive failings had only just begun that month. After eleven attacks of fever, he fully recovered and, at his own request, returned to his army regiment. The other was a thirty-nine-year-old man, also in the early stages of dementia. He suffered through ten attacks of fever before regaining cognitive function. And soon after, he returned to his job as a cleric.25 As for the actor—the first to receive the transfusion of infected blood—he, too, improved; he gave performances for patients at the asylum and was discharged. But Jauregg later received a letter from a Frankfurt doctor reporting that the patient had relapsed and had to be admitted to an asylum there.

The two patients Jauregg fully cured, however, made history.

*   *   *

SYPHILITICS filled asylums in the late nineteenth and early twentieth centuries, occupying an estimated 10 to 20 percent of beds.26 The disease first appeared in medical records as an epidemic that burned through Europe beginning in the early 1500s. People back then called it a “pox,” because after ulcers appeared on the genitals—today’s classic symptom—the sores spread to cover other parts of the body. Symptoms also included nerve damage that caused excruciating pain.27 People feared it was a new type of leprosy sent by God for man’s sexual liberties; others believed Columbus brought it back from the New World. Today, scientists believe syphilis grew from a skin disease called yaws—caused by the same bacterium as syphilis, called Treponema pallidum. The mutation from skin disease to venereal disease occurred just as Europe’s population had exploded, when there were more houses of prostitution, more sexual freedoms, and more warfare—with mass movement of armies and refugees to spread diseases.28 Over the next two centuries, severe symptoms of syphilis moderated, became less leprosylike, and therefore less terrifying and prophylactic. It had morphed into an annoyance that people could live with, which advanced its spread.

But then another mutation occurred, one that allowed the bacteria to cross the blood-brain barrier and infect the frontal lobe. Scientists now believe this happened somewhere around 1800, because within the next few decades physicians noticed a tangible increase in dementia among relatively young people. This new type of psychosis caused a somewhat sudden personality change that brought on wild mood swings followed by profanities, inappropriate sexual behavior, and muscle paralysis. Puritans blamed it on masturbation and alcoholism, while scientists tried to figure out the real cause. Meanwhile, those infected brought shame and disgrace on their families and were sent away to lunatic asylums, where they became wards of the state and a financial burden.

Finally, in the early twentieth century, the cause had been identified as untreated syphilis. It had burned through the middle classes at infection rates as high as 20 percent in Europe.29 So prevalent was syphilis that it spawned its own medical specialty, called syphilology. According to historian Joel T. Braslow, some European asylums reported that 45 percent of their male patients were there because of neurosyphilis. Percentages were much lower in the United States but still ran around 10 percent of male hospital patients.30

*   *   *

FINDING something even partially effective against this brain-destroying sexually transmitted disease earned Jauregg accolades and helped repair his damaged reputation.

To continue, he sought permission from a military hospital to take blood from a soldier just home from the fighting and sick with malaria. Jauregg didn’t examine the blood before injecting it into four of his syphilis patients. To his horror, the four infections spiraled out of control. Despite large doses of emergency-level intravenous quinine, one patient’s red blood cell count dropped precipitously. After thirty-one days of Jauregg trying frantically to control the malaria, the patient died. By that point, Jauregg knew he had accidentally used deadly falciparum. Two others also died. Only one patient survived after forty-five days of heavy intravenous quinine. Weeks passed before Jauregg saw the silver lining: his sole survivor had been fully cured of insanity.31

By now, Jauregg had grown old. At age sixty his tall, stiff build began to hunch. His stern, stony manner—once considered serious and erudite, forceful and commanding—now came off as grim and difficult. His face appeared longer than ever, drawn down by that thick handlebar mustache and topped by wiry gray shoots coming off his shaggy eyebrows. But those eyes, those sunken, beady eyes, must have lit up at the reality that he had discovered a cure for syphilis. He was right about malaria—it could cure this type of dementia, and maybe other types as well. He may have even bested Salvarsan, the great Paul Ehrlich’s magic bullet, which really poisoned more patients than it helped.

So while the falciparum fiasco shook him, he continued working with malaria.

CHAPTER 3

On the other side of the Atlantic, in 1923, Lowell’s stellar grades earned him invitations into Phi Beta Kappa and Alpha Xi, a place at the top of his class, and acceptance into IU’s doctoral program in zoology. To cover tuition he taught several courses, including undergraduate ornithology. He walked students through the woods, pointing out birds he’d just seen in books that morning while preparing for class. Bird-watching was something Lowell informally did as a kid while avoiding daily chores. Now he had books to put names to species he’d observed his whole life. He took up taxonomy to catalog species for his professors.

Lowell spent his summers at IU’s biological station on Winona Lake—about 130 miles north of Bloomington.1 In exchange for room and board he mopped floors and cleaned windows—and ate fish from the lake, which he cooked outside in an open pit. On weekends he earned money ushering at a nearby tabernacle where firebrand evangelist Billy Sunday preached, and the Redpath Chautauqua ran dances and cultural events. With Prohibition in full force, meetinghouses and Bible thumping gave people something to do, and allowed Lowell to earn cash for school.

He took the job on Winona Lake not for love of limnology—the study of freshwater ecology—but because a limnology professor he liked offered him the position. In return, Lowell worked hard. And he did well. He figured out a clever way to count bluegill eggs in the lake’s shallows, feeding important data into his professor’s project—counting the lake’s bluegill population, the results of which were presented by his professor, Carl Eigenmann (dean of IU’s zoology department) at the Ecological Society of America’s December 1923 meeting in Cincinnati.2 When an international limnology expert, Edward A. Birge, needed a graduate student to do similar work the following summer, Lowell’s IU professor recommended him.

That’s how Lowell ended up working for the somewhat famous Professor Birge—University of Wisconsin’s president and a nationally recognized defender of evolution in an ongoing, publicized debate with creationist presidential candidate (and later secretary of state) William Jennings Bryan. Birge’s helmet of white hair and gray walrus mustache gave the only clues that he’d reached old age. Otherwise, the tall, athletic water expert remained intellectually and physically nimble. Lowell couldn’t believe his good luck when this great man invited him to Wisconsin for the following summer.

In brisk morning fog, Birge and Lowell rowed to targeted sections of Green Lake and dropped anchor. Then the professor attached lead weights to Lowell’s waist and ankles, plugged an air hose to Lowell’s bulky diver’s helmet, and dropped him over the side. As Lowell’s wiry limbs disappeared in the frigid water, Birge stayed on top operating the air pump. He’d reel Lowell up after a full five minutes, an impressive duration for anyone, and especially for a kid with no body fat. But Lowell didn’t complain. He simply came back to the surface—all bony and determined—warmed his goose bumps in the morning sun, and then plunged back into the dark waters. In the afternoons he helped Professor Birge and another limnology pioneer, Chancey Juday, sort through the specimens. All summer, Lowell lived in a tent on the lake’s shore, learning the language and methods of world-class scientific exploration—and, no doubt, tricks for reconciling Darwinism with Billy Sunday’s preaching. Most important, he earned a reputation as a hard worker with a good moral compass.

*   *   *

 
