
The contested story of how the U.S. eradicated malaria

By Stephanie Sammann and Erica Corder

An up-close view of blood cells, with a transparent yellow overlay.
Electron micrograph of red blood cells infected with Plasmodium falciparum, the parasite that causes malaria in humans. (Image: Rick Fairhurst and Jordan Zuspann, National Institute of Allergy and Infectious Diseases, U.S. National Institutes of Health / Illustration: MODULUS)

We don’t hear much about malaria in Western media, unless we’re talking about how it’s impacting tropical or humid regions elsewhere. But places like the U.S. and Europe used to be riddled with the mosquito-borne disease.

In the U.S., in particular, it debilitated towns, caused thousands of deaths, and even determined the settlement patterns of the country. But now, it has been so thoroughly erased that many people don’t even know it used to exist here at all.

As of 2017, 87 countries had ongoing malaria transmission, most of them in Southeast Asia, the Eastern Mediterranean, parts of Central and South America, and Africa, which carries most of the global malaria burden.

So how did it disappear in the U.S. — and continue to thrive elsewhere?

It starts with a parasite

To understand the spread of malaria in the U.S., it’s important to consider the life cycle and behavior of the Plasmodium parasite that causes malaria and its vector, the mosquito.

A mosquito sitting on skin.
A female Anopheles albimanus mosquito. (Image: Centers for Disease Control and Prevention’s Public Health Image Library)

The various species of Plasmodium parasites that cause malaria are believed to have originated in apes and to have been transmitted to humans via the Anopheles mosquito.

When an infected female Anopheles mosquito bites a person, Plasmodium parasites are injected into the bloodstream as sporozoites, the stage of the parasite that first establishes infection in a host. The human host shows no symptoms for seven to 10 days.

Those sporozoites travel to the liver, where they undergo many rounds of division and multiplication inside liver cells. A single infected liver cell can lead to the creation of thousands of new parasites.

A graphic illustrating the life cycle of the parasite.
The lifecycle of malaria. (Illustration: U.S. Centers for Disease Control and Prevention)

These new parasites then migrate to infect red blood cells, where they can hide from the body’s immune system. Here, they consume the contents of the red blood cell, and divide to create even more parasites.

Eventually the red blood cell they are inhabiting ruptures, and the new parasites, called merozoites, are released. These continue the cycle by invading other red blood cells, which subsequently also rupture.

Parasites living in the bloodstream cause the symptoms of malaria, which can range from headaches and fever to seizures and death if the parasites clog the small blood vessels of the brain and kidneys.

Some of the parasites in the blood then develop into gametocytes, another blood stage of the parasite, which are ingested when an Anopheles mosquito takes its next blood meal.

Inside the gut of the mosquito, the parasites mate and begin a cycle of growth and multiplication. A week or two later, the parasite migrates to the mosquito’s salivary glands. When the mosquito then feeds on another human, anticoagulant saliva is injected together with sporozoites into a human — once again starting a new cycle and spreading malaria.

The unharmed mosquito serves as a vector from human to human, while infected humans introduce the parasite to more mosquitoes.

Malaria alters the colonization of the U.S.

Infections from the Plasmodium parasite have threatened human lives for at least several millennia, if not longer.

Historians theorize that European colonizers were the first to bring the disease to the U.S., but that malaria didn’t gain traction in the country until the 17th century, with the arrival of slave ships carrying more virulent strains of malaria from Africa.

A mosquito encased in amber.
This preserved mosquito carried a type of Plasmodium parasite that could infect birds. Found in amber from the Dominican Republic, the discovery lends support to the idea that malaria was established in the Americas at least 15 million years ago. (Image: Oregon State University / Flickr)

Once the parasite was introduced to North American shores, places like the wet, low-lying plains of Virginia and South Carolina were overrun with the disease.

Though the Carolinas were initially thought of as a land of paradise by European settlers, malaria’s prevalence quickly changed that. An English proverb at the time said “Those who want to die quickly, go to Carolina.”

Incoming immigrants thus labeled certain colonies as healthy and others as dangerous.

The Caribbean was understood to be the most dangerous, with Florida and the Carolinas being a close second. The Chesapeake was somewhat better, but European settlers fared the best in the northern colonies of New York, New England, and Pennsylvania.

Because most enslaved African people had some tolerance to malaria and indentured laborers from Europe did not, demand for slave labor in malarious areas increased.

Along with other prevalent diseases like yellow fever, malaria influenced the labor patterns and the settlement of the colonies — both of which would indirectly lead to the Civil War.

Chaos as malaria sweeps the nation

Malaria wreaked havoc in America for centuries. By World War I, malaria was a huge problem, especially for the military.

A chart of admissions per thousand men per year for the army in the continental U.S.
From 1917 to 1919, soldiers in training who picked up malaria were admitted to infirmaries in large numbers. (Illustration: Real Science)

Men training in the South were picking up the disease at an alarming rate. A total of 10,510 admissions for malaria were reported among the Zone of Interior troops from April 1917 through December 1919, amounting to a loss of 130,673 training days.

A map of the U.S. with the midwest and South highlighted.
In 1882, malaria was concentrated in the midwestern area of the United States. By 1934, it was primarily found in the South. (Illustration: Real Science)

By the 1930s, malaria was concentrated in 13 of the country’s southeastern states, with well over a million cases nationwide. The U.S. was descending into the Great Depression, and poverty skyrocketed while fewer and fewer people had access to adequate healthcare and nutrition. These conditions allowed malaria to thrive, and by 1933, malaria deaths in America reached a new peak.

Wartime posters advocating for preventative measures against malaria, circa the early 1940s. (Images: National Archives and Records Administration)

But over the next decade, when malaria should have boomed, it instead retreated, until the disease ultimately disappeared. The exact reason why still isn’t clear. To this day, scientists and historians argue fiercely about which of the many factors was the key to its eradication.

There were two camps of opinion on the best way to get rid of malaria: attack the parasite inside the human body, or eliminate the vector for the disease — the mosquito.

The medicinal approach

If it were possible to treat all members of a community at once and eradicate the parasite within them, then malaria could be wiped out, even if the adult mosquitoes continued feeding.

For hundreds of years, people were aware that quinine, obtained from the bark of the cinchona tree, could be used to alleviate the symptoms of and even prevent malaria.

So in 1916, scientists carried out a study on 500,000 people in malaria-plagued Mississippi to see if quinine could be used as an effective treatment.

A poster that reads “Quinine kills malaria germs. Daily doses during the malaria season help keep malaria away.”
U.S. Public Health Service poster advertising quinine. (Image: Library of Congress)

They gave out doses of quinine for free, and indeed found that it reduced malaria infections by 90 percent. However, even though high doses of quinine were good for quickly ending an episode of fever and chills, they found that people would not take enough quinine on a regular basis to prevent infection long-term because of its side effects.

Therefore, it was useful for interrupting an infection and relieving symptoms, but more often than not, the infection would come right back. Distributing quinine in large quantities clearly helped, but it would not be the complete answer to getting rid of malaria.

In 1934, another antimalarial compound was discovered: chloroquine. Chloroquine is more effective than quinine, and it was introduced into widespread clinical practice in 1947.

It is not fully understood how chloroquine kills the parasites, but it is thought to work by blocking the action of a chemical that the parasites produce to protect themselves against the iron-containing heme molecule inside the red blood cells.

This most certainly played a role in malaria’s demise in America, but its widespread use only came a decade after the initial decline in the disease — meaning it’s only one piece of the puzzle. It would take more than medicating the population to get rid of this persistent disease.

Killing the vectors

Once it became known that the mosquito was the vector for malaria, many people believed killing it would be the way to eliminate the disease.

But killing the adult stage of the mosquito is challenging and, at the time, there weren’t any reliable methods for doing so. Many believed it was best to attack the mosquito at the most vulnerable point in its life cycle, the larval stage.

Mosquitoes lay their eggs in marshy, stagnant water; the eggs hatch into larvae, which eventually develop into full-grown, malaria-spreading mosquitoes. One way to hit them where it hurts, then, is to eliminate their breeding grounds — the marshy, stagnant water.

Unluckily for the mosquitoes, the 1930s brought a wave of public works projects intended to boost the economy, including malaria control. The Works Progress Administration, part of the New Deal, put people to work digging 32,000 miles of ditches and draining 623,000 acres of standing water.

An old photograph depicting several men working in a muddy pit filled with water.
Workers in the southern United States dig a drainage ditch to disperse standing water and reduce breeding grounds for Anopheles mosquitoes, circa 1920s. (Image: Centers for Disease Control and Prevention’s Public Health Image Library)

When draining was not possible, workers smothered the larvae by coating the surfaces of ponds with oil and poisoned them by dusting their habitat with an arsenic-based compound called Paris Green. These efforts coincided with a sharp decline in malaria transmission in the 1930s.

But by 1940, malaria, while less common than it once was, still persisted. At this point, many of the Malaria Control Drainage Projects were discontinued.

A photograph of workers as they’re being painted by an artist.
Michigan artist Alfred Castagne (right) sketches Work Projects Administration construction workers on May 19, 1939. (Image: National Archives, Records of the Work Projects Administration)

However, the U.S. Army still trained its soldiers in many areas of the southeastern U.S. where malaria had a grip. Not wanting to repeat the hard lessons of World War I, the military undertook large-scale antimalaria operations from 1941 to 1946.

The effort to eradicate malaria was now a national defense priority. Between 1941 and 1944, 40,000 acres of surface water were eliminated, 4.7 million gallons of diesel oil larvicide were used, and $9.8 million was spent on this all-out war against malaria (that’s $141,499,045 in today’s money).
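That parenthetical conversion implies an inflation multiplier of roughly fourteenfold, which follows directly from the two figures quoted above:

$$\frac{\$141{,}499{,}045}{\$9{,}800{,}000} \approx 14.4$$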

Then, in 1944, one of the most effective mosquito killers of all time arrived: DDT, a compound effective both as a larvicide and as an insecticide against adult mosquitoes. It could be dispersed by manually operated sprayers, from fogging equipment, or from airplanes.

Thanks in part to DDT, by 1945, malaria transmission in the U.S. had dropped significantly, and the disease’s days in the U.S. were numbered.

A plane flying over a field as dust trails behind.
A dusting plane conducts a test using Paris Green to determine how far the dust drifts and how much of it settles to the ground. (Image: U.S. Department of Agriculture, National Agricultural Library)

Then, in 1946, the precursor to today’s Centers for Disease Control and Prevention was born. Named the Communicable Disease Center, this early iteration of the CDC had the primary mission of ridding America of malaria once and for all.

During the CDC’s first few years, more than 6.5 million homes were sprayed with DDT. It was applied to the interior surfaces of rural homes and entire premises in counties where malaria was still common.

Two researchers look at a microscope in a laboratory.
Aimee Wilcox and laboratory director Seward Miller at the Communicable Disease Center’s laboratory at 291 Peachtree Street in Atlanta, Georgia. Photo published in 1945. (Image: David J. Sencer CDC Museum at the U.S. Centers for Disease Control and Prevention and The Global Health Chronicles)

This, along with even more wetland drainage, pushed the disease out of existence in the U.S.

In 1947, 15,000 malaria cases were reported. By 1950, only 2,000 cases were reported, and by 1951, malaria was considered eliminated from the country altogether.

Translating the success to the rest of the world

The onslaught of DDT, drainage works, habitat oiling, and preventative medication had finally worked. However, dozens of other factors also contributed to malaria’s death in America.

Some historians firmly believe that the key factor in its eradication was actually population movement away from rural areas, while others think it was simply better education about the disease that did the trick.

A group of students look to a teacher and poster at the front of the classroom.
Students learn about anatomy and hygiene in a Gee’s Bend, Alabama, school, 1939. (Image: Marion Post Wolcott, Library of Congress Prints and Photographs Division)

Still other theories assert that it was general economic improvement and the installation of screens on houses, or even a massive drought, that led to malaria’s demise.

This multi-pronged attack on the disease has left scientists unsure of how best to translate these results to other parts of the world today.

This, along with the fact that many of the approaches taken in America cannot or should not be undertaken in other parts of the world, has made eliminating malaria globally a massive, still-unsolved problem.

Most can agree that draining marshy areas contributed, at least in some measure, to malaria’s decline in the U.S. However, in Africa, where most of the world’s malaria cases occur today, such methods are not feasible because the mosquitoes there breed in small pools of water that form from rainfall, scattered across the landscape. It is difficult, if not impossible, to predict when and where these breeding sites will form, and to find and treat them before the adult mosquitoes emerge.

DDT, too, was instrumental in eliminating malaria in America, since it is so effective at killing mosquitoes. Unfortunately, the insecticide is also potentially harmful to humans. The U.S. Centers for Disease Control and Prevention labeled it a possible human carcinogen.

It was banned in the U.S. in 1972 and in most corners of the world in the 1970s and ’80s. Although it is still used in some places, scientists urge that it be reserved as a last resort for combating malaria.

Distributing anti-malaria medicine would also certainly help if everyone vulnerable to the disease had plentiful access to it. But the cost can be prohibitive, and while some progress can be made with this method, differences in regional geography and climate across places like Africa mean that gains made in one area may be lost as the disease returns from surrounding areas.

Because of these problems, and because of emerging drug and insecticide resistance, new approaches must be taken in this global battle. Organizations like the Bill & Melinda Gates Foundation are working towards just that, putting their resources in large part towards better data collection, along with research and development aimed at new medicines and vaccines.

Recently, scientists have been experimenting with genetically modified mosquitoes intended to drastically reduce mosquito populations in the wild. It’s a bold new development in the battle against mosquito-borne illness, and much remains to be seen about this promising new technology.

No single strategy to combat malaria will ever be effective everywhere, and unfortunately there is no silver bullet for this centuries-old problem. But with long-term commitment and a flexible strategy, along with much more funding, we may one day see regions across the globe achieve the same success the U.S. found.


MODULUS is an online publication for curious minds seeking behind-the-scenes insight about the engineering, science, and technology of our time.
