It’s been a year since the total solar eclipse of August 21, 2017, captured millions of imaginations as the moon briefly blotted out the sun and cast a shadow that swept across the United States from Oregon to South Carolina.
“It was an epic event by all measures,” NASA astrophysicist Madhulika Guhathakurta told a meeting of the American Geophysical Union in New Orleans in December. One survey reports that 88 percent of adults in the United States — some 216 million people — viewed the eclipse either directly or electronically. Among those were scientists and citizen scientists who turned their telescopes skyward to tackle some big scientific mysteries, solar and otherwise. Last year, Science News dove deep into the questions scientists hoped to answer using the eclipse. One year out, what have we learned?
The eclipse sent ripples through Earth’s atmosphere. Normally, the sun’s radiation splits electrons from atoms in the atmosphere, forming a charged layer called the ionosphere, which stretches from 75 to 1,000 kilometers up. But when sunlight briefly disappears during an eclipse, the electrons rejoin their atoms, creating a disturbance in the ionosphere that is detectable with receivers on the ground (SN Online: 8/13/17).
The moon’s supersonic shadow produced a bow wave of atoms piling up in the ionosphere, similar to the wave at the prow of a boat, Shun-Rong Zhang of MIT’s Haystack Observatory in Westford, Mass., reported in December. Although such bow waves were predicted in the 1960s, this was the first time they were definitively observed. The eclipse also sent a wave traveling through the thermosphere, an uncharged layer of the atmosphere about 250 kilometers high, that was observed from as far away as Brazil nearly an hour after the eclipse ended (SN: 5/26/18, p. 14). And measurements of temperature, wind speed and sunlight intensity showed that the eclipse briefly changed the weather along the path of darkness.
Showing Einstein was right is not so simple. Physicists chased the moon’s shadow to redo the iconic experiment that showed Einstein’s theory of general relativity was correct (SN Online: 8/15/17). In Einstein’s view, the sun’s mass should warp spacetime enough that the positions of stars should appear to be slightly different during an eclipse. During the May 1919 solar eclipse, British astronomer Arthur Stanley Eddington took photographs that proved Einstein right.
During the 2017 eclipse, almost a century later, amateur astronomer Donald Bruns of San Diego made similar measurements with modern equipment and came to the same conclusion as Eddington: The apparent positions of stars near the eclipsed sun were shifted just as general relativity predicts. Bruns published his results in Classical and Quantum Gravity in March.
But astrophysicist Bradley Schaefer of Louisiana State University in Baton Rouge and others had far more difficulty reproducing the measurement with enough precision to show that Einstein was right. “‘Bummer’ is an understatement,” Schaefer says. “This all may have been for naught.”
Schaefer had enough trouble that he thinks it may have been impossible for Eddington to get the precision he claimed. The earlier astronomer may have hit upon the right answer by luck, not because he actually measured it.
Infrared light will help measure the corona’s magnetic field. Some eclipse experiments didn’t revolutionize our understanding of the sun on their own, but will enable future ones to pull back the veil. One of these was the first infrared observations of the sun’s corona, the shimmering halo of hot, diffuse plasma that is only visible in its entirety during a total solar eclipse. The shape and motion of all that plasma are guided by magnetic fields, but the corona’s magnetic field is so weak that it has never been measured directly (SN Online: 8/16/17). Previous studies suggested that infrared wavelengths of light might be particularly sensitive to the corona’s magnetic field. So two groups chased the August 2017 eclipse in airplanes to get some infrared observations. Amir Caspi of the Southwest Research Institute in Boulder, Colo., and his colleagues took the first infrared image of the entire corona. Flying in another aircraft, Jenna Samra of Harvard University measured the corona in five specific wavelengths, one of which had never been seen before. Comparing those results with observations taken from the ground in Casper, Wyo., (where I watched the eclipse) showed that those wavelengths are bright enough that a telescope now under construction in Hawaii will be able to help map the corona’s magnetism (SN Online: 5/29/18).
Figuring out what heats the corona will take more work. Almost every experiment aimed at the eclipsed sun last August had some bearing on the biggest solar mystery of all: Why is the corona so hot? The solar surface simmers at around 5500° Celsius, but the corona — despite being farther away from the solar furnace and made of much more diffuse material — rages at millions of degrees.
One year after the Great American Eclipse, scientists are still scratching their heads. Caspi’s team searched for waves rippling through the corona, which could distribute energy far from the solar surface. Those waves could also help comb out magnetic tangles in the corona and explain its well-ordered look (SN Online: 8/17/17).
In a complementary measurement, the group in Wyoming saw signs of neutral helium atoms in the corona, says solar physicist Philip Judge of the National Center for Atmospheric Research in Boulder. Those uncharged atoms probably represent cool material embedded in the corona (SN Online: 6/16/17).
Similar cool spots have been seen during earlier eclipses, although it’s hard to imagine how the cool atoms survive in the searing heat, like ice cubes remaining solid in hot soup. But collisions between charged ions and neutral atoms could help convert ordered motions, like Caspi’s waves, into coronal heat.
The results so far are interesting, but inconclusive, Caspi says. “It’s certainly possible we will get some very interesting results from this set of observations alone,” he says. But for such a big problem as coronal heating, eclipse observations may play a supporting role to more direct measurements, such as those that the recently launched Parker Solar Probe will make (SN Online: 8/12/18).
People are already looking to the next eclipse. A survey done by researchers at the University of Michigan found that eclipse watchers sought more information about eclipses and the scientific questions involved an average of 16 times in the three months following the event.
Several research groups are planning observations for the next total eclipses, visible in South America in July 2019 and December 2020 (SN: 8/5/17, p. 32). Caspi and Samra’s teams both hope to fly through those eclipses in aircraft again.
And amateurs and pros alike are preparing for the Great American Eclipse version 2.0, which will cross from Texas to Maine in 2024.
Particle accelerator technology has crested a new wave.
For the first time, scientists have shown that electrons can gain energy by surfing waves kicked up by protons shot through plasma. In the future, the technique might help produce electron beams at higher energies than currently possible, in order to investigate the inner workings of subatomic particles.
Standard particle accelerators rely on radiofrequency cavities, metallic chambers that create oscillating electromagnetic fields to push particles along. With the plasma wave demonstration, “we’re trying to develop a new kind of accelerator technology,” says physicist Allen Caldwell of the Max Planck Institute for Physics in Munich. Caldwell is a spokesperson for the AWAKE collaboration, which reported the results August 29 in Nature.

In an experiment at the particle physics lab CERN in Geneva, the researchers sent beams of high-energy protons through a plasma, a state of matter in which electrons and positively charged atoms called ions commingle. The protons set the plasma’s electrons jiggling, creating waves that accelerated additional electrons injected into the plasma. In the study, the injected electrons reached energies of up to 2 billion electron volts over a distance of 10 meters.
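For a rough sense of scale, the figures above imply an average accelerating gradient, sketched below; the value assumed for a conventional radiofrequency cavity is an illustrative ballpark, not a number from the AWAKE study.

```python
# Back-of-the-envelope sketch (not from the study): the average accelerating
# gradient implied by a ~2 GeV energy gain over ~10 m of plasma, compared with
# an assumed ~30 MV/m figure for a conventional radiofrequency cavity.

energy_gain_eV = 2e9      # reported electron energy gain, in electron volts
plasma_length_m = 10.0    # reported acceleration distance, in meters

gradient_MV_per_m = energy_gain_eV / plasma_length_m / 1e6
print(f"Implied average gradient: {gradient_MV_per_m:.0f} MV/m")  # ~200 MV/m

assumed_rf_gradient_MV_per_m = 30  # hypothetical conventional-cavity gradient
print(f"About {gradient_MV_per_m / assumed_rf_gradient_MV_per_m:.0f} times the assumed RF figure")
```

Under those assumptions, the plasma wave delivers several times the push per meter of a conventional cavity, which is the appeal of the approach.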
“It’s a beautiful result and an important first step,” says Mark Hogan, a physicist at SLAC National Accelerator Laboratory in Menlo Park, Calif., who studies plasma wave accelerators.
Scientists have previously demonstrated the potential of plasma accelerators by speeding up electrons using waves set off by a laser or by another beam of electrons, rather than protons (SN: 5/8/10, p. 28). But proton beams can carry more energy than laser or electron beams, so electrons accelerated by protons’ plasma waves may be able to reach higher energies in a single burst of acceleration. The new result, however, doesn’t yet match the energies produced in previous plasma accelerators. Instead, the study is just a first step, a proof of principle showing that proton beams can drive plasma wave accelerators.
High-energy electrons are particularly useful for particle physics because they are elementary particles — they have no smaller constituents. Protons, on the other hand, are made up of a sea of quarks, resulting in messier collisions. And because each quark carries a small part of the proton’s total energy, only a fraction of that energy goes into a collision. Electrons, however, put all their oomph into each smashup.
But electrons are hard to accelerate directly: If put in an accelerator ring, they rapidly bleed off energy as they circle, unlike protons. So AWAKE starts with accelerated protons, using them to get electrons up to speed.
Prior to the experiment, there was skepticism over whether the plasma could be controlled well enough for an effort like AWAKE to work, says physicist Wim Leemans of Lawrence Berkeley National Laboratory in California, who works on laser plasma accelerators. “This is very rewarding to see that, yes, the plasma technology has advanced.”
We now have the most precise estimates for the strength of gravity yet.
Two experiments measuring the tiny gravitational attraction between objects in a lab have measured Newton’s gravitational constant, or Big G, with an uncertainty of only about 0.00116 percent. Until now, the smallest margin of uncertainty for any G measurement has been 0.00137 percent.
The new set of G values, reported in the Aug. 30 Nature, is not the final word on G. The two values disagree slightly, and they don’t explain why previous G-measuring experiments have produced such a wide spread of estimates (SN Online: 4/30/15). Still, researchers may be able to use the new values, along with other estimates of G, to discover why measurements for this key fundamental constant are so finicky — and perhaps pin down the strength of gravity once and for all.

The exact value of G, which relates mass and distance to the force of gravity in Newton’s law of universal gravitation, has eluded scientists for centuries. That’s because the gravitational attraction between a pair of objects in a lab experiment is extremely small and susceptible to the gravitational influence of other nearby objects, often leaving researchers with high uncertainty about their measurements.

The current accepted value for G, based on measurements from the last 40 years, is 6.67408 × 10^−11 meters cubed per kilogram per square second. That figure is saddled with an uncertainty of 0.0047 percent, making it thousands of times more imprecise than other fundamental constants — unchanging, universal values such as the charge of an electron or the speed of light (SN: 11/12/16, p. 24). The cloud of uncertainty surrounding G limits how well researchers can determine the masses of celestial objects and the values of other constants that are based on G (SN: 4/23/11, p. 28).

Physicist Shan-Qing Yang of Huazhong University of Science and Technology in Wuhan, China, and colleagues measured G using two instruments called torsion pendulums. Each device contains a metal-coated silica plate suspended by a thin wire and surrounded by steel spheres. The gravitational attraction between the plate and the spheres causes the plate to rotate on the wire toward the spheres.
But the two torsion pendulums had slightly different setups to accommodate two ways of measuring G. With one torsion pendulum, the researchers measured G by monitoring the twist of the wire as the plate angled itself toward the spheres. The other torsion pendulum was rigged so that the metal plate dangled from a turntable, which spun to prevent the wire from twisting. With that torsion pendulum, the researchers measured G by tracking the turntable’s rotation.
To make their measurements as precise as possible, the researchers corrected for a long list of tiny disturbances, from slight variations in the density of materials used to make the torsion pendulums to seismic vibrations from earthquakes across the globe. “It’s amazing how much work went into this,” says Stephan Schlamminger, a physicist at the National Institute of Standards and Technology in Gaithersburg, Md., whose commentary on the study appears in the same issue of Nature. Conducting such a painstaking set of experiments “is like a piece of art.”
These torsion pendulum experiments yielded G values of 6.674184 × 10^−11 and 6.674484 × 10^−11 meters cubed per kilogram per square second, both with an uncertainty of about 0.00116 percent.
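For a sense of how those two numbers compare with their quoted precision, here is a quick sketch using only the figures above; the text does not say which value came from which pendulum, so the labels below are generic.

```python
# Minimal check using only the numbers quoted above. Which value came from
# which pendulum setup is not specified here, so they are labeled generically.

G_value_1 = 6.674184e-11   # meters cubed per kilogram per square second
G_value_2 = 6.674484e-11
relative_uncertainty = 1.16e-5   # about 0.00116 percent for each value

relative_gap = (G_value_2 - G_value_1) / G_value_1
print(f"Gap between the two values: {relative_gap * 100:.4f} percent")        # ~0.0045
print(f"Quoted uncertainty of each: {relative_uncertainty * 100:.5f} percent")
print(f"Gap is roughly {relative_gap / relative_uncertainty:.1f} times each uncertainty")
```

The gap of roughly 0.0045 percent is a few times larger than either value’s individual uncertainty, which is what the slight disagreement mentioned above refers to.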
This record precision is “a fantastic accomplishment,” says Clive Speake, a physicist at the University of Birmingham in England not involved in the work, but the true value of G “is still a mystery.” Repeating these and other past experiments to identify previously unknown sources of uncertainty, or designing new G–measuring techniques, may help reveal why estimates for this key fundamental constant continue to disagree, he says.
Now that heart recipients can realistically look forward to leaving the hospital and taking up a semblance of normal life, the question arises, what kind of semblance, and for how long? South Africa’s Dr. Christiaan Barnard, performer of the first heart transplant, has a sobering view…. “A transplanted heart will last only five years — if we’re lucky.” — Science News, September 14, 1968
Update: Barnard didn’t need to be so disheartening. Advances in drugs that suppress the immune system and keep blood pressure down have helped to pump up life expectancy after a heart transplant. Now, more than half of patients who receive a donated ticker are alive 10 years later. A 2015 study found 21 percent of recipients still alive 20 years post-transplant. In 2017, nearly 7,000 people across 46 countries got a new heart, according to the Global Observatory on Donation and Transplantation.
New images of gas churning inside an ancient starburst galaxy help explain why this galactic firecracker underwent such frenzied star formation.
Using the Atacama Large Millimeter/submillimeter Array, or ALMA, researchers have taken the most detailed views of the disk of star-forming gas that permeated the galaxy COSMOS-AzTEC-1, which dates back to when the universe was less than 2 billion years old. The telescope observations, reported online August 29 in Nature, reveal an enormous reservoir of molecular gas that was highly susceptible to collapsing and forging new stars. COSMOS-AzTEC-1 and its starburst contemporaries have long puzzled astronomers, because these galaxies cranked out new stars about 1,000 times as fast as the Milky Way does. According to standard theories of cosmology, galaxies shouldn’t have grown up fast enough to be such prolific star-formers so soon after the Big Bang.
Inside a normal galaxy, the outward pressure of radiation from stars helps counteract the inward pull of gas’s gravity, which pumps the brakes on star formation. But in COSMOS-AzTEC-1, the gas’s gravity was so intense that it overpowered the feeble radiation pressure from stars, leading to runaway star formation. The new ALMA pictures unveil two especially large clouds of collapsing gas in the disk, which were major hubs of star formation. “It’s like a giant fuel depot that built up right after the Big Bang … and we’re catching it right in the process of the whole thing lighting up,” says study coauthor Min Yun, an astronomer at the University of Massachusetts Amherst.
Yun and colleagues still don’t know how COSMOS-AzTEC-1 stocked up such a massive supply of star-forming material. But future observations of the galaxy and its ilk using ALMA or the James Webb Space Telescope, set to launch in 2021, may help clarify the origins of these ancient cosmic monsters (SN Online: 6/11/14).
One fine Hawaiian day in 2015, Geoff Zahn and Anthony Amend set off on an eight-hour hike. They climbed a jungle mountain on the island of Oahu, swatting mosquitoes and skirting wallows of wild pigs. The two headed to the site where a patch of critically endangered Phyllostegia kaalaensis had been planted a few months earlier. What they found was dispiriting.
“All the plants were gone,” recalls Zahn, then a postdoctoral fellow at the University of Hawaii at Manoa. The two ecologists found only the red flags placed at the site of each planting, plus a few dead stalks. “It was just like a graveyard,” Zahn says.
The plants, members of the mint family but without the menthol aroma, had most likely died of powdery mildew caused by Neoerysiphe galeopsidis. Today the white-flowered plants, native to Oahu, survive only in two government-managed greenhouses on the island. Why P. kaalaensis is nearly extinct is unclear, though both habitat loss and powdery mildew are potential explanations. The fuzzy fungal disease attacks the plants in greenhouses, and the researchers presume it has killed all the plants they’ve attempted to reintroduce to the wild.
Zahn had never encountered extinction (or near to it) so directly before. He returned home overwhelmed and determined to help the little mint. Just like humans and other animals, plants have their own microbiomes, the bacteria, fungi and other microorganisms living on and in the plants. Some, like the mildew, attack; others are beneficial. A single leaf hosts millions of microbes, sometimes hundreds of different types. The ones living within the plant’s tissues are called endophytes. Plants acquire many of these microbes from the soil and air; some are passed from generation to generation through seeds.
The friendly microbes assist with growth and photosynthesis or help plants survive in the face of drought and other stressors. Some protect plants from disease or from plant-munching animals. Scientists like Zahn are investigating how these supportive communities might help endangered plants in the wild, like the mint on the mountain, or improve output of crops ranging from breadbasket wheat to tropical cacao.
Beyond the garden store
Certain microbial plant partners are well-known, and there are scores of microbial products already on the market. Gardeners, for instance, can spike their watering pails with microbes to encourage flowering and boost plant immunity. But “we know very little about how the products out there actually do work,” says Jeff Dangl, a geneticist at the University of North Carolina at Chapel Hill. “None of those garden supply store products have proven useful at large scale.”
Big farms can use microbial treatments. The main one applied broadly in large-scale agriculture helps roots collect nitrogen, which plants use to produce chlorophyll for photosynthesis, Dangl says.
Farmers may soon have many more microbial helpers to choose from. Scientists studying plant microbiomes have described numerous unfamiliar plant partners in recent decades. Those researchers say they’ve only scratched the surface of possibilities. Many start-up companies are researching and releasing novel microbial treatments. “The last five years have seen an explosion in this,” says Dangl, who cofounded AgBiome, which soon plans to market a bacterial treatment that combats fungal diseases. Agricultural giants like Bayer AG, which recently bought Monsanto, are also investing hundreds of millions of dollars in potential microbial treatments for plants.
The hope is that microbes can provide the next great revolution in agriculture — a revolution that’s sorely needed. With the human population predicted to skyrocket from today’s 7.6 billion to nearly 10 billion by 2050, our need for plant-based food, fibers and animal feed is expected to double.
“We’re going to need to increase yield,” says Posy Busby, an ecologist at Oregon State University in Corvallis. “If we can manage and manipulate microbiomes … this could potentially represent an untapped area for increasing plant yield in agricultural settings.” Meanwhile, scientists like Zahn are eyeing the microbiome to save endangered plants.
But before microbiome-based farming and conservation can truly take off, many questions need answers. Several revolve around the complex interactions between plants, their diverse microbial denizens and the environments they live in. One concern is that the microbes that help some plants might, under certain conditions, harm others elsewhere, warns microbiologist Luis Mejía of the Institute of Scientific Research and High Technology Services in Panama City.
Save the chocolate
Cacao crops — and thus humankind’s precious M&M’s supply — are under constant threat from undesirable fungi, such as Phytophthora palmivora, which causes black pod rot. But there are good guys in cacao’s microbiome too, particularly the fungus Colletotrichum tropicale, which seems to protect the trees. Natalie Christian, as a graduate student at Indiana University Bloomington, traveled to the Smithsonian Tropical Research Institute on Panama’s Barro Colorado Island in 2014 to study how entire communities of microbes colonize and influence cacao plants (Theobroma cacao). Christian suspected that the prime source of a young cacao tree’s microbiome would be the dead and decaying leaves on the rainforest or orchard floor.
To test this hunch and see what kind of protection microbes picked up from leaf litter might offer, Christian raised fungus-free cacao seedlings in a lab incubator. When the plants reached about half a meter tall, she placed them in pots outside, surrounding some with leaf litter from a healthy cacao tree, some with litter from other kinds of trees and some with no litter at all.
After two weeks, she brought the plants back into the greenhouse to analyze their microbiomes. She found nearly 300 kinds of endophytes, which she, Mejía and colleagues reported last year in Proceedings of the Royal Society B.
The microbiome membership differed between the litter treatments. Plants in pots with either kind of leaf litter possessed less diverse microbiomes than those without litter, probably because the microbes in the litter quickly took over before stray microbes from elsewhere could settle in. These results suggest that a seedling in the shadow of more mature trees will probably accumulate the same microbiome as its towering neighbors. To see if some of those transferred microbes protect the cacao from disease-causing organisms, Christian rubbed a bit of black pod rot on the leaves of plants in each group. Three weeks later, she measured the size of the rotted spots.
Plants surrounded by cacao litter had the smallest lesions. Those with litter from other trees had slightly more damage, and plants with no litter had about double the damage of the plants given litter from other trees.
“Getting exposed to the litter of their mother or their own kind had a very strong beneficial effect on the resistance of these young plants,” says plant biologist Keith Clay of Tulane University in New Orleans, a coauthor of the study.
Scientists aren’t sure how the good fungi protect the plants against the rot. It may be that the beneficial fungi simply take up space in or on the leaves, leaving no room for the undesirables, Christian says. Or a protective microbe like C. tropicale might attack a pathogen via some kind of chemical warfare. In the case of cacao, she thinks the most likely explanation is that the good guys act as a sort of vaccine, priming the plant’s immune system to fight off the rot. In support of this idea, Mejía reported in 2014 in Frontiers in Microbiology that C. tropicale causes cacao to turn on defensive genes.
Cacao farmers may need to rethink their practices. The farmers normally clear leaf litter out of orchards to avoid transmitting disease-causing microbes from decaying leaves to living trees, says Christian, now a postdoc at the University of Illinois at Urbana-Champaign. But her work suggests that farmers might do well to at least hold on to litter from healthy trees.
Crop questions
Litter is a low-tech way to spread entire communities of microbes — good and bad. But agricultural companies want to grab only the good microbes and apply them to crops. The hunt for the good guys starts with a stroll through a crop field, says Barry Goldman, vice president and head of discovery at Indigo Ag in Boston. Chances are, you’ll find bigger and hardier plants among the crowd. Within those top performers, Indigo has found endophytes that improve plant vigor and size, and others that protect against drought.
The company, working with cotton, corn, rice, soybeans and wheat, coats seeds with these microbes. Once the seeds germinate, the microbes cover the newborn leaves and can get inside via cuts in the roots or through stomata, tiny breathing holes in the leaves. The process is akin to what happens when a baby travels through the birth canal, picking up beneficial microbial partners from mom along the way. For example, the first-generation Indigo Wheat, released in 2016, starts from seeds treated with a beneficial microbe. In Kansas test fields, the treatment raised yields by 8 to 19 percent.
Farmers are also reporting improved drought tolerance. During the first six months of 2018, when only two rains fell, participating Kansas farmers gave up on and plowed over fields of struggling conventional wheat, but not fields growing Indigo Wheat, Goldman says.
In St. Louis, NewLeaf Symbiotics is interested in bacteria of the genus Methylobacterium. These microbes, found in all plants, are known as methylotrophs because they eat methanol, which plants release as their cells grow. In return for methanol, M-trophs, as NewLeaf calls them, offer plants diverse benefits. Some deliver molecules that encourage plants to grow; others make seeds germinate earlier and more consistently, or protect against problem fungi.
NewLeaf released its first products this year, including Terrasym 401, a seed treatment for soybeans. Across four years of field trials, Terrasym 401 raised yields by more than two bushels per acre, says NewLeaf cofounder and CEO Tom Laurita. One bushel is worth about $9. On farms with thousands of acres, that adds up.
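To see why “that adds up,” here is the simple arithmetic; the per-bushel figures come from the text above, while the farm size is a made-up example.

```python
# Illustrative arithmetic only: the acreage below is a hypothetical example,
# not a figure from NewLeaf or the field trials.

extra_bushels_per_acre = 2     # yield gain quoted above ("more than two bushels")
dollars_per_bushel = 9         # approximate value quoted above
example_farm_acres = 5_000     # hypothetical farm size

extra_revenue = extra_bushels_per_acre * dollars_per_bushel * example_farm_acres
print(f"Extra revenue on a {example_farm_acres:,}-acre farm: ${extra_revenue:,}")  # $90,000
```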
Farmers are pleased, but NewLeaf’s and Indigo’s work is hardly done. Plant microbiome companies all face similar challenges. One is the diverse environments where crops are grown. Just because Indigo Wheat thrives in Kansas doesn’t mean it will outgrow standard varieties in, say, North Dakota. “The big ask for the next-gen ag biotech companies like AgBiome or Indigo … is whether the products will deliver as advertised over a range of field conditions,” Dangl says.
Another issue is that crop fields and plants already have microbiomes. “We’re asking a lot of a microbe, or a mix of microbes, to invade an already-existing ecosystem and persist there and do their job,” Dangl says. Companies will need to make sure their preferred microbes take hold.
And while scientists are well aware that diverse microbial communities cooperate to affect plant health, most companies are working with one kind of microbe at a time. Indigo isn’t yet sure how to approach entire microbiomes, Goldman says, but “we certainly are thinking hard about it.”
Researchers are beginning to address these questions by studying microbes in communities — such as Christian’s leaf-litter microbiomes — instead of as individuals. In the lab, Dangl developed a synthetic community of 188 root microbes. He can apply them to plants under stress from drought or heat, then watch how the communities respond and affect the plants.
A major aim is to identify the factors that determine microbiome membership. What decides who gets a spot on a given plant? How does the plant species and its local environment affect the microbiome? How do plants welcome friendlies and eject hostiles? “This is a huge area of importance,” Dangl says.
There’s some risk in adding microbes to crops while these questions are still unanswered, Mejía cautions. Microbes that are beneficial in one situation could be harmful in other plants or different environments. It’s not a far-fetched scenario: There’s a fungal endophyte of a South American palm tree that staves off beetle infestations when the trees are in the shade. Under the sun, however, the fungus turns nasty, spewing hydrogen peroxide that kills plant tissues.
And although C. tropicale benefits cacao, the genus has a dark side: Many species of Colletotrichum can cause leaf lesions and rotted fruit or flower spots in a variety of plants ranging from avocados to zinnias.
Microbes for conservation
Back in Hawaii, after that disheartening hike to the P. kaalaensis graveyard, Zahn pondered how to protect native plants in wild environments such as Oahu’s mountains.
In people, Zahn considered, antibiotics can damage normal gut microbe populations, leaving a person vulnerable to infection by harmful microbes. P. kaalaensis got similar treatment in the greenhouse, where it received regular dosing of fungicide. In retrospect, Zahn realized, that treatment probably left the plants bereft of their natural microbiome and weakened their immune systems, leaving them vulnerable to mildew infection once dropped into the jungle.
For people on antibiotics, probiotics — beneficial bacteria — can help restore balance. Zahn thought a similar strategy, a sort of plant probiotic, could help protect P. kaalaensis in future attempts at moving it outside.
For a probiotic, Zahn looked to a P. kaalaensis cousin, Phyllostegia hirsuta, which can survive in the wild. He put P. hirsuta leaves in a blender and sprayed the slurry over P. kaalaensis growing in an incubator.
Then, Zahn placed a leaf infected with powdery mildew into the incubator’s air intake. The mint plants treated with the P. hirsuta slurry experienced delayed, less severe infections compared with untreated plants, Zahn and Amend, also at the University of Hawaii at Manoa, reported last year in PeerJ. The probiotic had worked.
Zahn used DNA sequencing to identify the microbes in the slurry. Many of the microbiome members probably benefit P. kaalaensis, but he thinks he’s found a major protector: a yeast called Pseudozyma aphidis that lives on leaves. “This yeast normally just passively absorbs nutrients from the environment,” Zahn says. “But given the right victim, it will turn into a vicious spaghetti monster.” When mildew spores land nearby, the yeast grows tentacle-like filaments that appear to envelop and feed on the mildew.
Emboldened by his results, Zahn trekked back to the jungle and planted six slurry-treated plants in April 2016. They survived for about two years, but by May 2018, they were all dead. “It was still a huge win,” says Nicole Hynson, a community ecologist also at Manoa. After all, untreated P. kaalaensis plants last only months. And the probiotics approach might apply beyond one little Hawaiian mint, Hynson adds: “We’re really at the beginning of thinking how we might use the microbiome to address plant restoration.”
Zahn has since moved to Utah Valley University in Orem, where he’s hoping to help endangered cacti with microbes. Meanwhile, he’s left the Phyllostegia project in the hands of Jerry Koko, a graduate student in Hynson’s lab. Koko is studying how the yeast and some root-based fungi protect the plant.
Hynson says their goal is to build “a superplant.” With probiotics on both roots and shoots, an enhanced P. kaalaensis should be well-equipped to grow strong and resist mildew. In greenhouse experiments so far, Koko says, the plants with both types of beneficial fungi seem to sport fewer, smaller powdery mildew patches than plants that received no probiotic treatment.
While the restoration of a little flowering plant, or a few more bushels of soybeans, may seem like small victories, they could herald big things for plant microbiomes in conservation as well as agriculture. The farmers and conservationists of the future may find themselves seeding and tending not just plants, but their microscopic helpers, too.
THE WOODLANDS, TEXAS — A young, ultrabright Jupiter may have desiccated its now hellish moon Io. The planet’s bygone brilliance could have also vaporized water on Europa and Ganymede, planetary scientist Carver Bierson reported March 17 at the Lunar and Planetary Science Conference. If true, the findings could help researchers narrow the search for icy exomoons by eliminating unlikely orbits.
Jupiter is among the brightest specks in our night sky. But past studies have indicated that during its infancy, Jupiter was far more luminous. “About 10 thousand times more luminous,” said Bierson, of Arizona State University in Tempe. That radiance would have been inescapable for the giant planet’s moons, the largest of which are volcanic Io, ice-shelled Europa, aurora-cowled Ganymede and crater-laden Callisto (SN: 12/22/22, SN: 4/19/22, SN: 3/12/15). The constitutions of these four bodies obey a trend: The farther a moon orbits from Jupiter, the more ice-rich it is.
Bierson and his colleagues hypothesized this pattern was a legacy of Jupiter’s past radiance. The team used computers to simulate how an infant Jupiter may have warmed its moons, starting with Io, the closest of the four. During its first few million years, Io’s surface temperature may have exceeded 26° Celsius under Jupiter’s glow, Bierson said. “That’s Earthlike temperatures.”
Any ice present on Io at that time, roughly 4.5 billion years ago, probably would have melted into an ocean. That water would have progressively evaporated into an atmosphere. And that atmosphere, hardly restrained by the moon’s weak gravity, would have readily escaped into space. In just a few million years, Io could have lost as much water as Ganymede may hold today, which may be more than 25 times the amount in Earth’s oceans.
A coruscant Jupiter probably didn’t remove significant amounts of ice from Europa or Ganymede, the researchers found, unless Jupiter was brighter than simulated or the moons orbited closer than they do today.
The findings suggest that icy exomoons probably don’t orbit all that close to massive planets.
Art historians often wish that Renaissance painters could shell out secrets of the craft. Now, scientists may have cracked one using chemistry and physics.
Around the turn of the 15th century in Italy, oil-based paints replaced egg-based tempera paints as the dominant medium. During this transition, artists including Leonardo da Vinci and Sandro Botticelli also experimented with paints made from oil and egg (SN: 4/30/14). But it has been unclear how adding egg to oil paints may have affected the artwork. “Usually, when we think about art, not everybody thinks about the science which is behind it,” says chemical engineer Ophélie Ranquet of the Karlsruhe Institute of Technology in Germany.
In the lab, Ranquet and colleagues whipped up two oil-egg recipes to compare with plain oil paint. One mixture contained fresh egg yolk mixed into oil paint, and had a similar consistency to mayonnaise. For the other blend, the scientists ground pigment into the yolk, dried it and mixed it with oil — a process the old masters might have used, according to the scant historical records that exist today. Each medium was subjected to a battery of tests that analyzed its mass, moisture, oxidation, heat capacity, drying time and more.
In both concoctions, the yolk’s proteins, phospholipids and antioxidants helped slow paint oxidation, which can cause paint to turn yellow over time, the team reports March 28 in Nature Communications.
In the mayolike blend, the yolk created sturdy links between pigment particles, resulting in stiffer paint. Such consistency would have been ideal for techniques like impasto, a raised, thick style that adds texture to art. Egg additions also could have reduced wrinkling by creating a firmer paint consistency. Wrinkling sometimes happens with oil paints when the top layer dries faster than the paint underneath, and the dried film buckles over looser, still-wet paint.
The hybrid mediums have some less than eggs-ellent qualities, though. For instance, the eggy oil paint can take longer to dry. If paints were too yolky, Renaissance artists would have had to wait a long time to add the next layer, Ranquet says.
“The more we understand how artists select and manipulate their materials, the more we can appreciate what they’re doing, the creative process and the final product,” says Ken Sutherland, director of scientific research at the Art Institute of Chicago, who was not involved with the work.
Research on historical art mediums can not only aid art preservation efforts, Sutherland says, but also help people gain a deeper understanding of the artworks themselves.
In a book about his travels in Africa published in 1907, British explorer Arnold Henry Savage Landor recounted witnessing an impromptu meal that his companions relished but that he found unimaginably revolting.
As he coasted down a river in the Congo Basin with several local hunter-gatherers, a dead rodent floated near their canoe. Its decomposing body had bloated to the size of a small pig.
Stench from the swollen corpse left Landor gasping for breath. Unable to speak, he tried to signal his companions to steer the canoe away from the fetid creature. Instead, they hauled the supersize rodent aboard and ate it. “The odour when they dug their knives into it was enough to kill the strongest of men,” Landor wrote. “When I recovered, my admiration for the digestive powers of these people was intense. They were smacking their lips and they said the [rodent] had provided most excellent eating.”
Starting in the 1500s, European and, later, American explorers, traders, missionaries, government officials and others who lived among Indigenous peoples in many parts of the world wrote of similar food practices. Hunter-gatherers and small-scale farmers everywhere commonly ate putrid meat, fish and fatty parts of a wide range of animals. From arctic tundra to tropical rainforests, native populations consumed rotten remains, either raw, fermented or cooked just enough to singe off fur and create a more chewable texture. Many groups treated maggots as a meaty bonus.
Descriptions of these practices, which still occur in some present-day Indigenous groups and among northern Europeans who occasionally eat fermented fish, aren’t likely to inspire any new Food Network shows or cookbooks from celebrity chefs.
Case in point: Some Indigenous communities feasted on huge decomposing beasts, including hippos that had been trapped in dug-out pits in Africa and beached whales on Australia’s coast. Hunters in those groups typically smeared themselves with the fat of the animal before gorging on greasy innards. After slicing open animals’ midsections, both adults and children climbed into massive, rotting body cavities to remove meat and fat.
Or consider that Native Americans in Missouri in the late 1800s made a prized soup from the greenish, decaying flesh of dead bison. Animal bodies were buried whole in winter and unearthed in spring after ripening enough to achieve peak tastiness.
But such accounts provide a valuable window into a way of life that existed long before Western industrialization and the war against germs went global, says anthropological archaeologist John Speth of the University of Michigan in Ann Arbor. Intriguingly, no reports of botulism and other potentially fatal reactions to microorganisms festering in rotting meat appear in writings about Indigenous groups before the early 1900s. Instead, decayed flesh and fat represented valued and tasty parts of a healthy diet. Many travelers such as Landor considered such eating habits to be “disgusting.” But “a gold mine of ethnohistorical accounts makes it clear that the revulsion Westerners feel toward putrid meat and maggots is not hardwired in our genome but is instead culturally learned,” Speth says.
This dietary revelation also challenges an influential scientific idea that cooking originated among our ancient relatives as a way to make meat more digestible, thus providing a rich calorie source for brain growth in the Homo genus. It’s possible, Speth argues, that Stone Age hominids such as Neandertals first used cooking for certain plants that, when heated, provided an energy-boosting, carbohydrate punch to the diet. Animals held packets of fat and protein that, after decay set in, rounded out nutritional needs without needing to be heated.
Putrid foods in the diets of Indigenous peoples
Speth’s curiosity about a human taste for putrid meat was originally piqued by present-day hunter-gatherers in polar regions. North American Inuit, Siberians and other far-north populations still regularly eat fermented or rotten meat and fish.
Fermented fish heads, also known as “stinkhead,” are one popular munchy among northern groups. Chukchi herders in the Russian Far East, for instance, bury whole fish in the ground in early fall and let the bodies naturally ferment during periods of freezing and thawing. Fish heads the consistency of hard ice cream are then unearthed and eaten whole.
Speth has suspected for several decades that consumption of fermented and putrid meat, fish, fat and internal organs has a long and probably ancient history among northern Indigenous groups. Consulting mainly online sources such as Google Scholar and universities’ digital library catalogs, he found many ethnohistorical descriptions of such behavior going back to the 1500s. Putrid walrus, seals, caribou, reindeer, musk oxen, polar bears, moose, arctic hares and ptarmigans had all been fair game. Speth reported much of this evidence in 2017 in PaleoAnthropology.
In one recorded incident from late-1800s Greenland, a well-intentioned hunter brought what he had claimed in advance was excellent food to a team led by American explorer Robert Peary. A stench filled the air as the hunter approached Peary’s vessel carrying a rotting seal dripping with maggots. The Greenlander had found the seal where a local group had buried it, possibly a couple of years earlier, so that the body could reach a state of tasty decomposition. Peary ordered the man to keep the reeking seal off his boat.
Miffed at this unexpected rejection, the hunter “told us that the more decayed the seal the finer the eating, and he could not understand why we should object,” Peary’s wife wrote of the encounter.
Even in temperate and tropical areas, where animal bodies decompose within hours or days, Indigenous peoples have appreciated rot as much as Peary’s seal-delivery man did. Speth and anthropological archaeologist Eugène Morin of Trent University in Peterborough, Canada, described some of those obscure ethnohistorical accounts last October in PaleoAnthropology.
Early hominids may have scavenged rotten meat
These accounts undermine some of scientists’ food-related sacred cows, Speth says. For instance, European explorers and other travelers consistently wrote that traditional groups not only ate putrid meat raw or lightly cooked but suffered no ill aftereffects. A protective gut microbiome may explain why, Speth suspects. Indigenous peoples encountered a variety of microorganisms from infancy on, unlike people today who grow up in sanitized settings. Early exposures to pathogens may have prompted the development of an array of gut microbes and immune responses that protected against potential harms of ingesting putrid meat.
That idea requires further investigation; little is known about the bacterial makeup of rotten meat eaten by traditional groups or of their gut microbiomes. But studies conducted over the last few decades do indicate that putrefaction, the process of decay, offers many of cooking’s nutritional benefits with far less effort. Putrefaction predigests meat and fish, softening the flesh and chemically breaking down proteins and fats so they are more easily absorbed and converted to energy by the body.
Given the ethnohistorical evidence, hominids living 3 million years ago or more could have scavenged meat from decomposing carcasses, even without stone tools for hunting or butchery, and eaten their raw haul safely long before fire was used for cooking, Speth contends. If simple stone tools appeared as early as 3.4 million years ago, as some researchers have controversially suggested, those implements may have been made by hominids seeking raw meat and marrow (SN: 9/11/10, p. 8). Researchers suspect regular use of fire for cooking, light and warmth emerged no earlier than around 400,000 years ago (SN: 5/5/12, p. 18).
“Recognizing that eating rotten meat is possible, even without fire, highlights how easy it would have been to incorporate scavenged food into the diet long before our ancestors learned to hunt or process [meat] with stone tools,” says paleoanthropologist Jessica Thompson of Yale University.
Thompson and colleagues suggested in Current Anthropology in 2019 that before about 2 million years ago, hominids were primarily scavengers who used rocks to smash open animal bones and eat nutritious, fat-rich marrow and brains. That conclusion, stemming from a review of fossil and archaeological evidence, challenged a common assumption that early hominids — whether as hunters or scavengers — primarily ate meat off the bone.
Certainly, ancient hominids were eating more than just the meaty steaks we think of today, says archaeologist Manuel Domínguez-Rodrigo of Rice University in Houston. In East Africa’s Olduvai Gorge, butchered animal bones at sites dating to nearly 2 million years ago indicate that hominids ate most parts of carcasses, including brains and internal organs.
“But Speth’s argument about eating putrid carcasses is very speculative and untestable,” Domínguez-Rodrigo says.
Untangling whether ancient hominids truly had a taste for rot will require research that spans many fields, including microbiology, genetics and food science, Speth says.
But if his contention holds up, it suggests that ancient cooks were not turning out meat dishes. Instead, Speth speculates, cooking’s primary value at first lay in making starchy and oily plants softer, more chewable and easily digestible. Edible plants contain carbohydrates, sugar molecules that can be converted to energy in the body. Heating over a fire converts starch in tubers and other plants to glucose, a vital energy source for the body and brain. Crushing or grinding of plants might have yielded at least some of those energy benefits to hungry hominids who lacked the ability to light fires.
Whether hominids controlled fire well enough to cook plants or any other food regularly before around 400,000 to 300,000 years ago is unknown.
Neandertals may have hunted animals for fat
Despite their nutritional benefits, plants often get viewed as secondary menu items for Stone Age folks. It doesn’t help that plants preserve poorly at archaeological sites.
Neandertals, in particular, have a long-standing reputation as plant shunners. Popular opinion views Neandertals as burly, shaggy individuals who huddled around fires chomping on mammoth steaks.
That’s not far from an influential scientific view of what Neandertals ate. Elevated levels of a diet-related form of nitrogen in Neandertal bones and teeth hint that they were committed carnivores that ate large amounts of protein-rich lean meat, several research teams have concluded over nearly 30 years.
But consuming that much protein from meat, especially from cuts above the front and hind limbs now referred to as steaks, would have been a recipe for nutritional disaster, Speth argues. Meat from wild, hoofed animals and smaller creatures such as rabbits contains almost no fat, or marbling, unlike meat from modern domestic animals, he says. Ethnohistorical accounts, especially for northern hunters including the Inuit, include warnings about weight loss, ill health and even death that can result from eating too much lean meat.
This form of malnutrition is known as rabbit starvation. Evidence indicates that people can safely consume between about 25 and 35 percent of daily calories as protein, Speth says. Above that threshold, several investigations have indicated that the liver becomes unable to break down chemical wastes from ingested proteins, which then accumulate in the blood and contribute to rabbit starvation. Limits to the amount of daily protein that can be safely consumed meant that ancient hunting groups, like those today, needed animal fats and carbohydrates from plants to fulfill daily calorie and other nutritional needs.
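To make that 25 to 35 percent ceiling concrete, here is a small worked example; the daily calorie budget, protein energy content and lean-game protein fraction are reference-style assumptions, not figures from Speth’s work.

```python
# Hedged back-of-the-envelope for the protein ceiling described above.
# Assumptions (not from the article): a 2,500 kcal/day budget, protein at
# roughly 4 kcal per gram, and lean wild game at about 22 g protein per 100 g.

daily_kcal = 2500
protein_ceiling = 0.35          # upper end of the 25-35 percent range quoted above
kcal_per_gram_protein = 4
protein_per_gram_of_game = 0.22

max_protein_grams = daily_kcal * protein_ceiling / kcal_per_gram_protein
max_lean_game_grams = max_protein_grams / protein_per_gram_of_game
remaining_kcal = daily_kcal * (1 - protein_ceiling)

print(f"Protein ceiling: about {max_protein_grams:.0f} g per day")            # ~219 g
print(f"Equivalent lean game: about {max_lean_game_grams / 1000:.1f} kg")     # ~1.0 kg
print(f"Remaining {remaining_kcal:.0f} kcal must come from fat or carbohydrates")
```

Under those assumptions, even a kilogram or so of lean game per day hits the ceiling, leaving well over half the day’s calories to be found in fat or plant carbohydrates, which is the logic behind rabbit starvation.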
Modern “Paleo diets” emphasize eating lean meats, fruits and vegetables. But that omits what past and present Indigenous peoples most wanted from animal carcasses. Accounts describe Inuit people eating much larger amounts of fatty body parts than lean meat, Speth says. Over the last few centuries, they have favored tongue, fat deposits, brisket, ribs, fatty tissue around intestines and internal organs, and marrow. Internal organs, especially adrenal glands, have provided vitamin C — nearly absent in lean muscle — that prevented anemia and other symptoms of scurvy.
Western explorers noted that the Inuit also ate chyme, the stomach contents of reindeer and other plant-eating animals. Chyme provided at least a side course of plant carbohydrates. Likewise, Neandertals in Ice Age Europe probably subsisted on a fat- and chyme-supplemented diet (SN Online: 10/11/13), Speth contends.
Large numbers of animal bones found at northern European Neandertal sites — often viewed as the residue of ravenous meat eaters — may instead reflect overhunting of animals to obtain enough fat to meet daily calorie needs. Because wild game typically has a small percentage of body fat, northern hunting groups today and over the last few centuries frequently killed prey in large numbers, either discarding most lean meat from carcasses or feeding it to their dogs, ethnographic studies show.
If Neandertals followed that playbook, eating putrid foods might explain why their bones carry a carnivore-like nitrogen signature, Speth suggests. An unpublished study of decomposing human bodies kept at a University of Tennessee research facility in Knoxville called the Body Farm tested that possibility. Biological anthropologist Melanie Beasley, now at Purdue University in West Lafayette, Ind., found moderately elevated tissue nitrogen levels in 10 deceased bodies sampled regularly for about six months. Tissue from those bodies served as a stand-in for animal meat consumed by Neandertals. Human flesh is an imperfect substitute for, say, reindeer or elephant carcasses. But Beasley’s findings suggest that decomposition’s effects on a range of animals need to be studied. Intriguingly, she also found that maggots in the decaying tissue displayed extremely elevated nitrogen levels.
Paleobiologist Kimberly Foecke of George Washington University in Washington, D.C., has also found high nitrogen levels in rotting, maggot-free cuts of beef from animals fed no hormones or antibiotics to approximate the diets of Stone Age creatures (SN: 1/2/19).
Like arctic hunters did a few hundred years ago, Neandertals may have eaten putrid meat and fish studded with maggots, Speth says. That would explain elevated nitrogen levels in Neandertal fossils.
But Neandertal dining habits are poorly understood. Unusually extensive evidence of Neandertal big-game consumption has come from a new analysis of fossil remains at a roughly 125,000-year-old site in northern Germany called Neumark-Nord. There, Neandertals periodically hunted straight-tusked elephants weighing up to 13 metric tons, say archaeologist Sabine Gaudzinski-Windheuser of Johannes Gutenberg University of Mainz in Germany and colleagues.
In a study reported February 1 in Science Advances, her group analyzed patterns of stone-tool incisions on bones of at least 57 elephants from 27 spots near an ancient lake basin where Neandertals lit campfires and constructed shelters (SN: 1/29/22, p. 8). Evidence suggests that Neandertal butchers — much like Inuit hunters — removed fat deposits under the skin and fatty body parts such as the tongue, internal organs, brain and thick layers of fat in the feet. Lean meat from elephants would have been eaten in smaller quantities to avoid rabbit starvation, the researchers argue.
Further research needs to examine whether the Neandertals cooked elephant meat or boiled the bones to extract nutritious grease, Speth says. Mealtime options would have expanded for hominids who could not only consume putrid meat and fat but also heat animal parts over fires, he suspects.
Neandertals who hunted elephants must also have eaten a variety of plants to meet their considerable energy requirements, says Gaudzinski-Windheuser. But so far, only fragments of burned hazelnuts, acorns and blackthorn plums have been found at Neumark-Nord.
Neandertals probably carb-loaded
Better evidence of Neandertals’ plant preferences comes from sites in warm Mediterranean and Middle Eastern settings. At a site in coastal Spain, Neandertals probably ate fruits, nuts and seeds of a variety of plants (SN: 3/27/21, p. 32).
Neandertals in a range of environments must have consumed lots of starchy plants, argues archaeologist Karen Hardy of the University of Glasgow in Scotland. Even Stone Age northern European and Asian regions included plants with starch-rich appendages that grew underground, such as tubers.
Neandertals could also have obtained starchy carbs from the edible, inner bark of many trees and from seaweed along coastlines. Cooking, as suggested by Speth, would have greatly increased the nutritional value of plants, Hardy says. Not so for rotten meat and fat, though Neandertals such as those at Neumark-Nord may have cooked what they gleaned from fresh elephant remains.
There is direct evidence that Neandertals munched on plants. Microscopic remnants of edible and medicinal plants have been found in the tartar on Neandertal teeth (SN: 4/1/17, p. 16), Hardy says.
Carbohydrate-fueled energy helped to maintain large brains, enable strenuous physical activity and ensure healthy pregnancies for both Neandertals and ancient Homo sapiens, Hardy concludes in the January 2022 Journal of Human Evolution. (Researchers disagree over whether Neandertals, which lived from around 400,000 to 40,000 years ago, were a variant of H. sapiens or a separate species.)
Paleo cuisine was tasty
Like Hardy, Speth suspects that plants provided a large share of the energy and nutrients Stone Age folks needed. Plants represented a more predictable, readily available food source than hunted or scavenged meat and fat, he contends.
Plants also offered Neandertals and ancient H. sapiens — whose diets probably didn’t differ dramatically from Neandertals’, Hardy says — a chance to stretch their taste buds and cook up tangy meals.
Paleolithic plant cooking included preplanned steps aimed at adding dashes of specific flavors to basic dishes, a recent investigation suggests. In at least some places, Stone Age people apparently cooked to experience pleasing tastes and not just to fill their stomachs. Charred plant food fragments from Shanidar Cave in Iraqi Kurdistan and Franchthi Cave in Greece consisted of crushed pulse seeds, possibly from starchy pea species, combined with wild plants that would have provided a pungent, somewhat bitter taste, microscopic analyses show.
Added ingredients included wild mustard, wild almonds, wild pistachio and fruits such as hackberry, archaeobotanist Ceren Kabukcu of the University of Liverpool in England and colleagues reported last November in Antiquity.
Four Shanidar food bits date to about 40,000 years ago or more and originated in sediment that included stone tools attributed to H. sapiens. Another food fragment, likely from a cooked Neandertal meal, dates to between 70,000 and 75,000 years ago. Neandertal fossils found in Shanidar Cave are also about 70,000 years old. So it appears that Shanidar Neandertals spiced up cooked plant foods before Shanidar H. sapiens did, Kabukcu says.
Franchthi food remains date to between 13,100 and 11,400 years ago, when H. sapiens lived there. Wild pulses in food from both caves display microscopic signs of having been soaked, a way to dilute poisons in seeds and moderate their bitterness.
These new findings “suggest that cuisine, or the combination of different ingredients for pleasure, has a very long history indeed,” says Hardy, who was not part of Kabukcu’s team.
There’s a hefty dollop of irony in the possibility that original Paleo diets mixed what people in many societies today regard as gross-sounding portions of putrid meat and fat with vegetarian dishes that still seem appealing.
Beer lovers could be left with a sour taste, thanks to the latest in a series of studies mapping the effects of climate change on crops.
Malted barley — a key ingredient in beer including IPAs, stouts and pilsners — is particularly sensitive to warmer temperatures and drought, both of which are likely to increase due to climate change. As a result, average global barley crop yields could drop as much as 17 percent by 2099, compared with the average yield from 1981 to 2010, under the more extreme climate change projections, researchers report October 15 in Nature Plants. That decline “could lead to, on average, a doubling of price in some countries,” says coauthor Steven Davis, an Earth systems scientist at University of California, Irvine. Consumption would also drop globally by an average of 16 percent, or roughly what people in the United States consumed in 2011.
The results are based on computer simulations projecting climate conditions, plant responses and global market reactions up to the year 2099. Under the mildest climate change predictions, world average barley yields would still go down by at least 3 percent, and average prices would increase about 15 percent, the study says.
Other crops, such as maize, wheat, soy and wine grapes, are also threatened by rising global average temperatures as well as by pests emboldened by erratic weather (SN: 2/8/14, p. 3). But there’s still hope for ale aficionados. The study did not account for technological innovations or genetic tweaks that could spare the crop, Davis says.