How earthquake scientists eavesdrop on North Korea’s nuclear blasts

On September 9 of last year, in the middle of the morning, seismometers began lighting up around East Asia. From South Korea to Russia to Japan, geophysical instruments recorded squiggles as seismic waves passed through and shook the ground. It looked as if an earthquake with a magnitude of 5.2 had just happened. But the ground shaking had originated at North Korea’s nuclear weapons test site.

It was the fifth confirmed nuclear test in North Korea, and it opened the latest chapter in a long-running geologic detective story. Like a police examiner scrutinizing skid marks to figure out who was at fault in a car crash, researchers analyze seismic waves to determine if they come from a natural earthquake or an artificial explosion. If the latter, then scientists can also tease out details such as whether the blast was nuclear and how big it was. Test after test, seismologists are improving their understanding of North Korea’s nuclear weapons program.
The work feeds into international efforts to monitor the Comprehensive Nuclear-Test-Ban Treaty, which was opened for signature in 1996 and would ban all nuclear weapons testing. More than 180 countries have signed the treaty. But 44 countries that hold nuclear technology must both sign and ratify the treaty for it to have the force of law. Eight, including the United States and North Korea, have not.

To track potential violations, the treaty calls for a four-pronged international monitoring system, which is currently about 90 percent complete. Hydroacoustic stations can detect sound waves from underwater explosions. Infrasound stations listen for low-frequency sound waves rumbling through the atmosphere. Radionuclide stations sniff the air for the radioactive by-products of an atmospheric test. And seismic stations pick up the ground shaking, which is usually the fastest and most reliable method for confirming an underground explosion.

Seismic waves offer extra information about an explosion, new studies show. One research group is exploring how local topography, like the rugged mountain where the North Korean government conducts its tests, puts its imprint on the seismic signals. Knowing that, scientists can better pinpoint where the explosions are happening within the mountain — thus improving understanding of how deep and powerful the blasts are. A deep explosion is more likely to mask the power of the bomb.
Separately, physicists have conducted an unprecedented set of six explosions at the U.S. nuclear test site in Nevada. The aim was to mimic the physics of a nuclear explosion by detonating chemical explosives and watching how the seismic waves radiate outward. It’s like a miniature, nonnuclear version of a nuclear weapons test. Already, the scientists have made some key discoveries, such as understanding how a deeply buried blast shows up in the seismic detectors.
The more researchers can learn about the seismic calling card of each blast, the more they can understand international developments. That’s particularly true for North Korea, where leaders have been ramping up the pace of military testing since the first nuclear detonation in 2006. On July 4, the country launched its first confirmed intercontinental ballistic missile — with no nuclear payload — that could reach as far as Alaska.

“There’s this building of knowledge that helps you understand the capabilities of a country like North Korea,” says Delaine Reiter, a geophysicist with Weston Geophysical Corp. in Lexington, Mass. “They’re not shy about broadcasting their testing, but they claim things Western scientists aren’t sure about. Was it as big as they claimed? We’re really interested in understanding that.”

Natural or not
Seismometers detect ground shaking from all sorts of events. In a typical year, anywhere from 1,200 to 2,200 earthquakes of magnitude 5 and greater set off the machines worldwide. On top of that is the unnatural shaking: from quarry blasts, mine collapses and other causes. The art of using seismic waves to tell one type of event from the others is known as forensic seismology.

Forensic seismologists work to distinguish a natural earthquake from what could be a clandestine nuclear test. In March 2003, for instance, seismometers detected a disturbance coming from near Lop Nor, a dried-up lake in western China that the Chinese government, which signed but hasn’t ratified the test ban treaty, has used for nuclear tests. Seismologists needed to figure out immediately what had happened.

One test for telling the difference between an earthquake and an explosion is how deep it is. Anything deeper than about 10 kilometers is almost certain to be natural. In the case of Lop Nor, the source of the waves seemed to be located about six kilometers down — difficult to tunnel to, but not impossible. Researchers also used a second test, which compares the amplitudes of two different kinds of seismic waves.

Earthquakes and explosions generate several types of seismic waves, starting with P, or primary, waves. These waves are the first to arrive at a distant station. Next come S, or secondary, waves, which travel through the ground in a shearing motion, taking longer to arrive. Finally come waves that ripple across the surface, including those called Rayleigh waves.
Compared with an earthquake of similar size, an explosion produces Rayleigh waves whose amplitudes are small relative to those of the P waves. By looking at those two types of waves, scientists determined the Lop Nor incident was a natural earthquake, not a secretive explosion. (Seismology cannot reveal the entire picture. Had the Lop Nor event actually been an explosion, researchers would have needed data from the radionuclide monitoring network to confirm the blast came from nuclear and not chemical explosives.)
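The logic of the amplitude-ratio test can be sketched in a few lines of code. This is a toy discriminant: the threshold and the raw amplitudes are invented for illustration, whereas real monitoring compares calibrated magnitude scales (body-wave versus surface-wave magnitudes) across many stations.

```python
import math

def classify_event(p_amp, rayleigh_amp, threshold=-0.5):
    """Toy seismic discriminant: explosions produce weak Rayleigh waves
    relative to their P waves; earthquakes produce comparatively strong
    ones. The threshold and raw-amplitude inputs are invented for
    illustration only."""
    log_ratio = math.log10(rayleigh_amp / p_amp)
    return "explosion" if log_ratio < threshold else "earthquake"

# Hypothetical amplitudes in arbitrary units:
print(classify_event(p_amp=1.0, rayleigh_amp=0.1))  # strong P, weak Rayleigh
print(classify_event(p_amp=1.0, rayleigh_amp=2.0))  # comparatively strong Rayleigh
```

The first hypothetical event, with its relatively puny Rayleigh waves, is flagged as an explosion; the second reads as an earthquake.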

For North Korea, the question is not so much whether the government is setting off nuclear tests, but how powerful and destructive those blasts might be. In 2003, the country withdrew from the Treaty on the Nonproliferation of Nuclear Weapons, an international agreement distinct from the testing ban that aims to prevent the spread of nuclear weapons and related technology. Three years later, North Korea announced it had conducted an underground nuclear test in Mount Mantap at a site called Punggye-ri, in the northeastern part of the country. It was the first nuclear weapons test since India and Pakistan each set one off in 1998.

By analyzing seismic wave data from monitoring stations around the region, seismologists concluded the North Korean blast had come from shallow depths, no more than a few kilometers within the mountain. That supported the North Korean government’s claim of an intentional test. Two weeks later, a radionuclide monitoring station in Yellowknife, Canada, detected increases in radioactive xenon, which presumably had leaked out of the underground test site and drifted eastward. The blast was nuclear.

But the 2006 test raised fresh questions for seismologists. The ratio of amplitudes of the Rayleigh and P waves was not as distinctive as it usually is for an explosion. And other aspects of the seismic signature were also not as clear-cut as scientists had expected.

Researchers got some answers as North Korea’s testing continued. In 2009, 2013 and twice in 2016, the government set off more underground nuclear explosions at Punggye-ri. Each time, researchers outside the country compared the seismic data with the record of past nuclear blasts. Automated computer programs “compare the wiggles you see on the screen ripple for ripple,” says Steven Gibbons, a seismologist with the NORSAR monitoring organization in Kjeller, Norway. When the patterns match, scientists know it is another test. “A seismic signal generated by an explosion is like a fingerprint for that particular region,” he says.
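The "ripple for ripple" comparison Gibbons describes is, at its core, normalized cross-correlation of a new recording against a stored template. A minimal single-channel sketch, with a synthetic waveform standing in for real data (operational systems correlate across whole arrays of sensors):

```python
import numpy as np

def max_normalized_xcorr(sig, template):
    """Peak normalized cross-correlation between a newly recorded
    waveform and a stored template from a past blast. Values near 1
    mean the new event repeats the region's seismic 'fingerprint'."""
    sig = (sig - sig.mean()) / (sig.std() * len(sig))
    template = (template - template.mean()) / template.std()
    return np.correlate(sig, template, mode="full").max()

# Synthetic stand-in for a waveform: a 5 Hz wiggle over one second
t = np.linspace(0.0, 1.0, 200)
past_blast = np.sin(2 * np.pi * 5 * t)
print(round(max_normalized_xcorr(past_blast, past_blast), 3))  # 1.0: a perfect repeat
```

A waveform from a different kind of source correlates poorly with the template, which is what lets automated pipelines flag a repeat test within minutes.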

With each test, researchers learned more about North Korea’s capabilities. By analyzing the magnitude of the ground shaking, experts could roughly calculate the power of each test. The 2006 explosion was relatively small, releasing energy equivalent to about 1,000 tons of TNT — a fraction of the 15-kiloton bomb dropped by the United States on Hiroshima, Japan, in 1945. But the yield of North Korea’s nuclear tests crept up each time, and the most recent test, in September 2016, may have exceeded the size of the Hiroshima bomb.
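The rough conversion from shaking to bomb power rests on empirical scaling relations of the form mb = a + b*log10(W), where W is the yield. The coefficients below are illustrative values of the kind used for well-coupled hard-rock explosions, and the input magnitudes are hypothetical; real yield estimates depend strongly on burial depth and local geology.

```python
def yield_kilotons(mb, a=4.45, b=0.75):
    """Invert an empirical magnitude-yield relation, mb = a + b*log10(W),
    to estimate explosive yield W in kilotons of TNT. The coefficients
    are illustrative values for well-coupled hard-rock explosions; real
    estimates depend strongly on burial depth and local geology."""
    return 10 ** ((mb - a) / b)

# Hypothetical body-wave magnitudes for a small and a larger test:
print(round(yield_kilotons(4.3), 2))  # ~0.63 kt with these coefficients
print(round(yield_kilotons(5.2), 1))  # ~10 kt with these coefficients
```

Because yield is exponential in magnitude, small uncertainties in the coefficients or the magnitude translate into large uncertainties in the estimated kilotons, which is why depth matters so much in the next section.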

Digging deep
For an event of a particular seismic magnitude, the deeper the explosion, the more energetic the blast. A shallow, less energetic test can look a lot like a deeply buried, powerful blast. Scientists need to figure out precisely where each explosion occurred.

Mount Mantap is a rugged granite mountain with geology that complicates the physics of how seismic waves spread. Western experts do not know exactly how the nuclear bombs are placed inside the mountain before being detonated. But satellite imagery shows activity that looks like tunnels being dug into the mountainside. The tunnels could be dug two ways: straight into the granite or spiraled around in a fishhook pattern to collapse and seal the site after a test, Frank Pabian, a nonproliferation expert at Los Alamos National Laboratory in New Mexico, said in April in Denver at a meeting of the Seismological Society of America.

Researchers have been trying to figure out the relative locations of each of the five tests. By comparing the amplitudes of the P, S and Rayleigh waves, and calculating how long each would have taken to travel through the ground, researchers can plot the likely sites of the five blasts. That allows them to better tie the explosions to the infrastructure on the surface, like the tunnels spotted in satellite imagery.
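The idea of turning differential arrival times into relative positions can be sketched with a toy linearized inversion. This assumes a uniform P-wave speed and a plane-wave geometry, both simplifications; published studies use full velocity models and double-difference methods. All numbers below are synthetic.

```python
import numpy as np

def relative_offset(stations, dt, v=8.0):
    """Toy linearized relative location: estimate the horizontal offset
    (km) of a second blast from a first, given differential P arrival
    times. Moving the source by d toward a station shortens the travel
    time by (u . d) / v, where u is the unit vector toward that station.
    stations: (N, 2) station coordinates in km, relative to the first blast
    dt:       (N,) arrival-time differences in seconds (second minus first)
    v:        assumed uniform P-wave speed in km/s (a simplification)"""
    u = stations / np.linalg.norm(stations, axis=1, keepdims=True)
    G = -u / v                          # linearized forward model: dt = G @ d
    d, *_ = np.linalg.lstsq(G, dt, rcond=None)
    return d

# Synthetic check: four distant stations, a true offset of 2 km east
stations = np.array([[100.0, 0.0], [0.0, 100.0], [-100.0, 0.0], [0.0, -100.0]])
true_d = np.array([2.0, 0.0])
dt = -(stations / 100.0) @ true_d / 8.0  # times generated from the same model
print(np.round(relative_offset(stations, dt), 3))
```

The inversion recovers the 2-kilometer eastward offset exactly here because the synthetic times were generated from the same simplified model; with real data, unmodeled structure in the crust is precisely what produces the disagreements described below.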

One big puzzle arose after the 2009 test. Analyzing the times that seismic waves arrived at various measuring stations, one group calculated that the test occurred 2.2 kilometers west of the first blast. Another scientist found it only 1.8 kilometers away. The difference may not sound like a lot, Gibbons says, but it “is huge if you’re trying to place these relative locations within the terrain.” Move a couple of hundred meters to the east or west, and the explosion could have happened beneath a valley as opposed to a ridge — radically changing the depth estimates, along with estimates of the blast’s power.

Gibbons and colleagues think they may be able to reconcile these different location estimates. The answer lies in which station the seismic data come from. Studies that rely on data from stations within about 1,500 kilometers of Punggye-ri — as in eastern China — tend to estimate bigger distances between the locations of the five tests when compared with studies that use data from more distant seismic stations in Europe and elsewhere. Seismic waves must be leaving the test site in a more complicated way than scientists had thought, or else all the measurements would agree.
When Gibbons’ team corrected for the varying distances of the seismic data, the scientists came up with a distance of 1.9 kilometers between the 2006 and 2009 blasts. The team also pinpointed the other explosions. The September 2016 test turned out to be almost directly beneath the 2,205-meter summit of Mount Mantap, the group reported in January in Geophysical Journal International. That means the blast was, indeed, deeply buried and hence probably at least as powerful as the Hiroshima bomb for it to register as a magnitude 5.2 earthquake.

Other seismologists have been squeezing information out of the seismic data in a different way — not how far the signals traveled from the test blast, but what they passed through before being detected. Reiter and Seung-Hoon Yoo, also of Weston Geophysical, recently analyzed data from two seismic stations, one 370 kilometers to the north in China and the other 306 kilometers to the south in South Korea.

The scientists scrutinized the moments when the seismic waves arrived at the stations, in the first second of the initial P waves, and found slight differences between the wiggles recorded in China and South Korea, Reiter reported at the Denver conference. Those in the north showed a more energetic pulse rising from the wiggles in the first second; the southern seismic records did not. Reiter and Yoo think this pattern represents an imprint of the topography at Mount Mantap.

“One side of the mountain is much steeper,” Reiter explains. “The station in China was sampling the signal coming through the steep side of the mountain, while the southern station was seeing the more shallowly dipping face.” This difference may also help explain why data from seismic stations spanning the breadth of Japan show a slight difference from north to south. Those differences may reflect the changing topography as the seismic waves exited Mount Mantap during the test.

Learning from simulations
But there is only so much scientists can do to understand explosions they can’t get near. That’s where the test blasts in Nevada come in.

The tests were part of phase one of the Source Physics Experiment, a $40-million project run by the U.S. Department of Energy’s National Nuclear Security Administration. The goal was to set off a series of chemical explosions of different sizes and at different depths in the same borehole and then record the seismic signals on a battery of instruments. The detonations took place at the nuclear test site in southern Nevada, where between 1951 and 1992 the U.S. government set off 828 underground nuclear tests and 100 atmospheric ones, whose mushroom clouds were seen from Las Vegas, 100 kilometers away.

For the Source Physics Experiment, six chemical explosions were set off between 2011 and 2016, with yields of up to 5,000 kilograms of TNT equivalent and burial depths down to 87 meters. The biggest required high-energy-density explosives packed into a cylinder nearly a meter across and 6.7 meters long, says Beth Dzenitis, an engineer at Lawrence Livermore National Laboratory in California who oversaw part of the field campaign. Yet for all that firepower, the detonation barely registered on anything other than the instruments peppering the ground. “I wish I could tell you all these cool fireworks go off, but you don’t even know it’s happening,” she says.

The explosives were set inside granite rock, a material very similar to the granite at Mount Mantap. So the seismic waves racing outward behaved very much as they might at the North Korean nuclear test site, says William Walter, head of geophysical monitoring at Livermore. The underlying physics, describing how seismic energy travels through the ground, is virtually the same for both chemical and nuclear blasts.
The results revealed flaws in the models that researchers have been using for decades to describe how seismic waves travel outward from explosions. These models were developed to describe how the P waves compress rock as they propagate from large nuclear blasts like those set off starting in the 1950s by the United States and the Soviet Union. “That worked very well in the days when the tests were large,” Walter says. But for much smaller blasts, like those North Korea has been detonating, “the models didn’t work that well at all.”
Walter and Livermore colleague Sean Ford have started to develop new models that better capture the physics involved in small explosions. Those models should be able to describe the depth and energy release of North Korea’s tests more accurately, Walter reported at the Denver meeting.

A second phase of the Source Physics Experiment is set to begin next year at the test site, in a much more rubbly type of rock called alluvium. Scientists will use that series of tests to see how seismic waves are affected when they travel through fragmented rock as opposed to more coherent granite. That information could be useful if North Korea begins testing in another location, or if another country detonates an atomic bomb in fragmented rock.

For now, the world’s seismologists continue to watch and wait, to see what the North Korean government might do next. Some experts think the next nuclear test will come at a different location within Mount Mantap, to the south of the most recent tests. If so, that will provide a fresh challenge to the researchers waiting to unravel the story the seismic waves will tell.

“It’s a little creepy what we do,” Reiter admits. “We wait for these explosions to happen, and then we race each other to find the location, see how big it was, that kind of thing. But it has really given us a good look as to how [North Korea’s] nuclear program is progressing.” Useful information as the world’s nations decide what to do about North Korea’s rogue testing.

Neutrino experiment may hint at why matter rules the universe

A new study hints that neutrinos might behave differently than their antimatter counterparts. The result amplifies scientists’ suspicions that the lightweight elementary particles could help explain why the universe has much more matter than antimatter.

In the Big Bang, 13.8 billion years ago, matter and antimatter were created in equal amounts. To tip that balance to the universe’s current, matter-dominated state, matter and antimatter must behave differently, a concept known as CP, or charge-parity, violation.

In neutrinos, which come in three types — electron, muon and tau — CP violation can be measured by observing how neutrinos oscillate, or change from one type to another. Researchers with the T2K experiment found that muon neutrinos morphed into electron neutrinos more often than expected, while muon antineutrinos became electron antineutrinos less often. That suggests that the neutrinos were violating CP, the researchers concluded August 4 at a colloquium at the High Energy Accelerator Research Organization, KEK, in Tsukuba, Japan.

T2K scientists had previously presented a weaker hint of CP violation. The new result is based on about twice as much data, but the evidence is still not definitive. In physicist parlance, it is a “two sigma” measurement, an indicator of how statistically strong the evidence is. Physicists usually require five sigma to claim a discovery.
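The sigma language maps onto tail probabilities of a Gaussian distribution: an n-sigma result corresponds to the chance that a fluctuation at least that large would arise with no real effect present. A short sketch using the usual one-sided convention from particle physics:

```python
import math

def one_sided_p(n_sigma):
    """One-sided Gaussian tail probability for an n-sigma excess,
    the convention behind particle physicists' sigma talk."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"{one_sided_p(2):.3g}")  # 0.0228: suggestive, far from conclusive
print(f"{one_sided_p(3):.3g}")  # 0.00135: 'evidence' territory
print(f"{one_sided_p(5):.3g}")  # 2.87e-07: the discovery threshold
```

A two-sigma fluctuation happens by chance roughly one time in 44, which is why physicists hold out for five sigma, odds of about one in 3.5 million, before claiming a discovery.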

Even three sigma is still far away — T2K could reach that milestone by 2026. A future experiment, DUNE, now under construction at the Sanford Underground Research Laboratory in Lead, S.D., may reach five sigma. It is worth being patient, says physicist Chang Kee Jung of Stony Brook University in New York, who is a member of the T2K collaboration. “We are dealing with really profound problems.”

A new tool could one day improve Lyme disease diagnosis

A new testing method can distinguish between early Lyme disease and a similar tick-borne illness, researchers report. The approach may one day lead to a reliable diagnostic test for Lyme, an illness that can be challenging to identify.

Using patient blood serum samples, the test accurately discerned early Lyme disease from the similar southern tick-associated rash illness, or STARI, up to 98 times out of 100. When the comparison also included samples from healthy people, the method accurately identified early Lyme disease up to 85 times out of 100, beating a commonly used Lyme test’s rate of 44 of 100, researchers report online August 16 in Science Translational Medicine. The test relies on clues found in the rise and fall of the abundance of molecules that play a role in the body’s immune response.
“From a diagnostic perspective, this may be very helpful, eventually,” says Mark Soloski, an immunologist at Johns Hopkins Medicine who was not involved with the study. “That’s a really big deal,” he says, especially in areas such as the mid-Atlantic where Lyme and STARI overlap.

In the United States, Lyme disease is primarily caused by an infection with the bacteria Borrelia burgdorferi, which is spread by the bite of a black-legged tick. An estimated 300,000 cases of Lyme occur nationally each year. Patients usually develop a rash and fever, chills, fatigue and aches. Black-legged ticks live in the northeastern, mid-Atlantic and north-central United States, and the western black-legged tick resides along the Pacific coast.

An accurate diagnosis can be difficult early in the disease, says immunologist Paul Arnaboldi of New York Medical College in Valhalla, who was not involved in the study. Lyme disease is diagnosed based on the rash, symptoms and tick exposure. But other illnesses have similar symptoms, and the rash can be missed. A test for antibodies to the Lyme pathogen can aid diagnosis, but it works only after a patient has developed an immune response to the disease.

STARI, spread by the lone star tick, can begin with a rash and similar, though typically milder, symptoms. The pathogen responsible for STARI is still unknown, though B. burgdorferi has been ruled out. So far STARI has not been tied to arthritis or other chronic symptoms linked to Lyme, though the lone star tick has been connected to a serious allergy to red meat (SN: 8/19/17, p. 16). Parts of both ticks’ ranges overlap, adding to diagnosis difficulties.

John Belisle, a microbiologist at Colorado State University in Fort Collins, and his colleagues had previously shown that a testing method based on small molecules related to metabolism could distinguish between early Lyme disease and healthy serum samples. “Think of it as a fingerprint,” he says. The method takes note of differences in the abundance of metabolites, such as sugars, lipids and amino acids, involved in inflammation.
In the new work, Belisle and colleagues measured differences in the levels of metabolites in serum samples from Lyme and STARI patients. The researchers then developed a “fingerprint” based on 261 small molecules to differentiate between the two illnesses. To determine the accuracy, they tested another set of samples from patients with Lyme and STARI as well as those from healthy people. “We were able to distinguish all three groups,” says Belisle.
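The fingerprint idea amounts to classifying a sample by the overall pattern of its metabolite abundances. As a conceptual stand-in only — the actual study fit a statistical model over 261 metabolites, while this toy uses a nearest-centroid rule on made-up two-metabolite profiles:

```python
import numpy as np

def nearest_centroid_classify(train, labels, sample):
    """Conceptual stand-in for a metabolic-fingerprint test: assign a
    serum sample to whichever group's mean metabolite profile it sits
    closest to. The real study modeled 261 metabolites; the profiles
    used here are invented."""
    centroids = {lab: train[labels == lab].mean(axis=0)
                 for lab in np.unique(labels)}
    return min(centroids, key=lambda lab: np.linalg.norm(sample - centroids[lab]))

# Hypothetical abundance profiles (arbitrary units) for two metabolites:
train = np.array([[5.0, 1.0], [6.0, 1.2], [1.0, 4.0], [1.2, 5.0]])
labels = np.array(["lyme", "lyme", "stari", "stari"])
print(nearest_centroid_classify(train, labels, np.array([5.5, 1.1])))  # lyme
```

A new sample whose profile sits near the Lyme centroid is called Lyme; one near the STARI centroid is called STARI. The hard part in practice is choosing metabolites whose abundances separate the groups reliably.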

As a diagnostic test, “I think the approach has promise,” says Arnaboldi. But more work will be necessary to see if the method can sort out early Lyme disease, STARI and other tick-borne diseases in patients with unknown illnesses.

Having information about the metabolites abundant in STARI may also help researchers learn more about this disease, says Soloski. “This is going to spur lots of future studies.”

Nitty-gritty of Homo naledi’s diet revealed in its teeth

Give Homo naledi credit for originality. The fossils of this humanlike species previously revealed an unexpectedly peculiar body plan. Now its pockmarked teeth speak to an unusually hard-edged diet.

H. naledi displays a much higher rate of chipped teeth than other members of the human evolutionary family that once occupied the same region of South Africa, say biological anthropologist Ian Towle and colleagues. Dental damage of this kind results from frequent biting and chewing on hard or gritty objects, such as raw tubers dug out of the ground, the scientists report in the September American Journal of Physical Anthropology.
“A diet containing hard and resistant foods like nuts and seeds, or contaminants such as grit, is most likely for H. naledi,” says Towle, of Liverpool John Moores University in England.

Extensive tooth chipping shows that “something unusual is going on” with H. naledi’s diet, says paleoanthropologist Peter Ungar of the University of Arkansas in Fayetteville. He directs ongoing microscopic studies of H. naledi’s teeth that may provide clues to what this novel species ate.
Grit from surrounding soil can coat nutrient-rich, underground plant parts, including tubers and roots. Regularly eating those things can cause the type of chipping found on H. naledi teeth, says paleobiologist Paul Constantino of Saint Michael’s College in Colchester, Vt. “Many animals cannot access these underground plants, but primates can, especially if they use digging sticks.”
H. naledi fossils, first found in South Africa’s subterranean Dinaledi Chamber and later a second nearby cave (SN: 6/10/17, p. 6), came from a species that lived between 236,000 and 335,000 years ago. It had a largely humanlike lower body, a relatively small brain and curved fingers suited for climbing trees.

Towle’s group studied 126 of 156 permanent H. naledi teeth found in Dinaledi Chamber. Those finds come from a minimum of 12 individuals, nine of whom had at least one chipped chopper. Two of the remaining three individuals were represented by only one tooth. Teeth excluded from the study were damaged, had not erupted above the gum surface or showed signs of having rarely been used for chewing food.

Chips appear on 56, or about 44 percent, of H. naledi teeth from Dinaledi Chamber, Towle’s team says. Half of those specimens sustained two or more chips. About 54 percent of molars and 44 percent of premolars, both found toward the back of the mouth, display at least one chip. For teeth at the front of the mouth, those figures fell to 25 percent for canines and 33 percent for incisors.

Chewing on small, hard objects must have caused all those chips, Towle says. Using teeth as tools, say to grasp animal hides, mainly damages front teeth, not cheek teeth as in H. naledi. Homemade toothpicks produce marks between teeth unlike those on the H. naledi finds.

Two South African hominids from between roughly 1 million and 3 million years ago, Australopithecus africanus and Paranthropus robustus, show lower rates of tooth chipping than H. naledi, at about 21 percent and 13 percent, respectively, the investigators find. Researchers have suspected for decades that those species ate hard or gritty foods, although ancient menus are difficult to reconstruct (SN: 6/4/11, p. 8). Little evidence exists on the extent of tooth chipping in ancient Homo species. But if H. naledi consumed underground plants, Stone Age Homo sapiens in Africa likely did as well, Constantino says.

In further tooth comparisons with living primates, baboons — consumers of underground plants and hard-shelled fruits — showed the greatest similarity to H. naledi, with fractures on 25 percent of their teeth. That figure reached only about 11 percent in gorillas and 5 percent in chimpanzees.

Human teeth found at sites in Italy, Morocco and the United States show rates and patterns of tooth fractures similar to those of H. naledi, Towle adds. Two of those sites date to between 1,000 and 1,700 years ago. The third site, in Morocco, dates to between 11,000 and 12,000 years ago. People at all three sites are suspected to have had diets unusually heavy on gritty or hard-shelled foods, the scientists say.

Chips mar 50 percent of H. naledi’s right teeth, versus 38 percent of its left teeth. That right-side tilt might signify that the Dinaledi crowd were mostly right-handers who typically placed food on the right side of their mouths. But more fossil teeth are needed to evaluate that possibility, Towle cautions.

Star that exploded in 1437 tracked to its current position

Some stars erupt like clockwork. Astronomers have tracked down a star that Korean astronomers saw explode nearly 600 years ago and confirmed that it has had more outbursts since. The finding suggests that what were thought to be three distinct classes of stellar objects are actually the same kind of object seen at different times, offering new clues to the life cycles of stars.

On March 11, 1437, Korean royal astronomers saw a new “guest star” in the tail of the constellation Scorpius. The star glowed for 14 days, then faded. The event was what’s known as a classical nova explosion, which occurs when a dense stellar corpse called a white dwarf steals enough material from an ordinary companion star for its gas to spontaneously ignite. The resulting explosion can be up to a million times as bright as the sun, but unlike supernovas, classical novas don’t destroy the star.
Astronomer Michael Shara of the American Museum of Natural History in New York City and colleagues used digitized photographic plates dating from as early as 1923 to trace a modern star back to the nova. The team tracked a single star as it moved away from the center of a shell of hot gas, the remnants of an old explosion, thus showing that the star was responsible for the nova. The researchers also saw the star, which they named Nova Scorpii AD 1437, give smaller outbursts called dwarf novas in the 1930s and 1940s. The findings were reported in the Aug. 31 Nature.

The discovery fits with a proposal Shara and colleagues made in the 1980s. They suggested that three different stellar observations — bright classical nova explosions, dwarf nova outbursts and an intermediate stage where a white dwarf is not stealing enough material to erupt — are all different views of the same system.

“In biology, we might say that an egg, a larva, a pupa and a butterfly are all the same system seen at different stages of development,” Shara says.

Learning takes brain acrobatics

Peer inside the brain of someone learning. You might be lucky enough to spy a synapse pop into existence. That physical bridge between two nerve cells seals new knowledge into the brain. As new information arrives, synapses form and strengthen, while others weaken, making way for new connections.

You might see more subtle changes, too, like fluctuations in the levels of signaling molecules, or even slight boosts in nerve cell activity. Over the last few decades, scientists have zoomed in on these microscopic changes that happen as the brain learns. And while that detailed scrutiny has revealed a lot about the synapses that wire our brains, it isn’t enough. Neuroscientists still lack a complete picture of how the brain learns.

They may have been looking too closely. When it comes to the neuroscience of learning, zeroing in on synapse action misses the forest for the trees.

A new, zoomed-out approach attempts to make sense of the large-scale changes that enable learning. By studying the shifting interactions between many different brain regions over time, scientists are beginning to grasp how the brain takes in new information and holds onto it.
These kinds of studies rely on powerful math. Brain scientists are co-opting approaches developed in other network-based sciences, borrowing tools that reveal in precise, numerical terms the shape and function of the neural pathways that shift as human brains learn.

“When you’re learning, it doesn’t just require a change in activity in a single region,” says Danielle Bassett, a network neuroscientist at the University of Pennsylvania. “It really requires many different regions to be involved.” Her holistic approach asks, “what’s actually happening in your brain while you’re learning?” Bassett is charging ahead to both define this new field of “network neuroscience” and push its boundaries.

“This line of work is very promising,” says neuroscientist Olaf Sporns of Indiana University Bloomington. Bassett’s research, he says, has great potential to bridge gaps between brain-imaging studies and scientists’ understanding of how learning happens. “I think she’s very much on the right track.”
Already, Bassett and others have found tantalizing hints that the brains that learn best have networks that are flexible, able to rejigger connections on the fly to allow new knowledge in. Some brain regions always communicate with the same neural partners, rarely switching to others. But brain regions that exhibit the most flexibility quickly swap who they’re talking with, like a parent who sends a birthday party invite to the preschool e-mail list, then moments later, shoots off a work memo to colleagues.
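Flexibility, in these multilayer network studies, has a simple operational definition: how often a brain region changes its community allegiance from one time window to the next. A minimal sketch of that calculation, with invented community labels:

```python
def flexibility(community_labels):
    """Flexibility of one brain region: the fraction of consecutive
    time windows in which its community assignment (its set of
    conversation partners) changes, following the usual definition
    from multilayer network studies of learning."""
    pairs = list(zip(community_labels, community_labels[1:]))
    return sum(a != b for a, b in pairs) / len(pairs)

# One region that keeps switching partners, one that mostly stays put:
print(flexibility([1, 2, 1, 3, 2]))  # 1.0
print(flexibility([1, 1, 1, 1, 2]))  # 0.25
```

In a full analysis, the community labels themselves come from detecting modules in brain-wide connectivity networks measured window by window; averaging this score over all regions gives a whole-brain flexibility that can be compared against learning performance.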

In a few studies, researchers have witnessed this flexibility in action, watching networks reconfigure as people learn something while inside a brain scanner. Network flexibility may help several types of learning, though too much flexibility may be linked to disorders such as schizophrenia, studies suggest.

Not surprisingly, some researchers are rushing to apply this new information, testing ways to boost brain flexibility for those of us who may be too rigid in our neural connections.

“These are pretty new ideas,” says cognitive neuroscientist Raphael Gerraty of Columbia University. The mathematical and computational tools required for this type of research didn’t exist until recently, he says. So people just weren’t thinking about learning from a large-scale network perspective. “In some ways, it was a pretty boring mathematical, computational roadblock,” Gerraty says. But now the road is clear, opening “this conceptual avenue … that people can now explore.”

It takes a neural village
That conceptual avenue is more of a map, made of countless neural roads. Even when a person learns something very simple, large swaths of the brain jump in to help. Learning an easy sequence of movements, like tapping out a brief tune on a keyboard, prompts activity in the part of the brain that directs finger movements. The action also calls in brain areas involved in vision, decision making, memory and planning. And finger taps are a pretty basic type of learning. In many situations, learning calls up even more brain areas, integrating information from multiple sources, Gerraty says.

He and colleagues caught glimpses of some of these interactions by scanning the brains of people who had learned associations between two faces. Only one of the faces was then paired with a reward. In later experiments, the researchers tested whether people could figure out that the halo of good fortune associated with the one face also extended to the face it had been partnered with earlier. This process, called “transfer of learning,” is something that people do all the time in daily life, such as when you’re wary of the salad at a restaurant that recently served tainted cheese.

Study participants who were good at applying knowledge about one thing — in this case, a face — to a separate thing showed particular brain signatures, Gerraty and colleagues reported in 2014 in the Journal of Neuroscience. Connections between the hippocampus, a brain structure important for memory, and the ventromedial prefrontal cortex, involved in self-control and decision making, were weaker in good learners than in people who struggled to learn. The scans, performed several days after the learning task, revealed inherent differences between brains, the researchers say. The experiment also turned up differences in connectivity between these regions and larger-scale networks that span the brain.

Children who have difficulty learning math, when scanned, also show unexpected brain connectivity, according to research by neuroscientist Vinod Menon of Stanford University and colleagues. Compared with kids without disabilities, children with developmental dyscalculia who were scanned while doing math problems had more connections, particularly among regions involved in solving math problems. That overconnectivity, described in 2015 in Developmental Science, was a surprise, Menon says, since earlier work had suggested that these math-related networks were too weak. But it may be that too many links create a system that can’t accommodate new information. “The idea is that if you have a hyperconnected system, it’s not going to be as responsive,” he says.

There’s a balance to be struck, Menon says. Neural pathways that are too weak can’t carry necessary information, and pathways that are too connected won’t allow new information to move in. But the problem isn’t as simple as that. “It’s not that everything is changing everywhere,” he says. “There is a specificity to it.” Some connections are more important than others, depending on the task.

Neural networks need to shuttle information around quickly and fluidly. To really get a sense of this movement as opposed to snapshots frozen in time, scientists need to watch the brain as it learns. “The next stage is to figure out how the networks actually shift,” Menon says. “That’s where the studies from Dani Bassett and others will be very useful.”

Flexing in real time
Bassett and colleagues have captured these changing networks as people learn. Volunteers were given simple sequences to tap out on a keyboard while undergoing a functional MRI scan. During six weeks of scanning as people learned the task, neural networks in their brains shifted around. Some connections grew stronger and some grew weaker, Bassett and her team reported in Nature Neuroscience in 2015.

People who quickly learned to tap the correct sequence of keys showed an interesting neural trait: As they learned, they shed certain connections between their frontal cortex, the outermost layer of the brain toward the front of the head, and the cingulate, which sits toward the middle of the brain. This connection has been implicated in directing attention, setting goals and making plans, skills that may be important for the early stages of learning but not for later stages, Bassett and colleagues suspect. Compared with slow learners, fast learners were more likely to have shed these connections, a process that may have made their brains more efficient.

Flexibility seems to be important for other kinds of learning too. Reinforcement learning, in which right answers get a thumbs up and wrong answers are called out, also taps into brain flexibility, Gerraty, Bassett and others reported online May 30 at bioRxiv.org. This network comprises many points on the cortex, the brain’s outer layer, and a deeper structure known as the striatum. Other work on language comprehension, published by Bassett and colleagues last year in Cerebral Cortex, found some brain regions that were able to quickly form and break connections.

These studies captured brains in the process of learning, revealing “a much more interesting network structure than what we previously thought when we were only looking at static snapshots,” Gerraty says. The learning brain is incredibly dynamic, he says, with modules breaking off from partners and finding new ones.

While the details of those dynamics differ from study to study, there is an underlying commonality: “It seems that part of learning about the world is having parts of your brain become more flexible, and more able to communicate with different areas,” Gerraty says. In other words, the act of learning takes flexibility.
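In such studies, “flexibility” is usually a concrete number. A common recipe, sketched here with invented community labels (an illustration, not any particular paper’s code), divides a scan into time windows, assigns each brain region to a network module in each window, and counts how often the region switches allegiance:

```python
# Hypothetical sketch of a common "flexibility" metric from multilayer
# network studies: the fraction of consecutive time windows in which a
# brain region changes its community (module) assignment.
# The community labels below are made up for illustration.

def flexibility(labels):
    """labels: list of community IDs for one region, one per time window."""
    if len(labels) < 2:
        return 0.0
    switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return switches / (len(labels) - 1)

# A "rigid" region keeps the same partners; a "flexible" one switches often.
rigid = flexibility([1, 1, 1, 1, 1])     # -> 0.0
flexible = flexibility([1, 2, 1, 3, 2])  # -> 1.0
```

A region scoring near 0 keeps the same partners throughout; a region scoring near 1 reshuffles constantly, like the e-mail-juggling parent above.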

But too much of a good thing may be bad. While performing a recall task in a scanner, people with schizophrenia had higher flexibility among neural networks across the brain than did healthy people, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. “That suggests to me that while flexibility is good for healthy people, there is perhaps such a thing as too much flexibility,” Bassett says.

Just how this flexibility arises, and what controls it, is unknown. Andrea Stocco, a cognitive neuroscientist at the University of Washington in Seattle, suspects that a group of brain structures called the basal ganglia, deep within the brain, has an important role in controlling flexibility. He compares this region, which includes the striatum, to an air traffic controller who shunts information to where it’s most needed. One of the basal ganglia’s jobs seems to be shutting things down. “Most of the time, the basal ganglia is blocking something,” he says. Other researchers have found evidence that crucial “hubs” in the cortex help control flexibility.

Push for more
Researchers don’t yet know how measures of flexibility in brain regions relate to the microscopic changes that accompany learning. For now, the macro and the micro views of learning are separate worlds. Despite that missing middle ground, researchers are charging ahead, looking for signs that neural flexibility might offer a way to boost learning aptitude.

It’s possible that external brain stimulation may enhance flexibility. After receiving brain stimulation carefully aimed at a known memory circuit, people were better able to recall lists of words, scientists reported May 8 in Current Biology. If stimulation can boost memory, some argue, the technique could enhance flexibility and perhaps learning too.

Certain drugs show promise. DXM, found in some cough medicines, blocks proteins that help regulate nerve cell chatter. Compared with a placebo, the compound made some brain regions in healthy people more flexible and quicker to switch partners, Bassett and colleagues reported last year in the Proceedings of the National Academy of Sciences. She is also studying whether neurofeedback — a technique in which people watch real-time readouts of their own brain activity and try to nudge their brain patterns toward greater flexibility — can help.

Something even simpler might work for boosting flexibility. On March 31 in Scientific Reports, Bassett and colleagues described their network analyses of an unusual subject. For a project called MyConnectome, neuroscientist Russ Poldrack, then at the University of Texas at Austin, had three brain scans a week for a year while assiduously tracking measures that included mood. Bassett and her team applied their mathematical tools to Poldrack’s data to get measurements of his neural flexibility on any given scan day. The team then looked for associations with mood. The standout result: When Poldrack was happiest, his brain was most flexible, for reasons that aren’t yet clear. (Flexibility was lowest when he was surprised.)

Those results are from a single person, so it’s unknown how well they would generalize to others. What’s more, the study identifies only a link; it can’t say whether happiness causes more flexibility or vice versa. But the idea is intriguing, if a bit obvious, Bassett says. “Of course, no teacher is really going to say we’re doing rocket science if we tell them we should make the kids happier and then they’ll learn better.” But finding out exactly how happiness relates to learning is important, she says.

The research is just getting started. But already, insights on learning are coming quickly from the small group of researchers viewing the brain as a matrix of nodes and links that deftly shift, swap and rearrange themselves. Zoomed out, network science brings to the brain “a whole new set of hypotheses and new ways of testing them,” Bassett says.

Microbes hobble a widely used chemo drug

Some bacteria may shield tumor cells against a common chemotherapy drug.

Certain types of bacteria make an enzyme that inactivates the drug gemcitabine, researchers report in the Sept. 15 Science. Gemcitabine is used to treat patients with pancreatic, lung, breast and bladder cancers.

Bacteria that produce the enzyme cytidine deaminase converted the drug to an inactive form. That allowed tumor cells to survive gemcitabine treatment in lab dishes and mouse studies, Leore Geller of the Weizmann Institute of Science in Rehovot, Israel, and colleagues discovered. More than 98 percent of the enzyme-producing microbes belong to the Gammaproteobacteria class, which includes E. coli and about 250 bacterial genera.

Pancreatic tumors taken from human patients also carried the enzyme-producing bacteria. Of 113 pancreatic ductal adenocarcinoma samples studied, 86 contained gemcitabine-inactivating bacteria.

Antibiotics may correct the problem. In the study, Geller and colleagues infected mice that had colon cancer with the enzyme-producing bacteria. Tumors grew rapidly in infected mice treated with gemcitabine alone. Giving the mice antibiotics helped gemcitabine kill tumor cells, increasing the number of tumor cells going through a type of cell death called apoptosis from about 15 percent to 60 percent or more. That result may indicate that combinations of gemcitabine and antibiotics could make chemotherapy more effective for some cancer patients.

Body clock mechanics wins U.S. trio the Nobel Prize in physiology or medicine

Discoveries about the clocklike ups and downs of daily life have won Jeffrey C. Hall, Michael Rosbash and Michael W. Young the Nobel Prize in physiology or medicine.

Circadian rhythms are daily cycles of hormones, gene activity and other biological processes that govern sleep, body temperature and metabolism. When these rhythms get thrown out of whack, there can be serious health consequences, including increased risk of diabetes, heart disease and Alzheimer’s disease.

Hall and Rosbash discovered the first molecular gear of the circadian clockworks: A protein called Period increases and decreases in abundance on a regular cycle during the day. Young discovered that another protein called Timeless works with Period to drive the clock. Young also identified additional components of the clock’s machinery.

Nanoscale glitches let flowers make a blue blur that bees can see

A bit of imperfection could be perfect for flowers creating a “blue halo” effect that bees can see.

At least a dozen families of flowering plants, from hibiscuses to daisy relatives, have at least one species that can create a bluish-ultraviolet tinge using arrays of nanoscale ridges on petals, an international research team reports online October 18 in Nature. These arrays could be the first shown to benefit from the sloppiness of natural fabrication, says coauthor Silvia Vignolini, a physicist specializing in nanoscale optics at the University of Cambridge.
Flowers, of course, can’t reach industrial standards for uniform nanoscale fabrication. Yet the halo may be a case where natural imperfections may be important to a flower’s display. Tests with artificial flowers showed that the nanoglitches made it easier for bees to learn that a showy petal meant a sugary reward, Vignolini and colleagues found.

Blues are rare as actual pigments in living things (SN: 12/10/16, p. 4). Color in the wings of Morpho butterflies or blue jay feathers, for instance, comes from nanoscale structures that contain no pigments but create colorful illusions by muting some wavelengths of light while intensely reflecting others (SN: 6/11/16, p. 32).
Flower petals make their blue halo illusion with somewhat irregular versions of what are called diffraction gratings, rows of ridges like the recording surface on a CD. A perfectly regular array of ridges would create true iridescence, changing color depending on the angle a viewer takes. The flowers’ imperfections, variations in ridge height and spacing, weaken or destroy the iridescence. A viewer swooping by would see less color shifting and more of a bluish-ultraviolet tinge reflected at a wider range of angles.
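The optics behind this can be sketched with the standard grating equation, d sin(θ) = mλ, which relates ridge spacing d to the angle θ at which light of wavelength λ is diffracted. The numbers below are invented for illustration (roughly the micrometer-scale spacings typical of such petal ridges), not values from the study:

```python
# Illustrative sketch (not the study's analysis): the grating equation
# d * sin(theta) = m * lambda gives the angle of the m-th order
# diffracted beam at normal incidence. Spacings here are invented.
import math
import random

def first_order_angle(spacing_nm, wavelength_nm=400):
    # blue/near-UV light, first diffraction order (m = 1)
    return math.degrees(math.asin(wavelength_nm / spacing_nm))

# A perfect grating sends one wavelength to one sharp angle...
print(first_order_angle(1500))

# ...while disordered spacings spread that wavelength over a range of
# angles, washing out iridescence into a broad bluish "halo."
random.seed(0)
angles = [first_order_angle(1500 + random.uniform(-300, 300))
          for _ in range(5)]
print(min(angles), max(angles))
```

With one fixed spacing, each wavelength exits at one sharp angle and the perceived color shifts as the viewer moves (iridescence); jittered spacings smear each wavelength over a band of angles, which is the blurred halo effect.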

To see whether bees respond more to iridescence or a blue halo, researchers created sets of artificial flowers, pieces of epoxy resin with some kind of nanoscale-ridged array. A petal-scale project was huge compared with the usual nanoscale experiments, requiring marathon fabrication sessions. “We were a pain to everybody,” Vignolini says.

In two tests, researchers offered bumblebees a pair of “flowers,” one that held sugar water and one with a nasty-tasting solution, to see how quickly bees would learn to distinguish sweet from foul. When the flower’s nanoridges had imperfections creating a blue halo, bees learned the task faster than when the flower had perfect iridescence. Imperfect arrays were actually an advantage for the flowers in creating displays pollinating bees find memorable, the researchers conclude.

Such disorder in nature’s structural color (versus pigments) has shown up before, as in obviously jumbled color-trick structures in bird feathers. Before the tests, though, it was unclear whether flowers would benefit from perfect iridescence and were just falling short in growing perfect arrays. The blue halo might have been merely a side effect of challenging botanical fabrication. The bee experiments, however, showed the opposite, the researchers say. These are the first tests to show that some disorder is not just a downside of natural fabrication but in itself “has a function,” Vignolini says.

That result makes sense to visual ecologist Nathan Morehouse of the University of Cincinnati. Nanostructures that iridesce may often just be a way birds or butterflies can create an unusual color rather than a way to produce iridescence for its own sake. The shifting colors might even have a downside. By definition, true iridescence changes color as an insect or bird changes its angle of approach, and so may not be the best form for an easy-to-remember signal. “Iridescence itself is something they just have to manage,” he suggests.

Alligators eat sharks — and a whole lot more

Alligators don’t just stick to freshwater and the prey they find there. These crafty reptiles can live quite easily, at least for a bit, in salty waters and find plenty to eat — including crabs, sea turtles and even sharks.

“They should change the textbooks,” says James Nifong, an ecologist with the Kansas Cooperative Fish and Wildlife Research Unit at Kansas State University in Manhattan, who has spent years documenting the estuarine gator diet.

Nifong’s most recent discovery, splashed all over the news last month, is that the American alligator (Alligator mississippiensis) eats at least three species of shark and two species of rays, he and wildlife biologist Russell Lowers report in the September Southeastern Naturalist.

Lowers captured a female gator with a young Atlantic stingray in her jaws near where he works at Kennedy Space Center in Cape Canaveral, Florida. And he and Nifong gathered several other eyewitness accounts: A U.S. Fish and Wildlife employee spotted a gator consuming a nurse shark in a Florida mangrove swamp in 2003. A birder photographed an alligator eating a bonnethead shark in a Florida salt marsh in 2006. One of Nifong’s collaborators, a marine turtle researcher, saw gators consuming both bonnethead and lemon sharks in the late 1990s. And Nifong found yet another report of a gator eating a bonnethead shark in Hilton Head, S.C., after their paper was published. All of these snacks required gators to venture into salty waters.

But shark may not be the most surprising item on the alligator estuarine menu. Nifong spent years catching hundreds of wild gators and pumping their stomachs to figure out what they eat, work that relies “on electrical tape, duct tape and zip ties,” Nifong says. And he found that the menu is pretty long.

To snag an alligator, he uses a big blunted hook or, with smaller animals, just grabs the animal and hauls it into the boat. He gets a noose around its neck. Then the researchers tape the mouth shut, take body measurements (everything from weight to toe length) and get blood or urine samples.

Once that’s out of the way, the team will strap the gator to a board with Velcro ties or rope. Then, it’s time to untape the mouth, quickly insert a piece of pipe to hold it open, and tape the alligator’s mouth around the pipe. The pipe, Nifong says, is there “so they can’t bite down.” And that’s important, because next someone has to stick a tube down the gator’s throat and hold it there to keep the animal’s throat open.

Finally, “we fill [the stomach] up with water very slowly so we don’t injure the animal,” Nifong says. “Then we do basically the Heimlich maneuver.” Pressing down on the abdomen forces the gator to give up its stomach contents. Usually.

“Sometimes it goes better than other times,” he says. “They can just decide to not let it out.” Then the researchers carefully undo all their work to let the gator loose.

Back in the lab, Nifong and his colleagues teased out what they could identify in those stomach contents and looked for more clues about the animals’ diet in the blood samples. They found that the gators were eating a rich and varied diet, including small fish, mammals, birds, insects and crustaceans. They’ll even eat fruit and seeds. The sharks and rays didn’t show up in these studies (nor did sea turtles, which gators have also been spotted munching on). But Nifong and Lowers speculate that’s because the tissue of those animals gets digested very quickly. So if a gator had eaten a shark more than a few days before being caught, there was no way to know.

Because alligators don’t have any salt glands, “they’re subject to the same pressures as me or you when being out in saltwater,” Nifong says. “You’re losing water, and you’re increasing salt in your blood system.” That can lead to stress and even death, he notes. So the gators tend to just go back and forth between saltwater and freshwater. They can also close off their throat with a cartilaginous shield and shut their nostrils to keep salty water out. And when they eat, they’ll tip their head up to let the saltwater drain out before gulping down their catch.

What alligators eat isn’t as important a finding as the discovery that they regularly travel between saltwater and freshwater environments, Nifong says. And, he notes, “it occurs across a wide variety of habitats across the U.S. southeast.” That’s important because the gators are moving nutrients from rich marine waters into poorer, fresh waters. And they may be having a larger effect on estuarine food webs than anyone had imagined.

For instance, one of the prey items on the alligator menu is blue crab. Gators “scare the bejesus out of them,” Nifong says. And when gators are around, blue crabs decrease their predation of snails, which might then eat more of the cordgrass that forms the base of the local ecosystem. “Understanding that an alligator has a role in that kind of interaction,” Nifong points out, is important when planning conservation efforts.