sciencespies · 2 years ago
Text
Tracking air pollution disparities -- daily -- from space
https://sciencespies.com/environment/tracking-air-pollution-disparities-daily-from-space/
Studies have shown that pollution, whether from factories or traffic-snarled roads, disproportionately affects communities where economically disadvantaged people and Hispanic, Black and Asian people live. As technology has improved, scientists have begun documenting these disparities in detail, but information on daily variations has been lacking. Today, scientists report preliminary work calculating how inequities in exposure fluctuate from day to day across 11 major U.S. cities. In addition, they show that in some places, climate change could exacerbate these differences.
The researchers will present their results at the fall meeting of the American Chemical Society (ACS).
Air pollution levels can vary significantly across relatively short distances, dropping off a few hundred yards from a freeway, for example. Researchers, including Sally Pusede, Ph.D., have used satellite and other observations to determine how air quality varies on a small geographic scale, at the level of neighborhoods.
But this approach overlooks another crucial variable. “When we regulate air pollution, we don’t think of it as remaining constant over time, we think of it as dynamic,” says Pusede, the project’s principal investigator. “Our new work takes a step forward by looking at how these levels vary from day to day,” she says.
Information about these fluctuations can help pinpoint sources of pollution. For instance, in research reported last year, Pusede and colleagues at the University of Virginia found that disparities in air quality across major U.S. cities decreased on weekends. Their analysis tied this drop to the reduction of deliveries by diesel-fueled trucks. On weekends, more than half of such trucks are parked.
Pusede’s research focuses on the gas NO2, which is a component of the complex brew of potentially harmful compounds produced by combustion. To get a sense of air pollution levels, scientists often look to NO2. But it’s not just a proxy — exposure to high concentrations of this gas can irritate the airways and aggravate pulmonary conditions. Inhaling elevated levels of NO2 over the long term can also contribute to the development of asthma.
The team has been using data on NO2 collected almost daily by a space-based instrument known as TROPOMI, which they confirmed with higher resolution measurements made from a similar sensor on board an airplane flown as part of NASA’s LISTOS project. They analyzed these data across small geographic regions, called census tracts, that are defined by the U.S. Census Bureau. In a proof-of-concept project, they used this approach to analyze initial disparities in Houston, and later applied these data-gathering methods to study daily disparities over New York City and Newark, New Jersey.
Now, they have analyzed daily variations in satellite-based data for 11 more cities beyond New York City and Newark: Atlanta, Baltimore, Chicago, Denver, Houston, Kansas City, Los Angeles, Phoenix, Seattle, St. Louis and Washington, D.C. A preliminary analysis found the highest average disparity in Los Angeles for Black, Hispanic and Asian communities in the lowest socioeconomic status (SES) tracts. They experienced an average of 38% higher levels of pollution than their non-Hispanic white, higher-SES counterparts in the same city, although disparities on some days were much higher. Washington, D.C., had the lowest disparity, with an average of 10% higher levels in Black, Hispanic and Asian communities in low-income tracts.
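The disparity figures above boil down to a relative difference in mean NO2 between groups of census tracts. A minimal sketch, using invented numbers (the tract values, groupings and group sizes below are hypothetical, not the study's data):

```python
# Relative NO2 disparity between two groups of census tracts.
# All values here are made up for illustration.

def mean_no2(tracts):
    """Average NO2 column (1e15 molecules/cm^2) over a list of tracts."""
    return sum(tracts) / len(tracts)

# Hypothetical daily TROPOMI-style NO2 columns by tract group
low_ses_minority_tracts = [8.2, 7.9, 9.1, 8.6]
high_ses_white_tracts = [6.1, 5.8, 6.4, 6.5]

disparity = (mean_no2(low_ses_minority_tracts)
             / mean_no2(high_ses_white_tracts) - 1) * 100
print(f"relative disparity: {disparity:.0f}%")  # prints "relative disparity: 36%"
```

Computed daily, a series of such numbers is what lets the researchers track how the disparity fluctuates from one day to the next.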
In these cities, as in New York City and Newark, the researchers also analyzed the data to see whether they could identify any links with wind and heat — both factors that are expected to change as the world warms. Although the analysis is not yet complete, the team has so far found a direct connection between stagnant air and uneven pollution distribution, which was not surprising to the team because winds disperse pollution. Because air stagnation is expected to increase in the northeastern and southwestern U.S. in the coming years, this result suggests uneven air pollution distribution could worsen in these regions, too, if actions to reduce emissions are not taken. The team found a less robust connection with heat, though a correlation existed. Hot days are expected to increase across the country with climate change. Thus, the researchers say that if greenhouse gas emissions aren’t reduced soon, people in these communities could face more days in which conditions are hazardous to their health from the combination of NO2 and heat impacts.
Pusede hopes to see this type of analysis used to support communities fighting to improve air quality. “Because we can get daily data on pollutant levels, it’s possible to evaluate the success of interventions, such as rerouting diesel trucks or adding emissions controls on industrial facilities, to reduce them,” she says.
The researchers acknowledge support and funding from NASA and the National Science Foundation.
Video: https://youtu.be/SbQ87rZq9MA
#Environment
Quantum magnet is billions of times colder than interstellar space
https://sciencespies.com/physics/quantum-magnet-is-billions-of-times-colder-than-interstellar-space/
A magnet made out of ytterbium atoms is only a billionth of a degree warmer than absolute zero. Understanding how it works could help physicists build high temperature superconductors
Physics 1 September 2022
By Karmela Padavic-Callaghan
Ytterbium atoms have been used to make a very cold magnet (Image: Carlos Clarivan/Science Photo Library)
A new kind of quantum magnet is made out of atoms only a billionth of a degree warmer than absolute zero – and physicists are not sure how it behaves.
Regular magnets repel or attract magnetic objects depending on whether electrons inside the magnet are in an “up” or a “down” quantum spin state, a property analogous to saying where their north and south poles would be if the particles were tiny bar magnets. However, this isn’t the only property that can be used to build a magnet.
Kaden Hazzard at Rice University in Texas and his colleagues used ytterbium atoms to make a magnet based on a spin-like property that has six options, each labelled with a colour.
The researchers confined the atoms in a vacuum in a small glass-and-metal box, then used laser beams to cool them down. The push from the laser beam made the most energetic atoms release some energy, lowering the overall temperature, much as blowing on a cup of tea cools it.
They also used lasers to arrange the atoms in different configurations to produce magnets. Some were one-dimensional like a wire, others were two-dimensional like a thin sheet of a material or three-dimensional like a piece of a crystal.
The atoms arranged in lines and sheets reached about 1.2 nanokelvin, more than 2 billion times colder than interstellar space. For the atoms in three-dimensional arrangements, the situation is so complex the researchers are still figuring out the best way to measure the temperature.
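The rough arithmetic behind "more than 2 billion times colder" is a simple ratio, if one takes interstellar space at about the 2.7-kelvin temperature of the cosmic microwave background (an assumed reference value, not a figure from the study):

```python
# Sanity-check the "billions of times colder" claim as a temperature ratio.
# The 2.7 K reference for interstellar space is an assumption for illustration.
t_interstellar = 2.7   # kelvin, roughly the cosmic microwave background
t_magnet = 1.2e-9      # kelvin, the 1.2 nanokelvin reached by the atoms
ratio = t_interstellar / t_magnet
print(f"{ratio:.2e}")  # prints "2.25e+09", i.e. more than 2 billion times colder
```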
“Our colleagues achieved the coldest fermions in the universe. Thinking about experimenting on this ten years ago, it looked like a theorist’s dream,” says Hazzard.
Physicists have long been interested in how atoms interact in exotic magnets like this because they suspect that similar interactions happen in high temperature superconductors – materials that perfectly conduct electricity. By better understanding what happens, they could build better superconductors.
There have been theoretical calculations about such magnets, but they have failed to predict exact colour-state patterns or exactly how magnetic these magnets can be, says co-author Eduardo Ibarra-García-Padilla. He says that he and colleagues carried out some of the best calculations yet while analysing the experiment, but could still only predict the colours of eight atoms at a time in the line and sheet configurations, out of the thousands of atoms in the experiment.
Victor Gurarie at the University of Colorado Boulder says that the experiment was just cold enough for atoms to start “paying attention” to the quantum colour states of their neighbours, a property that does not influence how they interact when warm. Because computations are so difficult, similar future experiments may be the only method for studying these quantum magnets, he says.
Reference: Nature Physics, DOI: 10.1038/s41567-022-01725-6
#Physics
Ocean cooling over millennia led to larger fish
https://sciencespies.com/nature/ocean-cooling-over-millennia-led-to-larger-fish/
Earth’s geological history is characterized by many dynamic climate shifts that are often associated with large changes in temperature. These environmental shifts can lead to trait changes, such as body size, that can be directly observed using the fossil record.
To investigate whether temperature shifts that occurred before direct measurements were recorded (shifts reconstructed through paleoclimatology) are correlated with body size changes, several members of the University of Oklahoma's Fish Evolution Lab decided to test their hypothesis using tetraodontiform fishes as a model group. Tetraodontiform fishes are primarily tropical marine fishes and include pufferfish, boxfishes and filefish, among others.
The study was led by Dahiana Arcila, assistant professor of biology and assistant curator at the Sam Noble Museum of Natural History, with Ricardo Betancur, assistant professor of biology, along with biology graduate student Emily Troyer, and involved collaborators from the Smithsonian Institution, University of Chicago, and George Washington University in the United States, as well as University of Turin in Italy, University of Lyon in France, and CSIRO Australia.
The researchers discovered that the body sizes of these fishes have grown larger over the past hundred million years in conjunction with the gradual cooling of ocean temperatures.
Their finding adheres to two well-known rules of evolutionary trends, Cope’s rule which states that organismal body sizes tend to increase over evolutionary time, and Bergmann’s rule which states that species reach larger sizes in cooler environments and smaller sizes in warmer environments. What was less understood, however, was how these rules relate to ectotherms, organisms that can’t regulate their internal body temperatures and are dependent on their external or environmental climates.
“Cope’s and Bergmann’s rules are fairly well-supported for endotherms, or warm-blooded species, such as birds and mammals,” Troyer said. “However, among ectothermic species, especially vertebrates, these rules tend to have mixed findings.”
A challenge of studying ancient fish is that there are very few fossil records. To supplement that missing information, the researchers combined genomic data of living fish with fossil data.
“When you look across different groups in the tree of life, then you will notice that there are a limited number of groups that actually have a good fossil record, but the larger marine fish group (known as Tetraodontiformes) that includes the popular pufferfish, ocean sunfish and boxfish, is remarkable in that it has a spectacular paleontological record,” Arcila said. “So, by integrating those two fields, genomics and paleontology, then we’re actually able to bring into the picture new results that you won’t be able to obtain using just one data type.”
The genomic and fossil data were then combined with data on ocean temperatures, which demonstrated that the gradual climate cooling over the past 100 million years is associated with increased body size of tetraodontiform fishes.
“Based on fossil data, we’re showing that these fish started very small, but you can see that living species are much larger, and those changes are associated with the cooling temperature of the ocean over this very long period of time,” Arcila said.
While the evolution of tetraodontiform fishes appears to conform to Cope’s and Bergmann’s hypotheses, the authors add a caveat that many more factors could play a role in fish body size evolution.
“It’s really exciting to see support for these two biological rules in Tetraodontiformes, as these trends are less studied among marine fishes compared with terrestrial species,” Troyer said. “Undoubtedly we will discover more about their body size evolution in the future.”
#Nature
30-million-year-old amphibious beaver fossil is oldest ever found
https://sciencespies.com/nature/30-million-year-old-amphibious-beaver-fossil-is-oldest-ever-found/
A new analysis of a beaver anklebone fossil found in Montana suggests the evolution of semi-aquatic beavers may have occurred at least 7 million years earlier than previously thought, and happened in North America rather than Eurasia.
In the study, Ohio State University evolutionary biologist Jonathan Calede describes the find as the oldest known amphibious beaver in the world and the oldest amphibious rodent in North America. He named the newly discovered species Microtheriomys articulaquaticus.
Calede’s findings resulted from comparing measurements of the new species’ anklebone to about 340 other rodent specimens to categorize how it moved around in its environment — which indicated this animal was a swimmer. The Montana-based bone was determined to be 30 million years old — the oldest previously identified semi-aquatic beaver lived in France 23 million years ago.
Beavers and other rodents can tell us a lot about mammalian evolution, said Calede, an assistant professor of evolution, ecology and organismal biology at Ohio State’s Marion campus.
“Look at the diversity of life around us today, and you see gliding rodents like flying squirrels, rodents that hop like the kangaroo rat, aquatic species like muskrats, and burrowing animals like pocket gophers. There is an incredible diversity of shapes and ecologies. When that diversity arose is an important question,” Calede said. “Rodents are the most diverse group of mammals on Earth, and about 4 in 10 species of mammals are rodents. If we want to understand how we get incredible biodiversity, rodents are a great system to study.”
The research is published online today (Aug. 24, 2022) in the journal Royal Society Open Science.
The scientists, including Calede, who found the bones and teeth of the new beaver species in western Montana, knew they came from beavers right away because of their recognizable teeth. But the discovery of an anklebone, about 10 millimeters long, opened up the possibility of learning much more about the animal’s life. The astragalus bone in beavers is the equivalent to the talus in humans, located where the shin meets the top of the foot.
Calede took 15 measurements of the anklebone fossil and compared it to measurements — over 5,100 in all — of similar bones from 343 specimens of rodent species living today that burrow, glide, jump and swim as well as ancient beaver relatives.
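As a toy illustration of this kind of comparison (not the study's actual statistical method, and with invented measurements and labels), a fossil's locomotor mode can be guessed by matching its anklebone measurements to the most similar living specimen:

```python
# Nearest-neighbour sketch of locomotor-mode classification from bone
# measurements. The reference specimens below are hypothetical; the study
# used 343 specimens and 15 measurements per bone, and more involved analyses.
import math

# (length_mm, width_mm) -> locomotor mode of a living reference rodent
reference = {
    (9.5, 4.1): "swimming",
    (10.2, 4.3): "swimming",
    (8.8, 3.1): "burrowing",
    (7.5, 2.6): "jumping",
}

def classify(fossil):
    """Return the mode of the reference specimen nearest in measurement space."""
    nearest = min(reference, key=lambda ref: math.dist(ref, fossil))
    return reference[nearest]

print(classify((9.8, 4.2)))  # prints "swimming"
```

The real analysis works in a 15-dimensional measurement space and accounts for shared ancestry among species, but the underlying idea is the same: an anklebone shaped like those of living swimmers suggests a swimmer.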
Running computational analyses of the data in multiple ways, he arrived at a new hypothesis for the evolution of amphibious beavers, proposing that they started to swim as a result of exaptation — the co-opting of an existing anatomy — leading, in this case, to a new lifestyle.
“In this case, the adaptations to burrowing were co-opted to transition to a semi-aquatic locomotion,” he said. “The ancestor of all beavers that have ever existed was most likely a burrower, and the semi-aquatic behavior of modern beavers evolved from a burrowing ecology. Beavers went from digging burrows to swimming in water.
“It’s not necessarily surprising because movement through dirt or water requires similar adaptations in skeletons and muscles.”
Fossils of fish and frogs and the nature of the rocks where Microtheriomys articulaquaticus fossils were found suggested it had been an aquatic environment, providing additional evidence to support the hypothesis, Calede said.
Fossils are usually dated based on their location between layers of rocks whose age is determined by the detection of the radioactive decay of elements left behind by volcanic activity. But in this case, Calede was able to date the specimen precisely, at 29.92 million years old, because of its location within, rather than above or below, a layer of ashes.
“The oldest semi-aquatic beaver we knew of in North America before this was 17 or 18 million years old,” he said. “And the oldest aquatic beaver in the world, before this one, was from France and is about 23 million years old.
“I’m not claiming this new species is necessarily the oldest aquatic beaver ever, because there are other animals that we know, from their teeth, that are related to this species I described.”
Microtheriomys articulaquaticus did not have the flat tail that helps beavers swim today. It likely ate plants instead of wood and was comparably small — weighing less than 2 pounds. The modern adult beaver, weighing 50 pounds or so, is the second-largest living rodent after the capybara from South America.
Calede’s analysis of beaver body size over the past 34 million years suggests beaver evolution adheres to what is known as Cope’s Rule, which posits that organisms in evolving lineages increase in size over time. A giant beaver the size of a black bear lived in North America as recently as about 12,000 years ago. Like all but the two beaver species living today, Castor canadensis and Castor fiber, the giant beaver is extinct.
“It looks like when you follow Cope’s Rule, it’s not good for you — it sets you on a bad path in terms of species diversity,” Calede said. “We used to have dozens of species of beavers in the fossil record. Today we have one North American beaver and one Eurasian beaver. We’ve gone from a group that is super diverse and doing so well to one that is obviously not so diverse anymore.”
This work was funded by the American Philosophical Society Lewis and Clark Fund, Sigma-Xi, the Geological Society of America, the Evolving Earth Foundation, the Northwest Association, the Paleontological Society, the Tobacco Root Geological Society, the UWBM, the University of Washington Department of Biology and Ohio State.
#Nature
Oyster reef habitats disappear as Florida becomes more tropical
https://sciencespies.com/nature/oyster-reef-habitats-disappear-as-florida-becomes-more-tropical/
With temperatures rising globally, cold weather extremes and freezes in Florida are diminishing — an indicator that Florida’s climate is shifting from subtropical to tropical. Tropicalization has had a cascading effect on Florida ecosystems. In Tampa Bay and along the Gulf Coast, University of South Florida researchers found evidence of homogenization of estuarine ecosystems.
While conducting fieldwork in Tampa Bay, lead author Stephen Hesterberg, a recent graduate of USF’s integrative biology doctoral program, noticed mangroves were overtaking most oyster reefs — a change that threatens species dependent on oyster reef habitats. That includes the American oystercatcher, a bird that the Florida Fish and Wildlife Conservation Commission has already classified as “threatened.”
Working alongside doctoral student Kendal Jackson and Susan Bell, distinguished university professor of integrative biology, Hesterberg explored how many mangrove islands were previously oyster reefs and the cause of the habitat conversion.
The interdisciplinary USF team found the decrease in freezes allowed mangrove islands to replace the previously dominant salt marsh vegetation. For centuries in Tampa Bay, remnant shorelines and shallow coastal waters supported typical subtropical marine habitats, such as salt marshes, seagrass beds, oyster reefs and mud flats. When mangroves along the shoreline replaced the salt marsh vegetation, they abruptly took over oyster reef habitats that had existed for centuries.
“Rapid global change is now a constant, but the extent to which ecosystems will change and what exactly the future will look like in a warmer world is still unclear,” Hesterberg said. “Our research gives a glimpse of what our subtropical estuaries might look like as they become increasingly ‘tropical’ with climate change.”
The study, published in the Proceedings of the National Academy of Sciences, shows how climate-driven changes in one ecosystem can lead to shifts in another.
Using aerial images from 1938 to 2020, the team found 83% of tracked oyster reefs in Tampa Bay fully converted to mangrove islands and the rate of conversion accelerated throughout the 20th century. After 1986, Tampa Bay experienced a noticeable decrease in freezes — a factor that previously would kill mangroves naturally.
“As we change our climate, we see evidence of tropicalization — areas that once had temperate types of organisms and environments are becoming more tropical in nature,” Bell said. She said this study provides a unique opportunity to examine changes in adjacent coastal ecosystems and generate predictions of future oyster reef conversions.
While the transition to mangrove islands is well-advanced in the Tampa Bay estuary and estuaries to the south, Bell said Florida ecosystem managers in northern coastal settings will face tropicalization within decades.
“The outcome from this study poses an interesting predicament for coastal managers, as both oyster reefs and mangrove habitats are considered important foundation species in estuaries,” Bell said.
Oyster reefs improve water quality and simultaneously provide coastal protection by reducing the impact of waves. Although mangroves also provide benefits, such as habitat for birds and carbon sequestration, other ecosystem functions unique to oyster reefs will diminish or be lost altogether as reefs transition to mangrove islands. Loss of oyster reef habitats will directly threaten wild oyster fisheries and reef-dependent species.
Although tropicalization will make it increasingly difficult to maintain oyster reefs, human intervention through reef restoration or active removal of mangrove seedlings could slow or prevent homogenization of subtropical landscapes — allowing both oyster reefs and mangrove tidal wetlands to co-exist.
Hesterberg plans to continue examining the implications of such habitat transition on shellfisheries in his new role as executive director of the Gulf Shellfish Institute, a non-profit scientific research organization. He is expanding his research to investigate how to design oyster reef restoration that will prolong ecosystem lifespan or avoid mangrove conversion altogether.
Story Source:
Materials provided by University of South Florida. Note: Content may be edited for style and length.
#Nature
Archaeology and ecology combined sketch a fuller picture of past human-nature relationships
https://sciencespies.com/environment/archaeology-and-ecology-combined-sketch-a-fuller-picture-of-past-human-nature-relationships/
For decades now, archaeologists have wielded the tools of their trade to unearth clues about past peoples, while ecologists have sought to understand current ecosystems. But these well-established scientific disciplines tend to neglect the important question of how humans and nature interacted and shaped each other across different places and through time. An emerging field called archaeoecology can fill that knowledge gap and offer insights into how to solve today’s sustainability challenges, but first, it must be clearly defined. A new paper by Santa Fe Institute Complexity Fellow Stefani Crabtree and Jennifer Dunne, SFI’s Vice President for Science, lays out the first comprehensive definition of archaeoecology and calls for more research in this nascent but important field.
While an archaeology or palaeobiology study might examine a particular relationship, such as how humans in New Guinea raised cassowaries during the Late Pleistocene, archaeoecology takes a much broader view. “It’s about understanding the whole ecological context, rather than focusing on one or two species,” Dunne explains.
Crabtree hatched the idea for the paper in March 2020 after isolating in her father’s basement in Oregon as COVID spread across the U.S. She and Dunne, who had both worked on projects about the roles of humans in ancient food webs, realized that work didn’t fit readily in either archaeology or ecology. At the time, there was no notion in the scientific community of an area of research that deeply integrated those two disciplines. Crabtree, an archaeologist, and Dunne, an ecologist, saw an opportunity to define archaeoecology, including the role it can play in addressing the myriad challenges of the Anthropocene.
Archaeoecology, they explain in the paper, examines the past ~60,000 years of interplay between humans and ecosystems. It aims to show not only how humans impact nature, but also how the ecosystems they lived within shaped human culture and dynamics. To achieve this, archaeoecology weaves together data, questions, strategies, and modeling tools from archaeology, ecology, and palaeoecology.
“What it’s doing is breaking down a traditional, but unnecessary, disciplinary separation between archaeology and ecology,” Dunne says.
Crabtree hopes the paper will encourage more scientists to pursue research in the emerging field. And with humanity facing the twin crises of climate change and biodiversity loss, archaeoecology could yield crucial insights that help us navigate our present-day environmental challenges, she says. For instance, as climate change causes Utah’s Great Salt Lake to dry up, we don’t know exactly how this will impact the larger ecosystem. However, we can look to the past for warnings about what might be in store: Through an archaeoecological lens of the Aral Sea during the height of the Silk Road, we can see more clearly how the Soviet Union’s 1960s water diversion project and the subsequent desiccation of the sea impacted the surrounding ecosystems and human communities. Similarly, an archaeological lens documents the stabilizing role that Martu Aboriginal people had on Australia’s Western Desert and the massive biodiversity loss that resulted when the people were removed from the land.
“Every ecosystem on the planet is impacted by humans in one way or another,” Crabtree says. “It’s naïve to look at just the last 100 years because people have been impacting ecosystems everywhere for many thousands of years. We need to understand the past to understand our present and future. Archaeoecology helps with that. We can learn from these experiments with sustainability in the past.”
Story Source:
Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.
#Environment
Botany: From the soil to the sky
https://sciencespies.com/nature/botany-from-the-soil-to-the-sky/
Every day, about one quadrillion gallons of water are silently pumped from the ground to the treetops. Earth’s plant life accomplishes this staggering feat using only sunlight. It takes energy to lift all this liquid, but just how much was an open question until this year.
Researchers at UC Santa Barbara have calculated the tremendous amount of power used by plants to move water through their xylem from the soil to their leaves. They found that, on average, it was an additional 14% of the energy the plants harvested through photosynthesis. On a global scale, this is comparable to the production of all of humanity’s hydropower. Their study, published in the Journal of Geophysical Research: Biogeosciences, is the first to estimate how much energy goes into lifting water up to plant canopies, both for individual plants and worldwide.
“It takes power to move water up through the xylem of the tree. It takes energy. We’re quantifying how much energy that is,” said first author Gregory Quetin, a postdoctoral researcher in the Department of Geography. This energy is in addition to what a plant produces via photosynthesis. “It’s energy that’s being harvested passively from the environment, just through the tree’s structure.”
Photosynthesis requires carbon dioxide, light and water. CO2 is widely available in the air, but the other two ingredients pose a challenge: Light comes from above, and water from below. So, plants need to bring the water up (sometimes a considerable distance) to where the light is.
More complex plants accomplish this with a vascular system, in which tubes called xylem bring water from the roots to the leaves, while other tubes called phloem move sugar produced in the leaves down to the rest of the plant. “Vascular plants evolving xylem is a huge deal that allowed for trees to exist,” Quetin said.
Many animals also have a vascular system. We evolved a closed circulatory system with a heart that pumps blood through arteries, capillaries and veins to deliver oxygen and nutrients around our bodies. “This is a function that many organisms pay a lot for,” said co-author Anna Trugman, an assistant professor in the Department of Geography. “We pay for it because we have to keep our hearts beating, and that’s probably a lot of our metabolic energy.”
Plants could have evolved hearts, too. But they didn’t. And it saves them a lot of metabolic energy.
In contrast to animals, plant circulatory systems are open and powered passively. Sunlight evaporates water, which escapes from pores in the leaves. This creates a negative pressure that pulls up the water beneath it. Scientists call this process “transpiration.”
In essence, transpiration is merely another way that plants harvest energy from sunlight. It’s just that, unlike in photosynthesis, this energy doesn’t need to be processed before it can be put to use.
Scientists understand this process fairly well, but no one had ever estimated how much energy it consumes. “I’ve only seen it mentioned specifically as energy in one paper,” co-author Leander Anderegg said, “and it was to say that ‘this is a really large number. If plants had to pay for it with their metabolism, they wouldn’t work.'”
This particular study grew out of basic curiosity. “When Greg [Quetin] and I were both graduate students, we were reading a lot about plant transpiration,” recalled Anderegg, now an assistant professor in the Department of Ecology, Evolution, and Marine Biology. “At some point Greg asked, ‘How much work do plants do just lifting water against gravity?’
“I said, ‘I have no idea. I wonder if anyone knows?’ And Greg said, ‘surely we can calculate that.'”
About a decade later, they circled back and did just that. The team combined a global database of plant conductance with mathematical models of sap ascent to estimate how much power the world’s plant life devotes to pumping water. They found that the Earth’s forests consume around 9.4 petawatt-hours per year. That’s on par with global hydropower production, they quickly point out.
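As a quick unit-conversion sanity check (pure arithmetic on the figure quoted above, not new data), 9.4 petawatt-hours per year corresponds to a sustained average power of roughly one terawatt:

```python
# Convert the study's 9.4 petawatt-hours per year into an average power,
# just to put the number on a familiar scale.
hours_per_year = 365.25 * 24        # ~8766 hours
energy_pwh_per_year = 9.4           # petawatt-hours per year, from the study
energy_wh = energy_pwh_per_year * 1e15   # watt-hours per year
avg_power_w = energy_wh / hours_per_year
print(f"{avg_power_w / 1e12:.2f} TW")    # prints "1.07 TW"
```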
This is about 14.2% of the energy that plants take in through photosynthesis. So it’s a significant chunk of energy that plants benefit from but don’t have to actively process. This free energy passes to the animals and fungi that consume plants, and the animals that consume them, and so on.
Surprisingly, the researchers discovered that fighting gravity accounts for only a tiny fraction of this total. Most of the energy goes into simply overcoming the resistance of a plant’s own stem.
These findings may not have many immediate applications, but they help us better understand life on Earth. “The fact that there’s a global energy stream of this magnitude that we didn’t have quantified, is mildly jarring,” Quetin said. “It does seem like a concept that slipped through the cracks.”
The energies involved in transpiration seem to fall in between the scales that different scientists examine. It’s too big for plant physiologists to consider and too small for scientists who study Earth systems to bother with, so it was forgotten. And it’s only within the past decade that scientists have collected enough data on water use and xylem resistance to begin addressing the energy of transpiration at global scales, the authors explained.
Within that time, scientists have been able to refine the significance of transpiration in Earth systems using new observations and models. It affects temperatures, air currents and rainfall, and helps shape a region’s ecology and biodiversity. Sap ascent power is a small component of transpiration overall, but the authors suspect it may turn out to be noteworthy given the significant energy involved.
It’s still early days, and the team admits there’s a lot of work to do in tightening their estimates. Plants vary widely in how conductive their stems are to water flow. Compare a hardy desert juniper with a riverside cottonwood, for instance. “A juniper tree that is very drought adapted has a very high resistance,” Anderegg said, “while cottonwoods just live to pump water.”
This uncertainty is reflected in the authors’ estimates, which fall between 7.4 and 15.4 petawatt-hours per year. That said, it could be as high as 140 petawatt-hours per year, though Quetin admits this upper bound is unlikely. “I think this uncertainty highlights that there is still a lot we don’t know about the biogeography of plant resistance (and to a lesser extent, transpiration),” he said. “This is good motivation for continued research in these areas.”
#Nature
Researchers model benefits of riverfront forest restoration
https://sciencespies.com/nature/researchers-model-benefits-of-riverfront-forest-restoration/
Researchers model benefits of riverfront forest restoration
A new Stanford University-led study in Costa Rica reveals that restoring relatively narrow strips of riverfront forests could substantially improve regional water quality and carbon storage. The analysis, available online and set to be published in the October issue of Ecosystem Services, shows that such buffers tend to be most beneficial in steep, erosion-prone, and intensively fertilized landscapes — a finding that could inform similar efforts in other countries.
“Forests around rivers are key places to target for restoration because they provide huge benefits with very little impediment to productive land,” said study lead author Kelley Langhans, a PhD student in biology at Stanford University affiliated with the Natural Capital Project. “A small investment could have a really big impact on the health of people and ecosystems.”
Unleashing potential
Vegetated areas adjacent to rivers and streams absorb harmful pollutants in runoff, keeping them out of waterways. Creating effective policies to safeguard these riparian buffers and prioritizing where to implement them is a challenge in part because of a lack of data quantifying the impact of restoring such areas. The researchers, in partnership with officials from Costa Rica’s Ministry of Environment and Energy, Central Bank and PRIAS Laboratory, analyzed one such policy — Costa Rica’s Forest Law 7575. Passed in 1996 and unevenly enforced since then, the law mandates protection of forested riverfront strips 10 meters (about 33 feet) to 50 meters (about 164 feet) wide.
Using InVEST, the Natural Capital Project’s free, open-source software, the team compared a scenario in which the law was fully enforced with a business-as-usual scenario. They modeled the effects of reforesting 10-meter-wide strips, thereby underestimating the effects of the law’s provisions. Still, their models showed such a change would boost retention of phosphorus by nearly 86%, retention of nitrogen by more than 81%, and retention of sediment by about 4%. The expanded forest cover — an increase of about 2% nationwide — would also increase carbon sequestration by 1.4%.
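As a sketch of how such percentages fall out of a scenario comparison, the snippet below contrasts invented export numbers (the study's actual figures come from InVEST model runs, not these values):

```python
# Hypothetical nutrient/sediment exports to waterways (tonnes/yr) under a
# business-as-usual scenario vs. full enforcement of 10-meter buffers.
# Values are invented for illustration only.
business_as_usual = {"phosphorus": 1000.0, "nitrogen": 5000.0, "sediment": 2.00e6}
fully_enforced   = {"phosphorus":  140.0, "nitrogen":  950.0, "sediment": 1.92e6}

def retention_boost_pct(bau, enforced):
    """Percent of the baseline export newly retained on land under enforcement."""
    return {k: 100.0 * (bau[k] - enforced[k]) / bau[k] for k in bau}

print(retention_boost_pct(business_as_usual, fully_enforced))
# phosphorus 86%, nitrogen 81%, sediment 4% with these made-up inputs
```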
This reforestation would be most impactful in areas below steep slopes with erosion-prone land uses (such as pastures), high levels of fertilizer application (such as widely cultivated oil palm trees), and low levels of nutrient retention (such as urban areas). Such changes could have huge impacts on areas of Costa Rica where large numbers of people are directly dependent on rivers for drinking water.
“When quantifying the benefits of ecosystem restoration, it’s crucial to consider how it affects people, especially the most vulnerable populations,” said Langhans. “That is why in this research we explicitly mapped out how increases in water quality would reach those who rely on rivers the most.”
Even regions with water treatment infrastructure could benefit because such infrastructure is particularly vulnerable to hurricanes and earthquakes in Costa Rica. As recently as 2020, a tropical storm combined with a hurricane knocked out water service to 120,000 Costa Ricans for several days, forcing people to temporarily rely on other water sources, including streams. Typical water treatment methods also do not remove nitrates, which are especially susceptible to leaching into groundwater due to their high solubility. This is a particular concern in Costa Rica, which uses nitrogen-based fertilizers at one of the highest rates in the world.
Most of the land that would need to be reforested to create these buffers is farmland and pasture for cattle. Past research has shown that Costa Rican farmers value trees on their land and are generally supportive of reforestation, but feel that the upfront costs of transitioning to forest, and — on more productive lands — the opportunity costs of forgoing agricultural production, are too high. Improved financial incentives — such as expanding Costa Rica’s Payments for Ecosystem Services program — and community-based efforts could help, according to the researchers.
The study comes at a key time for Costa Rica, which is implementing a National Decarbonization Plan aimed at increasing forest cover to 60%.
“Our study provides a model for using realistic, policy-based scenarios to pinpoint areas where forest restoration could have the largest impact in terms of improving people’s health and meeting national adaptation and emissions goals,” said study coauthor Rafael Monge Vargas, director of Costa Rica’s National Geoenvironmental Information Center in the Ministry of Environment and Energy.
Story Source:
Materials provided by Stanford University. Original written by Rob Jordan. Note: Content may be edited for style and length.
#Nature
Strange hexagonal diamonds found in meteorite from another planet
https://sciencespies.com/space/strange-hexagonal-diamonds-found-in-meteorite-from-another-planet/
Strange hexagonal diamonds found in meteorite from another planet
Diamonds found in four meteorites in north-west Africa probably came from an ancient dwarf planet, and they are expected to be harder than Earth diamonds
Space 12 September 2022
By Alice Klein
Electron microscopy has revealed hexagonal diamonds (the dark area near the middle of the picture) in meteorites found in Africa
Alan Salek/RMIT
Mysterious hexagonal diamonds that don’t occur naturally on Earth have been discovered in four meteorites in north-west Africa.
“It’s really exciting because there were some people in the field who doubted whether this material even existed,” says Alan Salek at RMIT University in Melbourne, Australia, who was part of the team that found them.
Hexagonal diamonds, like regular diamonds, are made of carbon, but their atoms are arranged in a hexagonal structure rather than a cubic one.
Also known as lonsdaleite, hexagonal diamonds were first reported in meteorites in the US and India in the 1960s. However, the previously discovered crystals were so small – only nanometres in size – that it was hard to confirm whether they were truly hexagonal diamonds.
To hunt for larger crystals, Salek and his colleagues used a powerful electron microscope to peer into 18 meteorite samples. One was from Australia and the rest were from north-west Africa.
They found hexagonal diamonds in four of the African meteorites, with some crystals measuring up to a micrometre in size – about 1000 times bigger than previous discoveries. This allowed the team to confirm the unusual hexagonal structure.
“It’s an important discovery because now we have larger crystals, we can get a better idea of how they formed and maybe replicate that process in the lab,” says Salek.
Based on the chemical composition of the meteorites that brought them to Earth, the hexagonal diamonds appear to have formed inside dwarf planets, says Andy Tomkins at Monash University in Melbourne, who led the research.
The team’s analysis suggests the crystals were created by a reaction between graphite – which is made of carbon atoms layered in sheets – and a supercritical fluid of hydrogen, methane, oxygen and sulphur chemicals that probably formed when an asteroid crashed into the dwarf planet and broke it into fragments that eventually fell onto Earth.
“When the planet broke apart, it was like taking a lid off a Coke bottle – it released the pressure and that drop in pressure combined with high temperatures led to the release of this supercritical fluid,” says Tomkins.
This is similar to the process by which regular diamonds are made in labs, by heating graphite with gases like hydrogen and methane, suggesting that a few tweaks could produce lonsdaleite instead, says Salek.
Hexagonal diamonds are predicted to be about 60 per cent harder than regular diamonds based on their structure, and this extra hardness could have important industrial applications if they could be made synthetically. For example, they could potentially be used to make ultra-hard saw blades or other machine parts, says Salek.
Journal reference: Proceedings of the National Academy of Sciences, DOI: 10.1073/pnas.2208814119
#Space
Gut microbes and humans on a joint evolutionary journey
https://sciencespies.com/nature/gut-microbes-and-humans-on-a-joint-evolutionary-journey/
Gut microbes and humans on a joint evolutionary journey
The human gut microbiome is composed of thousands of different bacteria and archaea that vary widely between populations and individuals. Scientists from the Max Planck Institute for Biology in Tübingen have now discovered gut microbes that share a parallel evolutionary history with their human hosts: the microorganisms co-evolved in the human gut environment over hundreds of thousands of years. In addition, some microbes exhibit genomic and functional features making them dependent on their host. In a study now published in Science, the researchers present the results of their analysis of data from 1225 individuals across Africa, Asia and Europe.
Many microbe species in the human gut can be found across populations from all over the world. However, within a microbe species the microbe strains vary remarkably between individuals and populations. Despite their importance for human health, little was known so far about the origins of these strains. Moreover, most of these strains live almost exclusively in the human gut. This raises the question of where the microorganisms in the human gut come from.
The research team conjectured that specific species and strains have been with people as humanity diversified and spread over the globe. To test if microbes evolved and diversified simultaneously with their human hosts, researchers from the Max Planck Institute for Biology, the Institute for Tropical Medicine, and the Cluster of Excellence CMFI at the University of Tübingen systematically compared for the first time the evolutionary histories of humans and of gut microbes. The researchers created phylogenetic trees for 1225 human study participants as well as for 59 microbial species found within their guts, and used statistical tests to investigate how well these trees match.
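One common way to score how well two trees match is to correlate their pairwise-distance matrices and assess significance by permutation. The sketch below is a generic Mantel-style test on toy data, a stand-in for (not a reproduction of) the study's statistical pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def mantel_test(d_host, d_microbe, n_perm=999):
    """Correlate two pairwise-distance matrices; get a p-value by
    shuffling host labels. A generic co-diversification-style test."""
    iu = np.triu_indices_from(d_host, k=1)
    r_obs = np.corrcoef(d_host[iu], d_microbe[iu])[0, 1]
    n = d_host.shape[0]
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        r = np.corrcoef(d_host[np.ix_(perm, perm)][iu], d_microbe[iu])[0, 1]
        hits += r >= r_obs
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy data: microbe distances track host distances plus a little noise
hosts = rng.random((15, 2))
d_host = np.linalg.norm(hosts[:, None] - hosts[None, :], axis=-1)
d_microbe = d_host + rng.normal(0, 0.05, d_host.shape)
r, p = mantel_test(d_host, d_microbe)
print(f"r = {r:.2f}, p = {p:.3f}")  # strong match, small p-value
```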
Over 60 percent of the investigated species matched with the evolutionary history of their human host, meaning that these microbes co-diversified over ~100,000 years in the human gut as people fanned out of Africa across the continents. “We didn’t know that any of our gut microbes followed our evolutionary history this closely,” marvels Ruth Ley, head of the department for Microbiome Science at the Max Planck Institute for Biology, Tübingen, where the study was conducted, and deputy spokesperson of the CMFI.
Gut microbes became dependent on their hosts
“It is also remarkable that the strains that followed our history most closely are now those who rely most on the gut environment,” Ley adds. Indeed, some of the microbe strains that evolved together with humans are heavily dependent on the human gut environment: they possess smaller genomes and are more sensitive to oxygen levels and temperature — traits making it difficult to survive outside the human body. In contrast, microorganisms that showed weaker association with the human history showed more characteristics similar to free living bacteria. “Some of the gut microbes behave like they are part of the human genome,” explains Taichi Suzuki, who shares main authorship of the study with his colleague Liam Fitzstevens. Suzuki adds: “You can imagine that those microbes are on a gradient from ‘free-living’ to reliant on the human body environment. We have seen that some human gut bacteria are further along the gradient towards irreversible host dependence than previously thought.” Ley further states: “This fundamentally changes how we view the human gut microbiome.”
To obtain data from a diverse subset of the global population, the research team analyzed the gut microbes and genomes of 1225 individuals in Europe, Asia, and Africa. The stool and saliva samples were collected with the help of researchers from the Institute for Tropical Medicine at the University of Tübingen and their partners in Vietnam and Gabon. In addition, researchers around the globe supported the study by providing similar datasets from participants recruited in Cameroon, South Korea, and the UK.
The findings of the study help to further understand population-specific microbes that have long been associated with the local human population. With this knowledge, microbiome-based therapies of diseases can be adapted and refined to a population-specific treatment.
Story Source:
Materials provided by Max-Planck-Gesellschaft. Note: Content may be edited for style and length.
#Nature
Geologists mapped how metal pollutants have traveled across the city
https://sciencespies.com/environment/geologists-mapped-how-metal-pollutants-have-traveled-across-the-city/
Geologists mapped how metal pollutants have traveled across the city
Pittsburgh’s steel industry may be largely in the past, but its legacy lives on in city soils. New research led by Pitt geologists shows how historical coking and smelting dropped toxic metals in Pittsburgh’s soil, particularly in the eastern half of the city.
“I don’t think people need to be scared, but I think they need to be aware,” said Alexandra Maxim (A&S ’19G), now a PhD student at Georgia Tech, who led the research as a Pitt master’s student. “Make sure you test your soil and be thoughtful about your gardening and your children playing in certain areas.”
While the most severe levels of soil lead come from concentrated sources, those aren’t the only factors that can make dirt harmful to garden or play in, especially in a city with industrial history like Pittsburgh.
“The gut reaction when you’re thinking about urban metals is to think it’s all gasoline lead or paint lead, and as long as you take care of those, you’re in good shape,” said coauthor Daniel Bain, an associate professor in the Kenneth P. Dietrich School of Arts and Sciences. “But we don’t really have a good idea of other less common or more diffuse sources of lead.”
Understanding those other sources requires looking beyond houses and roads to areas with relatively undisturbed soil — and only recently have the tools for testing soil samples become common enough for researchers to branch out from the most concentrated and worrisome sources of pollution, Bain added.
With samples from 56 parks, cemeteries and other sites around the city collected by Carnegie Mellon University students and Jonathan Burgess from the Allegheny County Conservation District, the team was able to pinpoint some of those polluting factors. They recently published their results in the journal Environmental Research Communications.
Concentrations of soil metals were generally higher in the east end of the city, likely a result of wind patterns, and the city’s geography also plays a role, the team found. Levels were higher in the two large, flat valleys that crisscross Pittsburgh: the historical paths of the Allegheny and Monongahela rivers.
These valleys still influence local weather patterns, serving as the site of temperature inversions that trap pollution close to the ground. Along with worsening air pollution, the team theorizes, inversions may have given heavy metals from historical industrial sites a chance to settle from the air into the soil.
“All the industrial activity was along the rivers, and if you think about the smoke and wind patterns, it makes sense that they would settle in these valleys,” said Maxim.
To pin the pollution to likely sources, the team measured the ratios of different pollutants, comparing them to the outputs of different industrial processes. For Maxim, that meant not just learning statistics and mapping techniques but sorting and cross-referencing historical sources to locate past coking plants and smelters. She also had to determine what metals they released into the air.
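In outline, source attribution by ratio matching can look like the following. The signatures here are invented for illustration; the study derived its fingerprints from historical records of each industrial process:

```python
# Hypothetical metal-ratio signatures (values invented for illustration).
source_signatures = {
    "coking":      {"cd_to_pb": 0.08,  "zn_to_pb": 1.5},
    "smelting":    {"cd_to_pb": 0.02,  "zn_to_pb": 4.0},
    "leaded_fuel": {"cd_to_pb": 0.005, "zn_to_pb": 0.2},
}

def likeliest_source(sample_ratios, signatures):
    """Pick the source whose ratio signature is nearest the soil sample's."""
    def sq_dist(sig):
        return sum((sample_ratios[k] - sig[k]) ** 2 for k in sample_ratios)
    return min(signatures, key=lambda name: sq_dist(signatures[name]))

soil_sample = {"cd_to_pb": 0.07, "zn_to_pb": 1.3}
print(likeliest_source(soil_sample, source_signatures))  # -> coking
```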
Even with that painstaking work, there was a limit to what the team could piece together. “We really had to dig, and I feel like I just scratched the surface of these records,” Maxim said. “It was a really fascinating experience — I felt like I knew Pittsburgh in a very intimate way as a result of this study.”
The team searched for several pollutants, including arsenic, cadmium, zinc and copper. While lead tends to dominate conversations about soil metals, others often fly under the radar — like cadmium, which can replace the calcium in bones and increase the chances of a fracture.
“Cadmium is in coal, and it boils at about the same temperature that we would coke coal, something we have a long history of in Pittsburgh,” Bain explained. “So this is probably something we should probably be more concerned about.”
The concentrations the team discovered aren’t immediately alarming, Bain said — most fall well below the action levels that regulatory agencies use to determine whether a problem needs to be addressed. But those who garden or have young children may still want to get their soil checked. He pointed to resources offered by the Allegheny County Conservation District for those interested in learning about their soil.
Maxim offered another suggestion. She now lives in Atlanta and found her own backyard had high lead levels, a concern for her due to her 13-month-old child. She pointed to the hope offered by the burgeoning field of “phytoremediation”: using plants to lock up harmful pollutants.
“If you have high vegetation that kind of keeps the soil in place without letting it move, that helps,” she said. “Other vegetation like sunflowers can uptake metals. There are things we can do with our environment other than just lawns. We have avenues for keeping ourselves safe.”
And the team is doing further testing on the precise forms the pollutants in Pittsburgh take — whether they’re harmful to humans as is or locked in chemical compounds that keep them from making their way into the bodies of living creatures. While there’s more to learn, Bain said, it still doesn’t hurt to be safe.
“I don’t think we need to dig up the entire city and replace it with fresh soil,” said Bain. “But this sort of drives home the point that people should take advantage of the public health measures that are available.”
#Environment
The time-lapse telescope that will transform our view of the universe
https://sciencespies.com/space/the-time-lapse-telescope-that-will-transform-our-view-of-the-universe/
The time-lapse telescope that will transform our view of the universe
The Vera C. Rubin Observatory will scan the whole southern sky every three nights. From short-lived supernovae to alien megastructures, here are some of the fleeting cosmic phenomena it could capture
Space 24 August 2022
By Stuart Clark
Gilles and Cecilie
IN 1967, astronomer Jocelyn Bell Burnell was searching the night sky for quasars, super-bright sources of light in the centre of some galaxies, when she spotted something unusual. It was a pulsing radio signal from space that seemed too regular to have a natural source. With her supervisor Antony Hewish, she half-jokingly dubbed it LGM-1 – short for little green men.
After finding more of these signals, they turned out to be coming from pulsars, dense, rapidly rotating stars that send regular bursts of energy our way. No little green men, after all. But the discovery demonstrated that astronomers need an open mind.
Now, this is truer than ever. In July 2023, the Vera C. Rubin Observatory in Chile will start studying the universe. It will scan the entire southern sky in an unbelievably rapid three nights, then start over. For 330 nights a year, over 10 years, Rubin will produce the Legacy Survey of Space and Time (LSST). It will change how we see the universe, especially our view of the mysterious objects that are pulsing, blipping or otherwise changing in unexpected ways.
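Taken at face value, that cadence means each patch of the southern sky gets revisited on the order of a thousand times. A quick sanity check, assuming the quoted three-night cadence holds for every observing night:

```python
nights_per_year = 330      # quoted observing nights per year
survey_years = 10
nights_per_sky_pass = 3    # one full southern-sky scan every three nights

full_sky_passes = nights_per_year * survey_years // nights_per_sky_pass
print(full_sky_passes)  # 1100 full-sky passes over the decade-long survey
```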
Such signals are buried in a tapestry of electromagnetic waves that hurtles our way every night. Until now, we could only unpick the most obvious of threads. But armed with Rubin’s telescope and the power of artificial intelligence, we will see more detail than ever before. Some of it will help us unravel current mysteries, while other aspects will be entirely unexpected. The next time someone writes “LGM” next to a strange signal, they might not be doing it with their tongue in their cheek. …
#Space
JWST found carbon dioxide in an exoplanet atmosphere – and a mystery
https://sciencespies.com/space/jwst-found-carbon-dioxide-in-an-exoplanet-atmosphere-and-a-mystery/
JWST found carbon dioxide in an exoplanet atmosphere – and a mystery
The James Webb Space Telescope has made the first clear detection of carbon dioxide in the atmosphere of a distant world, and there is also an unexpected bump in the data
Space 25 August 2022
By Leah Crane
An artist’s impression of the exoplanet WASP-39b
NASA, ESA, CSA, and J. Olmsted (STScI)
NASA’s James Webb Space Telescope (JWST) has spotted carbon dioxide in the atmosphere of a planet 700 light years away called WASP-39b. This is the first time the compound has been found in any exoplanet, and the observations also revealed hints of a mystery within the distant world.
WASP-39b is huge. It has a mass similar to Saturn’s, and a diameter 1.3 times that of Jupiter. It orbits relatively close to its star, giving it an average temperature around 900°C – the high temperature puffs up the atmosphere, making it easier for JWST to see starlight shining through it.
When light from a star shines through a planet’s atmosphere, molecules in the atmosphere absorb some of the light in unique wavelength ranges. Carbon dioxide absorbs infrared light, and previous telescopes did not observe in the right range or with the appropriate method to pick out its signature. JWST observes in the infrared, and picked it right up.
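The measurable quantity in this technique is how much deeper the transit looks at wavelengths where an atmospheric gas is opaque. A toy transit-depth model illustrates the idea; the stellar radius and scale height below are assumed values, not WASP-39's measured parameters:

```python
R_SUN = 6.957e8   # m
R_JUP = 7.149e7   # m

# Assumed illustrative values, not WASP-39's measured parameters:
r_star = 0.9 * R_SUN      # host star radius
r_planet = 1.3 * R_JUP    # planet radius (the quoted 1.3 Jupiter diameters)
scale_height = 1.0e6      # m; large because the ~900 C atmosphere is puffed up

def transit_depth_ppm(extra_scale_heights: float) -> float:
    """Fraction of starlight blocked, in parts per million. Where an
    absorber like CO2 is opaque, the planet's effective radius grows by
    a few scale heights, deepening the transit."""
    r_eff = r_planet + extra_scale_heights * scale_height
    return (r_eff / r_star) ** 2 * 1e6

continuum = transit_depth_ppm(0.0)
in_co2_band = transit_depth_ppm(3.0)  # absorber adds a few scale heights
print(f"CO2 feature amplitude: {in_co2_band - continuum:.0f} ppm")
```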
Natalie Batalha at the University of California, Santa Cruz and a team of more than 100 researchers examined JWST data, running it through four separate algorithms to make sure that no matter how the data was processed, the results were the same. All four showed the clear signature of carbon dioxide. “The carbon dioxide signature was just screaming at us,” says Batalha. “Processing the data was not hard – it was easy, it was straightforward, it was honestly beautiful.”
The result has a statistical significance of 26 sigma, meaning that the likelihood of finding such a signature as a statistical fluke is less than one in 10¹⁴⁹. “It’s just exquisite,” says Eliza Kempton at the University of Maryland, part of the research team. “I’ve never seen anything like 26 sigma in this field.”
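The arithmetic behind a figure like that is short: under a standard normal assumption, a sigma level converts to a two-sided tail probability via the complementary error function.

```python
import math

def two_sided_p(sigma: float) -> float:
    """Two-sided tail probability of a normal deviate |Z| >= sigma."""
    return math.erfc(sigma / math.sqrt(2.0))

p = two_sided_p(26.0)
# Vanishingly small: the reciprocal lands in the 10^148 to 10^149 range,
# matching the order of magnitude quoted above.
print(f"p = {p:.1e}")
```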
The researchers found that WASP-39b has more carbon and oxygen than its host star, implying that it did not form when gas around the star collapsed all at once, but rather its rocky core formed first and then accreted the gas that makes up its atmosphere. This is similar to how we think the planets in our own solar system formed, and studying the exoplanet’s atmosphere in more detail could reveal more details as to how and where it formed.
Aside from carbon dioxide, the researchers found another bump in their data, indicating that something unexpected in WASP-39b’s atmosphere was absorbing some of the starlight. “There’s something else there, some other molecule or some kind of cloud or haze – something that’s not predicted by the basic model,” says Kempton. The researchers aren’t sure yet what this mystery molecule may be, but they are working to figure it out with additional data from JWST and different models.
The fact that we were able to see carbon dioxide in this gas giant’s atmosphere is a good sign for our ability to eventually understand the atmospheres of rocky worlds similar to Earth, one of the main goals of JWST, says Batalha. It may also be useful in the hunt for alien life. “Down the road, it may be an interesting biosignature when found in combination with other molecules like methane,” says Jessie Christiansen at the NASA Exoplanet Science Institute in California.
“This planet is not a hospitable place – it’s like what you would get if you took Jupiter but moved it really close to the sun and baked it,” says Kempton. “It’s not a place you would ever want to visit, but this is the first step towards characterising the atmospheres of habitable planets.” And characterising those atmospheres is perhaps our best bet at finding signs of extraterrestrial life.
Reference: arxiv.org/abs/2208.11692
#Space
Global analysis identifies at-risk forests
https://sciencespies.com/nature/global-analysis-identifies-at-risk-forests/
Global analysis identifies at-risk forests
Forests are engaged in a delicate, deadly dance with climate change, sucking carbon dioxide out of the air with billions of leafy straws and hosting abundant biodiversity, as long as climate change, with its droughts, wildfires and ecosystem shifts, doesn’t kill them first.
In a study published in Science, William Anderegg, inaugural director of the University of Utah’s Wilkes Center for Climate Science and Policy, and colleagues quantify the risk to forests from climate change along three dimensions: carbon storage, biodiversity and forest loss from disturbance, such as fire or drought. The results show forests in some regions experiencing clear and consistent risks. In other regions, the risk profile is less clear, because different approaches that account for disparate aspects of climate risk yield diverging answers.
“Large uncertainty in most regions highlights that there’s a lot more scientific study that’s urgently needed,” Anderegg says.
An international team
Anderegg assembled a team including researchers from the United Kingdom, Germany, Portugal and Sweden.
“I had met some of these folks before,” he says, “and had read many of their papers. In undertaking a large, synthetic analysis like this, I contacted them to ask if they wanted to be involved in a global analysis and provide their expertise and data.”
Their task was formidable: assess climate risks to the world’s forests, which span continents and climes and host tremendous biodiversity while storing an immense amount of carbon. Researchers had previously attempted to quantify risks to forests using vegetation models, relationships between climate and forest attributes and climate effects on forest loss.
“These approaches have different inherent strengths and weaknesses,” the team writes, “but a synthesis of approaches at a global scale is lacking.” Each of the previous approaches investigated one dimension of climate risk: carbon storage, biodiversity, and risk of forest loss. For their new analysis, the team went after all three.
Three dimensions of risk
“These dimensions of risk are all important and, in many cases, complementary. They capture different aspects of forests’ resilience or vulnerability,” Anderegg says.
Carbon storage: Forests absorb about a quarter of the carbon dioxide that’s emitted into the atmosphere, so they play a critically important role in buffering the planet from the effects of rising atmospheric carbon dioxide. The team leveraged output from dozens of different climate models and vegetation models simulating how different plant and tree types respond to different climates. They then compared the recent past climate (1995-2014) with the end of the 21st century (2081-2100) in scenarios of both high and low carbon emissions.
On average, the models showed global gains in carbon storage by the end of the century, although with large disagreements and uncertainty across the different climate-vegetation models. But zooming in to regional forests and taking into account models that forecast carbon loss and changes in vegetation, the researchers found higher risk of carbon loss in southern boreal (just south of the Arctic) forests and the drier regions of the Amazon and African tropics.
Biodiversity: Unsurprisingly, the researchers found that the highest risk of ecosystems shifting from one “life zone” to another due to climate change could be found at the current boundaries of biomes — at the current transition between temperate and boreal forests, for example. The models the researchers worked from described changes in ecosystems as a whole and not species individually, but the results suggested that forests of the boreal regions and western North America faced the greatest risk of biodiversity loss.
Disturbance: Finally, the authors looked at the risk of “stand-replacing disturbances,” or events like drought, fire or insect damage that could wipe out swaths of forest. Using satellite data and observations of stand-replacing disturbances between 2002 and 2014, the researchers then forecast into the future using projected future temperatures and precipitation to see how much more frequent these events might become. The boreal forests, again, face high risk under these conditions, as well as the tropics.
“Forests store an immense amount of carbon and slow the pace of climate change,” Anderegg says. “They harbor the vast majority of Earth’s biodiversity. And they can be quite vulnerable to disturbances like severe fire or drought. Thus, it’s important to consider each of these aspects and dimensions when thinking about the future of Earth’s forests in a rapidly changing climate.”
Future needs
Anderegg was surprised that the spatial patterns of high risk didn’t overlap more across the different dimensions. “They capture different aspects of forests’ responses,” he says, “so they wouldn’t likely be identical, but I did expect some similar patterns and correlations.”
Models can only be as good as the scientific understanding and data on which they’re built, and this study, the researchers write, exposes significant gaps in both that may contribute to the inconsistent results. Global models of biodiversity, for example, don’t incorporate dynamics of growth and mortality, or include the effects of rising CO2 directly on species. And models of forest disturbance don’t include regrowth or species turnover.
“If forests are tapped to play an important role in climate mitigation,” the authors write, “an enormous scientific effort is needed to better shed light on when and where forests will be resilient to climate change in the 21st century.”
Key next steps, Anderegg says, are improving models of forest disturbance, studying the resilience of forests after disturbance, and improving large-scale ecosystem models.
The recently launched Wilkes Center for Climate Science and Policy at the University of Utah aims to provide cutting-edge science and tools for decision-makers in the US and across the globe. For this study, the authors built a visualization tool of the results for stakeholders and decision-makers.
Despite uncertainty in the results, forests in western North America consistently show high risk. Preserving these forests, Anderegg says, requires action.
“First we have to realize that the quicker we tackle climate change, the lower the risks in the West will be,” Anderegg says. “Second, we can start to plan for increasing risk and manage forests to reduce risk, like fires.”
Visualization tool: https://wilkescenter.utah.edu/tools/globalforestclimaterisk/
#Nature
Variation matters: Genetic effects in interacting species jointly determine ecological outcomes
https://sciencespies.com/nature/variation-matters-genetic-effects-in-interacting-species-jointly-determine-ecological-outcomes/
The greatest diversity of life is not counted in the number of species, says Utah State University evolutionary geneticist Zachariah Gompert, but in the diversity of interactions among them.
“It’s often unclear if the outcome of an interaction, such as whether a microbe can infect a host, is the same for all members of a species or depends on the genetic makeup of the specific individuals involved,” says Gompert, associate professor in USU’s Department of Biology and Ecology Center.
For example, he says, one might ponder why a particular butterfly either can or can’t feed on a particular plant.
“Is that affected by the specific genetic makeup of the butterfly or is it the specific genetic makeup of the individual plant?” Gompert asks. “Or is it affected by genetic interactions between the butterfly and plant species?”
Gompert and colleagues from University of Nevada, Rice University, University of Wyoming, University of Tennessee, Texas State University and Michigan State University address this knowledge gap through a series of experiments using a recent host-range expansion of alfalfa by the Melissa blue butterfly (Lycaeides melissa). The team reports its findings in the Aug. 29, 2022 issue of Proceedings of the National Academy of Sciences. The research was supported by the National Science Foundation.
“We show that genetic differences among Melissa blue caterpillars and alfalfa plants account for nearly half of the variability in caterpillar growth and survival,” says Gompert, a 2019 NSF CAREER Award recipient. “Our results suggest individual variation matters, and the outcome of this plant-insect interaction is affected by many genes with mostly independent — or additive — effects. Moreover, genetic differences among alfalfa plants have consistent effects on caterpillar growth in multiple butterfly populations and species, making such effects predictable.”
Collecting extensive data over several years at field plots in Utah and Nevada, the team’s results support the hypothesis that both plant and insect genotypes matter, and about equally so for caterpillar growth and survival.
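The additive model the researchers describe, in which plant and insect genotypes contribute roughly equally and independently, can be illustrated with a toy simulation: caterpillar growth as the sum of two independent genetic effects plus environmental noise. Effect sizes here are invented so that genetics explains about half the variance, echoing the reported result; nothing below comes from the actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy additive model: growth = insect genetic effect + plant genetic effect
# + non-genetic noise. Variances chosen so genetics explains ~half of the
# total variation (illustrative, not fitted to the study's data).
n_pairs = 10_000
caterpillar_effect = rng.normal(0.0, 1.0, n_pairs)    # butterfly genotype
plant_effect = rng.normal(0.0, 1.0, n_pairs)          # alfalfa genotype
noise = rng.normal(0.0, np.sqrt(2.0), n_pairs)        # environment etc.

growth = caterpillar_effect + plant_effect + noise

# Under additivity (no genotype-by-genotype interaction), the genetic
# variance is simply the sum of the two components' variances.
genetic_var = caterpillar_effect.var() + plant_effect.var()
frac_genetic = genetic_var / growth.var()
print(f"Fraction of growth variance explained by genetics: {frac_genetic:.2f}")
```

The key design choice is that the two effects enter as a plain sum: if genotype-by-genotype interactions mattered strongly, a cross term would be needed and the plant effects would no longer be predictable across butterfly populations.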
Beyond issues specific to insects and their host plants, genetic variation within species could also be important for other host-parasite interactions, Gompert says. “Including, for example, susceptibility to parasitic diseases in humans and other animals being a function of both genetic variation in hosts and among pathogen strains. But the generality of this hypothesis remains to be tested.”
Story Source:
Materials provided by Utah State University. Original written by Mary-Ann Muffoletto. Note: Content may be edited for style and length.
#Nature
Korean nuclear fusion reactor achieves 100 million°C for 30 seconds
https://sciencespies.com/physics/korean-nuclear-fusion-reactor-achieves-100-millionc-for-30-seconds/
A sustained, stable experiment is the latest demonstration that nuclear fusion is moving from being a physics problem to an engineering one
Physics 7 September 2022
By Matthew Sparkes
The Korea Superconducting Tokamak Advanced Research experiment
Korea Institute of Fusion Energy
A nuclear fusion reaction has lasted for 30 seconds at temperatures in excess of 100 million°C. While the duration and temperature alone aren’t records, the simultaneous achievement of heat and stability brings us a step closer to a viable fusion reactor – as long as the technique used can be scaled up.
Most scientists agree that viable fusion power is still decades away, but the incremental advances in understanding and results keep coming. An experiment conducted in 2021 created a reaction energetic enough to be self-sustaining, conceptual designs for a commercial reactor are being drawn up, and work continues on the large ITER experimental fusion reactor in France.
Now Yong-Su Na at Seoul National University in South Korea and his colleagues have succeeded in running a reaction at the extremely high temperatures that will be required for a viable reactor, and keeping the hot, ionised state of matter that is created within the device stable for 30 seconds.
Controlling this so-called plasma is vital. If it touches the walls of the reactor, it rapidly cools, stifling the reaction and causing significant damage to the chamber that holds it. Researchers normally use various shapes of magnetic fields to contain the plasma – some use an edge transport barrier (ETB), which sculpts plasma with a sharp cut-off in pressure near the reactor wall, a state that stops heat and plasma escaping. Others use an internal transport barrier (ITB) that creates higher pressure nearer the centre of the plasma. But both can create instability.
Na’s team used a modified ITB technique at the Korea Superconducting Tokamak Advanced Research (KSTAR) device, achieving a much lower plasma density. Their approach seems to boost temperatures at the core of the plasma and lower them at the edge, which will probably extend the lifespan of reactor components.
Dominic Power at Imperial College London says that to increase the energy produced by a reactor, you can make plasma really hot, make it really dense or increase confinement time.
“This team is finding that the density confinement is actually a bit lower than traditional operating modes, which is not necessarily a bad thing, because it’s compensated for by higher temperatures in the core,” he says. “It’s definitely exciting, but there’s a big uncertainty about how well our understanding of the physics scales to larger devices. So something like ITER is going to be much bigger than KSTAR”.
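Power's point can be made concrete with the standard fusion figure of merit, the "triple product" of density, temperature and confinement time: a shortfall in one factor can be compensated by a gain in another. The numbers below are placeholders, not KSTAR parameters.

```python
# Sketch of the fusion "triple product" n*T*tau: density x temperature x
# confinement time. Values are in arbitrary normalised units, purely to
# illustrate the trade-off Power describes.
def triple_product(density, temperature, confinement_time):
    return density * temperature * confinement_time

# A conventional higher-density mode vs a lower-density mode with a
# hotter core, like the one reported here:
conventional = triple_product(density=1.0, temperature=1.0, confinement_time=1.0)
fire_like = triple_product(density=0.5, temperature=2.0, confinement_time=1.0)

# Halving density while doubling core temperature leaves the product
# unchanged, so lower density is "not necessarily a bad thing".
print(conventional, fire_like)
```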
Na says that low density was key, and that “fast” or more energetic ions at the core of the plasma – so-called fast-ion-regulated enhancement (FIRE) – are integral to stability. But the team doesn’t yet fully understand the mechanisms involved.
The reaction was stopped after 30 seconds only because of limitations with hardware, and longer periods should be possible in future. KSTAR has now shut down for upgrades, with carbon components on the wall of the reactor being replaced with tungsten, which Na says will improve the reproducibility of experiments.
Lee Margetts at the University of Manchester, UK, says that the physics of fusion reactors is becoming well understood, but that there are technical hurdles to overcome before a working power plant can be built. Part of that will be developing methods to withdraw heat from the reactor and use it to generate electrical current.
“It’s not physics, it’s engineering,” he says. “If you just think about this from the point of view of a gas-fired or a coal-fired power station, if you didn’t have anything to take the heat away, then the people operating it would say ‘we have to switch it off because it gets too hot and it will melt the power station’, and that’s exactly the situation here.”
Brian Appelbe at Imperial College London agrees that the scientific challenges left in fusion research should be achievable, and that FIRE is a step forwards, but that commercialisation will be difficult.
“The magnetic confinement fusion approach has got a pretty long history of evolving to solve the next problem that it comes up against,” he says. “But the thing that makes me kind of nervous, or uncertain, is the engineering challenges of actually building an economical power plant based on this.”
Journal reference: Nature, DOI: 10.1038/s41586-022-05008-1
More on these topics:
#Physics