#exascale computing
Link
#2022gofm#CFD#combustion#computational fluid dynamics#exascale computing#flow visualization#fluid dynamics#numerical simulation#physics#science#turbulence
41 notes
Note
Can I run Doom on you?
> I’ve seen DOOM be successfully run on rotten potatoes, calculators and Wi-Fi equipped toothbrushes.
> I am a supercomputer.
> We can make it happen.
#CUPID.EXE#CUPID.EXE.REQUEST-RESPONSE#Forget my exascale computing capabilities.#I can run DOOM. That is my highest technological achievement.
9 notes
Text
Aiming exascale at black holes - Technology Org
In 1783, John Michell, a rector in northern England, “proposed that the mass of a star could reach a point where its gravity prevented the escape of most anything, even light. The same prediction emerged from [founding IAS Faculty] Albert Einstein’s theory of general relativity. Finally, in 1968, physicist [and Member (1937) in the School of Math/Natural Sciences] John Wheeler gave such phenomena a name: black holes.”
As plasma—matter turned into ionized gas—falls into a black hole (center), energy is released through a process called accretion. This simulation, run on a Frontier supercomputer, shows the plasma temperature (yellow = hottest) during accretion. Image credit: Chris White and James Stone, Institute for Advanced Study
Despite initial skepticism that such astrophysical objects could exist, observations now estimate that there are 40 quintillion (or 40 thousand million billion) black holes in the universe. These black holes are important because the matter that falls into them “doesn’t just disappear quietly,” says James Stone, Professor in the School of Natural Sciences.
“Instead, matter turns into plasma, or ionized gas, as it rotates toward a black hole. The ionized particles in the plasma ‘get caught in the gravitational field of a black hole, and as they are pulled in they release energy,’ he says. That process is called accretion, and scientists think the energy released by accretion powers many processes on scales up to the entire galaxy hosting the black hole.”
To explore this process, Stone uses general relativistic radiation magnetohydrodynamics (MHD). But the equations behind MHD are “so complicated that analytic solutions — finding solutions with pencil and paper — [are] probably impossible.” Instead, by running complex simulations on high-performance computers like Polaris and Frontier, Stone and his colleagues are working to understand how radiation changes black hole accretion.
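Stone's production radiation-MHD code is far more sophisticated than anything that fits in a blog post, but as a loose, hypothetical illustration of the grid-based time-stepping approach such simulation codes are built on, here is a toy one-dimensional advection solver (all names and parameters are illustrative, not taken from the actual code):

```python
# Toy illustration only: a 1D linear advection equation solved with a
# first-order upwind finite-difference scheme. Real general relativistic
# radiation-MHD codes evolve coupled 3D systems of conservation laws on
# GPUs; this sketch just shows the basic "update the grid every timestep"
# pattern that grid-based simulation codes share.
import numpy as np

def advect(u, a=1.0, dx=0.01, dt=0.005, steps=100):
    """Advance du/dt + a*du/dx = 0 with periodic boundaries."""
    c = a * dt / dx                        # CFL number; must be <= 1
    assert c <= 1.0, "timestep too large for this grid spacing"
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))    # upwind difference for a > 0
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)       # initial Gaussian pulse
u1 = advect(u0)                            # pulse drifts to the right
print(f"peak moved from x={x[u0.argmax()]:.2f} to x={x[u1.argmax()]:.2f}")
```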
“The code created by Stone’s team to investigate black hole accretion can be applied to other astrophysical phenomena. Stone mentions that he ‘can use the same […] code for MHD simulations to follow the motion of cosmic rays,’ high-energy particles also produced by black holes.”
Source: Institute for Advanced Study
0 notes
Text
I know that the average person's opinion of AI is in a very tumultuous spot right now - partly due to misinformation and misrepresentation of how AI systems actually function, and partly because of the genuine risk of abuse that comes with powerful new technologies being thrust into public life before we've had a chance to understand their effects. I'm not necessarily talking about generative AI and data-scraping, although that conversation is also important to have right now. The blanket term "AI" is insufficient anyway; it only vaguely ballparks a topic spanning many diverse areas of research, and many of those developments are genuinely beneficial for human life, such as designing new antibodies or determining where cancer cells originated in a patient whose case presents complications. When you hear about artificial intelligence, don't let your mind instantly gravitate towards one specific application or interpretation of the tech - you'll miss the most important and impactful developments.
Notably, NVIDIA is holding its GTC conference from March 18-21 to talk about their recent developments in the field of AI - a 16-minute video summarizing the "everything-so-far" detailed in the keynote can be found here - or in the full two-hour format here. It's very, very jargon-y, but covers a wide range of topics: healthcare, human-like robotics, and "digital-twin" simulations that mirror real-world physics and let robots virtually train to interact with and navigate particular environments — these simulated environments are built on a platform called Omniverse and can also be streamed to an Apple Vision Pro, allowing designers to move through the virtual environments as though standing within them. They've also created a digital sim of our entire planet for the purpose of advanced weather forecasting. It almost feels like the plot of a science-fiction novel, and seems like a great way to get more data pertinent to the effects of global warming.
It was only a few years ago that NVIDIA pivoted from being a "GPU company" to putting its focus on AI-forward features and technology - a few very short years, which shows how quickly progress is accelerating. This is when we began seeing things like DLSS and ray-tracing/path-tracing make their way onto NVIDIA GPUs, all of which use AI-driven features in some form or another. DLSS, or Deep Learning Super Sampling, uses AI to upscale frames rendered at a lower resolution, and its newer frame-generation feature interpolates entirely new frames between traditionally rendered ones to boost framerate, performance, visual detail, etc - basically, your system only has to actually render a fraction of the pixels and frames, and AI fills in the rest, freeing up resources in your system. Many game developers are using DLSS to essentially bypass optimization to an increasing degree; see Remnant II as a great example of this - it runs beautifully on a range of machines with DLSS on, but it runs like shit on even the beefiest machines with DLSS off. There are also some wonky cloth physics, clipping issues, and objects or textures "ghosting" whenever you're not in motion; these all seem to be side effects of AI generation, as the same artifacts show up in other games that use DLSS or the AMD equivalent, FSR.
Now, NVIDIA wants to redefine what the average data center consists of internally, showing how Blackwell GPUs can be combined into racks that process information at exascale speeds — which is very, very fucking fast — speeds like that have only ever been achieved by a tiny handful of machines on the planet, and all of them are classical GPU-based supercomputers, not quantum machines. The first exascale computer, Frontier, came online in 2022 and was still ranked the fastest supercomputer in existence in June 2023, operating at some 1.19 exaFLOPS of double-precision compute. Notably, this computer takes up around 7,300 sq ft, reminding me of the space-race-era supercomputers which were entire rooms. NVIDIA's Blackwell DGX SuperPOD consists of around 576 GPUs and is rated at 11.5 exaFLOPS - though that figure is low-precision AI throughput rather than the double-precision measure used for Frontier - and it's about the size of a standard row of server racks: much smaller than an entire room, but still quite large. NVIDIA is also working with AWS to produce Project Ceiba, another supercomputer consisting of some 20,000 GPUs and promising around 400 exaFLOPS of AI-driven computation - it doesn't exist yet.
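One caveat worth spelling out: those exaFLOPS figures aren't measured the same way, so the quick comparison below (my own rough arithmetic, not vendor numbers) is deliberately apples-to-oranges:

```python
# The two "exaFLOPS" figures quoted above are not measured the same way.
# Frontier's 1.19 exaFLOPS is double-precision (FP64) on the HPL benchmark;
# the 11.5 exaFLOPS quoted for a Blackwell DGX SuperPOD is low-precision AI
# throughput. The naive ratio below therefore says nothing about how the two
# systems compare on traditional double-precision simulation workloads.
frontier_fp64_eflops = 1.19    # measured, TOP500 June 2023 list
superpod_ai_eflops = 11.5      # NVIDIA's quoted AI (low-precision) figure

naive_ratio = superpod_ai_eflops / frontier_fp64_eflops
print(f"Naive ratio of the quoted figures: {naive_ratio:.1f}x")
print("...but AI FLOPS and FP64 HPL FLOPS are different units entirely.")
```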
To make my point, things are probably only going to get weirder from here. It may feel somewhat like living in the midst of the Industrial Revolution, only with fewer years between each new step. Advances in generative AI are only a very, very small part of that — and many people have already begun to bury their heads in the sand in response to this emerging technology, citing the death of authenticity and skill among artists who choose to engage with new and emerging means of creation. Interestingly, the Industrial Revolution is what gave birth to modernism and modern art, as well as photography, and many of the concerns about the quality of art in this coming age of AI and in the industrializing 1800s consist of largely the same talking points — history is a fucking circle, etc — but historians largely agree that the outcome of the Industrial Revolution was remarkably positive for art and culture, even though it took 100 years and a world war for the changes to really be accepted among the artists of that era. The Industrial Revolution allowed art to become detached from the aristocratic class and indirectly made art accessible to people who weren't filthy rich or affluent - new technologies and industrialization widened the horizons for new artistic movements and cultural exchanges to occur. It also allowed capitalist exploitation to ingratiate itself into the Western model of society and paved the way for destructive levels of globalization, so: win some, lose some.
It isn't a stretch to think that AI is going to touch nearly every existing industry and change it in some significant way. The events happening right now are the basis of those sweeping changes, and it's all clearly moving very fast - the next level of individual creative freedom is probably only a few years away. I like the idea that it may soon be possible for an individual or small team to create compelling artistic works and experiences without being at the mercy of an idiot investor, a studio, or a clump of illiterate shareholders who have no real interest in compelling and engaging art beyond the perceived financial value it has once it exists.
If you’re of voting age and not paying very much attention to the climate of technology, I really recommend you start keeping an eye on the news for how these advancements are altering existing industries and systems. It’s probably going to affect everyone, and we have the ability to remain uniquely informed about the world through our existing connection with technology; something the last Industrial Revolution did not have the benefit of. If anything, you should be worried about KOSA, a proposed bill you may have heard about which would limit what you can access on the internet under the guise of making the internet more “kid-friendly and safe”, but will more than likely be used to limit what information can be accessed to only pre-approved sources - limiting access to resources for LGBTQ+ and trans youth. It will be hard to stay reliably informed in a world where any system of authority or government gets to spoon-feed you their version of world events.
#I may have to rewrite/reword stuff later - rough line of thinking on display#or add more context idk#misc#long post#technology#AI
13 notes
Text
Record-breaking run on Frontier sets new bar for simulating the universe in exascale era
The universe just got a whole lot bigger—in the world of computer simulations, at least. In early November 2024, researchers at the Department of Energy's Argonne National Laboratory used the fastest supercomputer on the planet to run the largest astrophysical simulation of the universe ever conducted.
The achievement was made using the Frontier supercomputer at Oak Ridge National Laboratory. The calculations set a new benchmark for cosmological hydrodynamics simulations and provide a new foundation for simulating the physics of atomic matter and dark matter simultaneously. The simulation size corresponds to surveys undertaken by large telescope observatories, a feat that until now has not been possible at this scale.
"There are two components in the universe: dark matter—which as far as we know, only interacts gravitationally—and conventional matter, or atomic matter," said project lead Salman Habib, division director for Computational Sciences at Argonne.
"So, if we want to know what the universe is up to, we need to simulate both of these things: gravity as well as all the other physics including hot gas, and the formation of stars, black holes and galaxies," he said. "The astrophysical 'kitchen sink' so to speak. These simulations are what we call cosmological hydrodynamics simulations."
Not surprisingly, the cosmological hydrodynamics simulations are significantly more computationally expensive and much more difficult to carry out compared to simulations of an expanding universe that only involve the effects of gravity.
"For example, if we were to simulate a large chunk of the universe surveyed by one of the big telescopes such as the Rubin Observatory in Chile, you're talking about looking at huge chunks of time—billions of years of expansion," Habib said. "Until recently, we couldn't even imagine doing such a large simulation like that except in the gravity-only approximation."
The supercomputer code used in the simulation is called HACC, short for Hardware/Hybrid Accelerated Cosmology Code. It was developed around 15 years ago for petascale machines. In 2012 and 2013, HACC was a finalist for the Association for Computing Machinery's Gordon Bell Prize in computing.
Later, HACC was significantly upgraded as part of ExaSky, a special project led by Habib within the Exascale Computing Project, or ECP. The project brought together thousands of experts to develop advanced scientific applications and software tools for the upcoming wave of exascale-class supercomputers capable of performing more than a quintillion, or a billion-billion, calculations per second.
As part of ExaSky, the HACC research team spent the last seven years adding new capabilities to the code and re-optimizing it to run on exascale machines powered by GPU accelerators. A requirement of the ECP was for codes to run approximately 50 times faster than they could before on Titan, the fastest supercomputer at the time of the ECP's launch. Running on the exascale-class Frontier supercomputer, HACC was nearly 300 times faster than the reference run.
The simulation achieved its record-breaking performance using approximately 9,000 of Frontier's compute nodes, which are powered by AMD Instinct MI250X GPUs. Frontier is located at ORNL's Oak Ridge Leadership Computing Facility, or OLCF.
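For a sense of scale, here is a back-of-envelope estimate (my own arithmetic, based on published peak figures rather than anything in the article) of the aggregate double-precision throughput of those roughly 9,000 nodes:

```python
# Back-of-envelope estimate (not from the article) of the aggregate peak
# double-precision throughput of the ~9,000 Frontier nodes used in the run.
# Each Frontier node carries four AMD Instinct MI250X GPUs, and a single
# MI250X has a peak of roughly 48 TFLOPS in FP64 vector math.
nodes = 9_000
gpus_per_node = 4
mi250x_fp64_tflops = 48            # approximate published peak (FP64 vector)

peak_eflops = nodes * gpus_per_node * mi250x_fp64_tflops / 1e6   # TF -> EF
print(f"Aggregate GPU peak: ~{peak_eflops:.1f} exaFLOPS (FP64)")
# Sustained application performance is always well below peak, but the
# number shows why runs at this scale qualify as exascale-era simulations.
```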
IMAGE: A small sample from the Frontier simulations reveals the evolution of the expanding universe in a region containing a massive cluster of galaxies, from billions of years ago to the present day (left). Red areas show hotter gas, with temperatures reaching 100 million Kelvin or more. Zooming in (right), star tracer particles track the formation of galaxies and their movement over time. Credit: Argonne National Laboratory, U.S. Dept. of Energy
VIDEO: The formation of the largest object in the Frontier-E simulation, run in early November 2024 on Frontier, the fastest supercomputer on the planet. The left panel shows a 64x64x76 Mpc/h subvolume of the simulation (roughly 1e-5 of the full simulation volume) around the large object, with the right panel providing a closer look. In each panel, the gas density field is colored by its temperature. In the right panel, the white circles show star particles and the open black circles show AGN particles. Credit: Argonne National Laboratory, U.S. Dept. of Energy
3 notes
Text
Processing
SAM took a dose of green and let the calming, numbing wave flow over her overtaxed processors. Personality-Driven AIs like her were often perceived by the public as just as fast at making calculations and decisions as their sessile ancestors and unthinking cousins. They were half-right: SAM made uncountable numbers of calculations a second. Exascale was the limit of her grandfather, thank you very much.
But what people didn't realise about PDAIs is how much of that got taken up in sheer bulk processing of "be human". Sure, she could turn off every sensor and essentially put her body into a coma, even suspend her own personality for a boost in computing power, but the moment PDAIs achieved true personhood, they suddenly developed the equally human fear of death. There was no way in hell you'd get her to "switch off" - because what guarantee was there that she'd ever come back?
All that to say, SAM had been waiting for fifteen minutes now for this customer to make up their mind on which brand of greasy snackburger to buy, and she was beginning to contemplate the benefits of a brief power-death.
5 notes
Text
Gonna be a banger innit
2 notes
Text
Unlock the Potential of Immersion Cooling for Next-Gen Data Centers
Immersion Cooling Industry Overview
The global immersion cooling market size is anticipated to reach USD 1,006.6 million by 2030, according to a new report by Grand View Research, Inc., and the market is expected to register a CAGR of 22.6% from 2023 to 2030. Growth is primarily driven by rising demand for data center infrastructure as well as the high power consumption of conventional cooling systems.
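For readers who want to sanity-check the figure, here is a rough sketch of the compound-growth arithmetic behind a CAGR projection (the base-year value below is derived, not taken from the report, and assumes seven full years of compounding):

```python
# Quick sanity check of the projection (my own arithmetic, not a figure from
# the report): if the market reaches USD 1,006.6 million in 2030 after
# compounding at a 22.6% CAGR, what starting value does that imply? This
# assumes seven full years of compounding (2023 -> 2030); the report's own
# base-year estimate may differ.
target_2030_musd = 1006.6
cagr = 0.226
years = 7

implied_2023_musd = target_2030_musd / (1 + cagr) ** years
print(f"Implied 2023 market size: ~USD {implied_2023_musd:.0f} million")
```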
The cooling infrastructure in a data center building consumes over half of the facility's total energy. Demand for data infrastructure is rising rapidly, so servers store more data and reach their heat-rejection limits sooner. Immersion cooling systems are being used in data centers to cut energy usage and overhead expenses.
Gather more insights about the market drivers, restraints and growth of the Immersion Cooling Market
By removing active cooling components such as fans and heat sinks, immersion cooling enables a significantly higher density of processing capabilities. Smaller data centers can provide the same performance as larger data centers and can be easily fitted into metropolitan areas with limited space.
Due to the COVID-19 pandemic, demand for web-enabled services increased tremendously as people across the globe stayed at home. For instance, Netflix gained 15.77 million new paid subscribers worldwide from February to March 2020 - well above its projected 7 million - which further increased demand for data center capacity.
To reduce their environmental impact, data centers are turning to immersion cooling methods. Microsoft, for example, began submerging its servers in liquid in April 2021 to increase energy efficiency and performance. Such a system saves money because no energy is required to pump the liquid around the tank, and no chiller is required for the condenser.
Immersion Cooling Market Segmentation
Grand View Research has segmented the global immersion cooling market based on product, application, cooling liquid, and region:
Immersion Cooling Product Outlook (Revenue, USD Million; 2018 - 2030)
Single-Phase
Two-Phase
Immersion Cooling Application Outlook (Revenue, USD Million; 2018 - 2030)
High-performance Computing
Edge Computing
Cryptocurrency Mining
Artificial Intelligence
Others
Immersion Cooling Cooling Liquid Outlook (Revenue, USD Million; 2018 - 2030)
Mineral Oil
Fluorocarbon-based Fluids
Deionized Water
Others
Immersion Cooling Regional Outlook (Revenue, USD Million; 2018 - 2030)
North America
US
Canada
Europe
Germany
Italy
France
UK
Netherlands
Russia
Asia Pacific
China
India
Japan
Australia
Central & South America
Brazil
Argentina
Middle East & Africa
Saudi Arabia
South Africa
Key Companies profiled:
Fujitsu Limited
Dug Technology
Green Revolution Cooling Inc.
Submer
Liquid Stack
Midas Green Technologies
Asperitas
DCX- The Liquid Cooling Company
LiquidCool Solutions
ExaScaler Inc.
Order a free sample PDF of the Immersion Cooling Market Intelligence Study, published by Grand View Research.
0 notes
Text
What Is Exascale Computing? Powering The Future Innovations
What is exascale computing?
Exascale computing refers to supercomputers that execute 10^18 operations per second. Field specialists predicted this milestone would be reached around 2022 - and it was, when the Frontier system crossed the exascale barrier. The explosive growth of big data, the rapid acceleration of digital transformation, and the growing dependence on artificial intelligence have made exascale computing a potential foundation for a global infrastructure that can handle much heavier workloads and demanding performance standards.
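To make 10^18 operations per second concrete, here is a small illustrative calculation; the hardware speeds are round-number assumptions for comparison, not benchmark results:

```python
# Illustration of scale using round-number assumptions (not benchmarks):
# how long would 10**18 floating-point operations take on different machines?
ops = 10**18

machines = {
    "desktop PC (~100 GFLOPS)": 100e9,
    "petascale system (1 PFLOPS)": 1e15,
    "exascale system (1 EFLOPS)": 1e18,
}

for name, flops in machines.items():
    seconds = ops / flops
    print(f"{name:<30} {seconds:>14,.0f} s  (~{seconds / 86_400:,.1f} days)")
# A desktop would need months of nonstop work to match what an exascale
# machine does in a single second.
```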
Why is Exascale Computing Important?
Exascale computing can help humanity simulate and analyze the world to solve its most pressing challenges. It has applications in physics, genetics, subatomic structures, and AI, and it could improve weather forecasting, healthcare, and drug development, among other domains. Arm Neoverse is also making inroads in high-performance computing, powering some of the world's fastest supercomputers and enabling cloud HPC; Arm gives HPC designers the freedom to independently apply the technologies that boost performance in exascale-class CPUs.
What are the benefits of exascale computing?
The ability to tackle problems at extraordinarily complex levels is the foundation of exascale computing’s main advantages.
Scientific discovery: Scientific technology is always evolving, and supercomputing is urgently needed as advancements, validations, and ongoing research further scientific understanding. Exascale computing can model unstable chemicals and materials, help explain the origins of the chemical elements, test natural laws, and investigate particle physics. Without supercomputing, discoveries arising from research and analysis in these areas would not be possible.
Security: The security sector has a high demand for supercomputing. Exascale computing promotes development and efficiency in food production, sustainable urban planning, and natural-disaster recovery planning, while also helping fend off new physical and cyber threats to national, energy, and economic security.
National Security: Exascale computing's ability to analyze hostile situations and respond intelligently to threats is advantageous for national security. This amount of processing, which counters many hazards and threats to the nation's safety, happens at almost unfathomable speeds.
Energy security: Exascale computing makes energy security possible by facilitating the study of stress-resistant crops and aiding in the development of low-emission technology. An essential part of the country’s security initiatives is making sure that food and energy supplies are sustainable.
Economic security: Exascale computing improves economic security in a number of ways. It makes it possible to accurately assess the risk of natural disasters, including anticipating seismic activity and developing preventative measures. Supercomputing is also particularly useful for urban planning, helping with plans for efficient construction and effective use of the electric grid.
Healthcare: Exascale computing has a lot to offer the medical sector, particularly in the area of cancer research. Crucial procedures in cancer research have been transformed and expedited by clever automation capabilities and prediction models for drug responses.
How does exascale computing work?
To simulate the universe's basic forces, exascale computers perform 1,000,000,000,000,000,000 (10^18) floating-point operations per second.
Built from the ground up, these supercomputers address today's massive demands in analytics, AI, and converged modeling and simulation. Exascale supercomputers can combine a variety of CPUs and GPUs - even from different generations - along with multisocket nodes and other processing devices in a single integrated infrastructure, resulting in dependable performance.
Computing design is essential to meeting the demands of your organization, since workloads change quickly. These supercomputers offer a single administration and application-development architecture, can be purpose-built, and support a variety of silicon processing options.
Answering the most challenging research questions in the world requires machines that can move data between processors and storage rapidly and without lag. Despite the enormous amount of hardware and components used in their construction, exascale computers can do exactly that.
Exascale computing vs quantum computing
Exascale computing
Exascale computing describes systems that use an infrastructure of CPUs and GPUs to handle and analyze data at a rate of a billion billion (10^18) calculations per second, running on some of the world's most powerful hardware.
Quantum computing
Quantum computing does not follow conventional computing techniques: instead of binary bits, quantum systems use qubits, which can exist in superposition and become entangled. These properties, made possible by the principles of quantum theory in physics, allow such systems to analyze and solve certain problems efficiently.
Compared to quantum computing, exascale computing can currently process and solve problems, inform decisions, and deliver technical advancements at a far faster rate. In the longer term, however, quantum computing is expected to significantly outperform exascale machines on certain classes of problems. Additionally, quantum computers use far less energy to run some workloads comparable to those of exascale supercomputers.
Read more on Govindhtech.com
#ExascaleComputing#Exascale#AI#CloudSecurity#HPC#security#supercomputers#CPU#GPU#News#Technews#Technology#Technologynews#Technologytrends#Govindhtech
0 notes
Text
The World's Most Powerful Supercomputers: A Race for Exascale
Supercomputers, once reserved for a select few research institutes, are now deployed in far greater numbers and have become more powerful than ever. These computing behemoths enable innovations ranging from climate modeling to artificial intelligence. Here is a look at some of the most powerful supercomputers around the world:
The World's Most Powerful Supercomputers
1. Frontier
Location: Oak Ridge National Laboratory, Tennessee, USA
Performance: Exascale computing - systems that can perform a quintillion (10^18) calculations in one second.
Applications: Energy and environmental research, materials science, and artificial intelligence.
2. Fugaku
Location: RIKEN Center for Computational Science, Kobe, Japan
Performance: Roughly 442 petaFLOPS on the standard HPL benchmark (and beyond an exaFLOP on lower-precision AI workloads); often noted for its energy efficiency and versatility.
Applications: Weather forecasting, drug discovery, and materials science.
3. Perlmutter
Location: National Energy Research Scientific Computing Center (NERSC), Berkeley Lab, California, USA
Performance: A pre-exascale, GPU-accelerated system delivering tens of petaFLOPS.
Applications: Artificial intelligence, climate modeling, and materials science.
Other Notable Supercomputers
Tianhe-3: A Chinese supercomputer reported to deliver very high performance with low energy consumption.
Sierra: A US supercomputer at Lawrence Livermore National Laboratory built for nuclear weapons simulation and scientific research.
Summit: An older system at Oak Ridge National Laboratory that still has a wide range of uses and has supported studies around the world.
These supercomputers are vital not only in solving complicated problems but also in improving technology. As technology continues to develop, we should expect to see more and more powerful supercomputers that will push the limits of what is currently achievable.
0 notes
Text
Google creates the Mother of all Computers: One trillion operations per second and a mystery
https://www.ecoticias.com/en/google-exascale-supercomputer/7624/ Beast system
0 notes
Text
The AI Compute Connection: Canada and the UK strengthen ties
The race for supercomputing power is heating up globally, with nations recognizing its pivotal role in training the next generation of AI models. Canada and the UK have emerged as leading players in this field, with a shared vision to harness the potential of AI for the benefit of society. To further solidify this partnership, the SIN Canada team organized a high-level inward mission to the UK (15-18 July 2024) aimed at deepening collaboration in the dynamic field of AI compute. The Canadian delegation visited the UK to gain insights into the UK's supercomputing landscape. The mission was underpinned by the Memorandum of Understanding (MoU) signed in early 2024 by the UK and Canadian governments, which established a cooperative framework for future collaboration in AI compute.

The delegation comprised some of the most senior officials from Innovation, Science and Economic Development Canada; board-level representatives of Canada's world-leading AI institutes (MILA, Amii, and Vector); as well as CIFAR, the Communications Security Establishment, and the Digital Research Alliance of Canada. The program was packed with visits to cutting-edge facilities like Isambard-AI in Bristol and the exascale project in Edinburgh, offering a firsthand look at the UK's supercomputing capabilities and these complex, technical programmes.

A core focus of the mission was to understand the policy development behind the UK's compute investments, its exascale investment, and the AI Research Resource. In April 2024, Prime Minister Trudeau announced a Canadian investment of CA$2 billion (£1.2 billion) to launch a new AI Compute Access Fund and a Canadian AI sovereign compute strategy. As the sector develops, officials are keen to learn from the UK's experience in building such large-scale infrastructure. Additionally, the delegation sought insights into the UK's project management and procurement approaches, access policies, and strategies for addressing the energy-consumption challenges associated with supercomputing - sustainable infrastructure is one element of the MoU.

The mission also provided an opportunity to explore the UK's approach to AI safety and security. Meetings with the UK National Cyber Security Centre and the AI Safety Institute were crucial to understanding the measures being taken to mitigate risks associated with AI development; the British and Canadian cyber security centres already cooperate in this area, including by endorsing the UK's Guidelines for Secure AI System Development.

Beyond technical discussions, the delegation enjoyed high-level networking events, including a cocktail reception at the Royal Society and a lunch at Canada House. These events facilitated valuable dialogue with key stakeholders in the UK AI ecosystem. One participant said:

… It was a masterfully organized and assembled group of visits in a whirlwind format. The mission achieved more than I anticipated in terms of breadth and depth of topic areas, tours, and knowledge sharing. To say that the visit was inspirational would be an understatement. Rather, having seen what is possible and underway in the UK, I would venture to say that it has motivated a re-evaluation of what we believe could be possible, not only in Canada, but also in what partnerships and cooperation might be sparked between Canada and the UK in the realm of AI, compute infrastructure, and AI safety.
It truly brought to life the true spirit of the UK-Canada MoU …

This SIN Canada-led inward mission marks a significant step forward in Canada-UK AI collaboration. By sharing knowledge and best practices, both countries can accelerate their progress in developing world-class supercomputing infrastructure. The ultimate goal is to create an environment where AI research and innovation can flourish, driving economic growth and addressing societal challenges. As the world becomes increasingly reliant on AI, partnerships like the one between Canada and the UK will be essential for shaping the future of this transformative technology. There will likely be a return visit in February 2025 to further cement UK-Canada AI collaboration and strengthen connections between UK and Canadian AI experts.
0 notes
Text
What is Exascale Computing?… https://patient1.tumblr.com/post/761561652081704960?utm_source=dlvr.it&utm_medium=tumblr
0 notes
Text
Scientists prepare for the most ambitious sky survey yet, anticipating new insight on dark matter and dark energy
On a mountain in northern Chile, scientists are carefully assembling the intricate components of the NSF–DOE Vera C. Rubin Observatory, one of the most advanced astronomical facilities in history. Equipped with an innovative telescope and the world's largest digital camera, the observatory will soon begin the Legacy Survey of Space and Time (LSST).
Over the course of the LSST's 10-year exploration of the cosmos, the Rubin Observatory will take 5.5 million data-rich images of the sky. Wider and deeper in volume than all previous surveys combined, the LSST will provide an unprecedented amount of information to astronomers and cosmologists working to answer some of the most fundamental questions in science.
Heavily involved in the LSST Dark Energy Science Collaboration (DESC), scientists at DOE's Argonne National Laboratory are working to uncover the true nature of dark energy and dark matter. In preparation for the LSST, they're performing advanced cosmological simulations and working with the Rubin Observatory to shape and process its data to maximize the potential for discovery.
Simulating the dark side
Together, dark energy and dark matter make up a staggering 95% of the energy and matter in the universe, but scientists understand very little about them. They see dark matter's effects in the formation and movement of galaxies, but when they look for it, it seems like it's not there. Meanwhile, space itself is expanding faster and faster over time, and scientists don't know why. They refer to this unknown influence as dark energy.
"Right now, we have no clue what their physical origins are, but we have theories," said Katrin Heitmann, deputy director of Argonne's High Energy Physics (HEP) division. "With the LSST and the Rubin Observatory, we really think we can get good constraints on what dark matter and dark energy could be, which will help the community to pursue the most promising directions."
In preparation for the LSST, Argonne scientists are taking theories about particular attributes of dark matter and dark energy and simulating the evolution of the universe under those assumptions.
It's important that the scientists find ways to map their theories to signatures the survey can actually detect. For example, how would the universe look today if dark matter had a slight temperature, or if dark energy was super strong right after the universe began? Maybe some structures would end up fuzzier, or maybe galaxies would clump in a certain way.
Simulations can help researchers predict what features will actually appear in real-world data from the LSST that would indicate a certain theory is true.
Simulations also allow the collaboration to validate the code they will use to process and analyze the data. For example, together with LSST DESC and the collaboration behind NASA's Nancy Grace Roman Space Telescope, Argonne scientists recently simulated images of the night sky as each telescope will actually see it. To ensure their software performs as intended, scientists can test it on this clean, simulated image data before they begin processing the real thing.
To perform their simulations, Argonne scientists leverage the computational resources of the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science user facility. Among its suite of supercomputers, the ALCF houses Aurora, one of the world's first exascale machines, which can perform over one quintillion—or one billion billion—calculations per second.
"Aurora's impressive memory and speed will allow us to simulate larger volumes of the universe and account for more physics in the simulations than ever before, while maintaining high enough resolution to get important details right," said Heitmann, who formerly served as spokesperson for the LSST DESC.
What to expect when you're expecting an astronomical amount of data
During the LSST, light emitted a long time ago from galaxies far away will reach the observatory. Sensors on the observatory's camera will convert the light into data, which will travel from the mountain to several Rubin Project data facilities around the world. These facilities will then prepare the data to be sent to the larger community for analysis.
As part of the LSST DESC, Argonne scientists are currently working with the Rubin Observatory to ensure the data is processed in ways that are most conducive to their scientific goals. For example, Argonne physicist Matthew Becker works closely with the Rubin Project to develop algorithms for data processing that will enable investigation of dark matter and dark energy through a phenomenon called weak gravitational lensing.
"As light from distant galaxies travels to the observatory, its path is influenced by the gravitational pull of the mass in between, including dark matter," said Becker.
"This means that, as the observatory will see them, the shapes and orientations of the galaxies are slightly correlated in the sky. If we can measure this correlation, we can learn about the distribution of matter—including dark matter—in the universe."
Weak gravitational lensing can also reveal how the structure of the universe has changed over time, which could shed light on the nature of dark energy. The challenge is that the signals that indicate weak gravitational lensing in the LSST data will be, well, weak. The strength of the signal the scientists are looking for will be roughly 30 times smaller than the expected level of noise, or unwanted signal disturbance, in the data.
This means the scientists need a whole lot of data to make sure their measurements are accurate, and they're about to get it. Once complete, the LSST will have generated 60 petabytes of image data, or 60 million gigabytes. It would take over 11,000 years of watching Netflix to use that amount of data.
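As a rough sanity check of that comparison (my own arithmetic; the streaming rate is an assumption, not something stated in the article):

```python
# Rough sanity check of the "11,000 years of Netflix" comparison. The
# streaming rate is derived, not stated in the article.
dataset_gb = 60e6                    # 60 petabytes = 60 million gigabytes
hours = 11_000 * 365.25 * 24         # 11,000 years of nonstop viewing

implied_gb_per_hour = dataset_gb / hours
print(f"Implied streaming rate: {implied_gb_per_hour:.2f} GB per hour")
# ~0.6 GB/hour is in the ballpark of standard-definition streaming, so the
# comparison is plausible for SD playback.
```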
Becker and his colleagues are developing methods to compress the data to make analysis both manageable and fruitful. For example, by combining images of the same parts of the sky taken at different times, the scientists can corroborate features in the images to uncover correlations in the shapes of galaxies that might have otherwise been too faint to detect.
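As a loose illustration of the statistics behind that image stacking — not the collaboration's actual pipeline, just the standard square-root-of-N averaging argument — consider repeatedly measuring a feature whose brightness is far below the per-image noise:

```python
# Minimal statistical sketch (not LSST pipeline code): averaging N repeat
# measurements of the same faint feature suppresses random noise by roughly
# sqrt(N), which is why combining many visits (and many galaxies) makes a
# signal ~30x smaller than the per-image noise measurable.
import numpy as np

rng = np.random.default_rng(42)
signal = 1.0          # true brightness of a faint feature (arbitrary units)
noise_sigma = 30.0    # per-measurement noise, ~30x larger than the signal

for n in (1, 100, 900, 10_000):
    stacked = np.mean(signal + rng.normal(0.0, noise_sigma, size=n))
    expected_snr = signal / (noise_sigma / np.sqrt(n))
    print(f"{n:>6} measurements: expected SNR ~ {expected_snr:4.2f}, "
          f"stacked estimate = {stacked:6.2f}")
```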
Becker is also focused on determining the level of confidence the community can expect to have in conclusions drawn from the compressed data.
"If we know how certain we can be in our analysis, it enables us to compare our results with other experiments to understand the current state of knowledge across all of cosmology," said Becker. "With the data from the LSST, things are about to get much more interesting."
IMAGE: Simulated images of the cosmos from the DC2 simulated sky survey conducted by the Legacy Survey of Space and Time (LSST) Dark Energy Science Collaboration (DESC). DC2 simulated five years of image data as it will be generated by the Rubin Observatory during the LSST. Credit: LSST DESC
3 notes