#EPYCCPUs
Top Oracle Exadata Databases Run Faster On AMD EPYC CPUs
The best databases in the world perform better on AMD EPYC processors.
Oracle Exadata
AMD believes its EPYC processors are the finest data center CPUs available, and its customers agree. EPYC processors have become widely adopted: they power the Frontier system at Oak Ridge National Laboratory, the most powerful supercomputer in the world, and they are offered in over 800 public cloud instances across all major cloud service providers. Businesses across every vertical are also using more and more EPYC CPUs to power their on-premises infrastructure.
Indeed, AMD EPYC now holds over 300 world records for performance and efficiency, both on-premises and in the cloud. But did you realize that data management and analytics account for more than 50 of those records? That makes AMD EPYC the best option available for the world's most critical and fundamental business workloads.
The two databases that large enterprises use most often are Oracle and Microsoft SQL Server. AMD's 4th Gen EPYC CPUs deliver impressive performance whether you are running Microsoft SQL Server or the Oracle Exadata Database Platform. First, let's examine the Oracle Exadata platform to understand how AMD EPYC has advanced database computing.
The Database Platform for Oracle Exadata
Managing Oracle Exadata database workloads is a major undertaking. Exadata offers fast transaction processing, analytics, and real-time vector search, and the system handles mission-critical tasks. These workloads complete online financial transactions instantly while detecting fraudulent activity in real time. They help carriers maintain their networks and keep up with the growing demands of the worldwide 5G network, and they keep internet transactions flowing, particularly during peak shopping seasons such as Cyber Monday and Singles' Day.
Oracle developed the Oracle Exadata Database Platform to provide performance, scalability, availability, and security while supporting these workloads. This platform, which includes hardware, software, and an Oracle database, is simple to set up.
Oracle made a significant shift in 2023 after depending on Intel CPUs for years to power the Oracle Exadata Database Platform. To achieve better performance and energy efficiency, Oracle chose the 4th Gen AMD EPYC CPU (previously code-named "Genoa") for the latest Exadata X10M system.
Oracle Exadata
Think about the impact that EPYC CPUs have: thanks to AMD EPYC, the Exadata X10M Database Platform offers three times more database server cores per socket than the preceding Exadata X9M-2 system, going from 32 to 96 cores. Combined with Oracle's optimized software, more EPYC CPU cores simply translate into more database transactions and faster analytics.
Meanwhile, thanks to its remarkable core count, the dual-socket X10M database server can now tackle tasks that previously required expensive, power-hungry 8-socket servers. By doing the same work with fewer, smaller servers, an organization can reclaim precious data center space and save energy.
X10M Database server
AMD and Oracle worked together to tune the new Exadata X10M server so that database performance scales linearly up to 192 cores per database server (2 x 96-core CPUs). Thanks to the 4th Gen AMD EPYC CPU, Exadata system software can also encrypt and decrypt data faster than the data can be delivered to the chip. Furthermore, the Exadata system software was tailored to take full advantage of AMD EPYC's reliability, availability, and serviceability (RAS) features, such as Platform First Error Handling, which improves uptime for critical database workloads.
Microsoft SQL Server
Let's now examine Microsoft SQL Server and see how AMD EPYC leads the industry in database transaction processing speed. Businesses often use the Transaction Processing Performance Council (TPC) benchmarks to assess processor performance as well as overall server system performance. To guarantee the validity of benchmark results, TPC.org maintains an audited library of benchmarks. Let's examine two widely used TPC benchmarks for database performance: TPC-H and TPC-E.
The TPC-H benchmark evaluates decision support systems that analyze vast amounts of data, run intricate queries, and answer important business questions. On this test, the best result (the most queries per hour) is achieved by a non-clustered system running Microsoft SQL Server 2022 on a 64-core 4th Gen AMD EPYC CPU. The AMD-based system performs 14% better than a system powered by the latest 5th Gen Intel Xeon 8592+. Put another way, with 14% more queries answered per hour, a business can evaluate data considerably faster and reach business outcomes sooner.
The findings above are for a 10 TB database. The TPC-H results also show that AMD EPYC-powered systems deliver the best non-clustered performance for 3 TB (EPYCWR-869) and 1 TB (EPYCWR-865) databases.
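To make the 14% figure concrete, here is a minimal arithmetic sketch. The absolute queries-per-hour values below are hypothetical placeholders, not audited TPC-H results; only the 14% ratio comes from the benchmark comparison above.

```python
# Hypothetical illustration: the QphH values are placeholders, not audited
# TPC-H results. Only the 14% advantage is taken from the comparison above.
baseline_qphh = 1_000_000          # queries per hour on the comparison system
epyc_qphh = baseline_qphh * 1.14   # 14% more queries per hour on the EPYC system

batch = 500_000                    # a fixed analytics batch, in queries

hours_baseline = batch / baseline_qphh
hours_epyc = batch / epyc_qphh

print(f"Baseline: {hours_baseline:.3f} h, EPYC: {hours_epyc:.3f} h")
print(f"Wall-clock time saved: {(1 - hours_epyc / hours_baseline):.1%}")  # ~12.3%
```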
Meanwhile, the TPC-E benchmark measures online transaction processing (OLTP). It involves twelve concurrent transaction types of varying complexity, initiated either online or by time- or price-based triggers.
Once again, the AMD-based system running Microsoft SQL Server 2022 achieves the best performance for a non-clustered server on the TPC-E benchmark. Compared with a system using a 5th Gen Intel Xeon CPU, the system built around the 4th Gen AMD EPYC processor performs 7% better.
In summary
AMD has shown that its EPYC CPUs deliver the best performance on the market for Microsoft SQL Server and Oracle Exadata databases. It is not worth taking shortcuts when organizing and analyzing data that is essential to the continuous operation of your organization.
AMD continues to set the standard in the server industry with the upcoming release of 5th Gen EPYC CPUs. In the interim, AMD's 4th Gen EPYC CPUs remain the obvious choice for the most demanding database workloads and the finest database performance.
Read more on Govindhtech.com
Dynatron Fan A31-DYN AMD EPYC Socket SP3 with 8013 Aluminum PWM for 1U Server 180 Watts Brown Box
The A31 is recommended for AMD EPYC, Socket SP3, and uses an 8013 aluminum PWM blower for heat exhaust. It is an active cooler for 1U servers supporting CPU power dissipation of up to 180 watts.
CPU Support: AMD EPYC
CPU Socket: SP3
Solution: 1U Server and Up
Material: Copper 1100 heatsink with vapor chamber base
Fan Dimension: 80 x 13 mm
Speed: at 20% duty cycle, 1500 +/- 200 RPM; at 50% duty cycle, 4000 +/- 10% RPM; at 100% duty cycle: …
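As a rough illustration of how those duty-cycle points can be used, the sketch below linearly interpolates an expected fan speed between the two published values (20% -> ~1500 RPM, 50% -> ~4000 RPM). The straight-line fan curve is an assumption for illustration, not vendor-specified behavior.

```python
# Linear interpolation between the two published duty-cycle points for the
# Dynatron A31 (20% -> ~1500 RPM, 50% -> ~4000 RPM). The straight-line fan
# curve is an assumption for illustration, not vendor-specified behavior.
def estimated_rpm(duty_pct: float) -> float:
    d0, r0 = 20.0, 1500.0
    d1, r1 = 50.0, 4000.0
    # Clamp to the published range so we never extrapolate past known points.
    duty_pct = max(d0, min(d1, duty_pct))
    return r0 + (r1 - r0) * (duty_pct - d0) / (d1 - d0)

for duty in (20, 30, 40, 50):
    print(f"{duty}% duty -> ~{estimated_rpm(duty):.0f} RPM")
```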
In Dual-Socket Systems, Ampere's 192-Core CPUs Stress the ARM64 Linux Kernel
Ampere’s 192-Core CPUs Stress ARM64 Linux Kernel
In the realm of ARM-based server CPUs, the abundance of cores can present unforeseen challenges for Linux operating systems. Ampere, a prominent player in this space, has recently launched its AmpereOne data center CPUs, boasting an impressive 192 cores. However, this surplus of computing power has led to complications in Linux support, especially in systems employing two of Ampere’s 192-core chips (totaling a whopping 384 cores) within a single server.
The Core Conundrum
According to reports from Phoronix, the ARM64 Linux kernel currently struggles to support configurations exceeding 256 cores. In response, Ampere has taken the initiative by proposing a patch aimed at elevating the Linux kernel’s core limit to 512. The proposed solution involves implementing the “CPUMASK_OFFSTACK” method, a mechanism allowing Linux to override the default 256-core limit. This approach strategically allocates free bitmaps for CPU masks from memory, enabling an expansion of the core limit without inflating the kernel image’s memory footprint.
Tackling Technicalities
Implementing the CPUMASK_OFFSTACK method is crucial, given that each supported core otherwise adds roughly 8KB to the kernel image size. Ampere's cutting-edge CPUs stand out with the highest core count in the industry, surpassing even AMD's latest Zen 4c EPYC CPUs, which top out at 128 cores. This unprecedented core count places Ampere in uncharted territory, making it the first CPU manufacturer to grapple with the ARM64 Linux kernel's 256-core threshold.
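For readers curious where their own machine stands, here is a minimal sketch that reports the relevant build-time symbols, CONFIG_NR_CPUS and CONFIG_CPUMASK_OFFSTACK. It assumes the kernel config is exposed at /boot/config-<release> (a common distribution convention) or via /proc/config.gz; on systems without either, it simply reports that the config is unavailable.

```python
# Minimal sketch: report the kernel's built-in CPU limit (CONFIG_NR_CPUS) and
# whether CPUMASK_OFFSTACK is enabled. Assumes the build config is available
# at /boot/config-<release> (a common distro convention) or /proc/config.gz.
import gzip
import os
import platform

def read_kernel_config() -> str:
    boot_cfg = f"/boot/config-{platform.release()}"
    if os.path.exists(boot_cfg):
        with open(boot_cfg) as f:
            return f.read()
    if os.path.exists("/proc/config.gz"):
        with gzip.open("/proc/config.gz", "rt") as f:
            return f.read()
    return ""

config = read_kernel_config()
if not config:
    print("Kernel build config not exposed on this system.")
for line in config.splitlines():
    # Catch both "CONFIG_...=y" entries and "# CONFIG_... is not set" lines.
    if "CONFIG_NR_CPUS=" in line or "CONFIG_CPUMASK_OFFSTACK" in line:
        print(line)
```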
The Impact on Data Centers
While the core limit predicament does not affect systems equipped with a single 192-core AmpereOne chip, it poses a significant challenge for data center servers housing two of these powerhouse chips in a dual-socket configuration. Notably, SMT logical cores, or threads, also exceed the 256 figure on various systems, further compounding the complexity of the issue.
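A quick way to check whether a given server approaches that ceiling is simply to count the logical CPUs the kernel exposes; a minimal sketch:

```python
# Count the logical CPUs visible to this process and compare against the
# 256-CPU default limit discussed above. sched_getaffinity is Linux-only.
import os

visible = len(os.sched_getaffinity(0))  # logical CPUs this process may run on
total = os.cpu_count()                  # logical CPUs the OS reports overall

print(f"Logical CPUs visible to this process: {visible}")
print(f"Logical CPUs reported by the OS:      {total}")
if total and total > 256:
    print("This system exceeds the 256-CPU default limit.")
else:
    print("This system is within the 256-CPU default limit.")
```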
AmpereOne: A Revolutionary CPU Lineup
AmpereOne represents a paradigm shift in CPU design, featuring models with core counts ranging from 136 to an astounding 192 cores. Built on the ARMv8.6+ instruction set and leveraging TSMC’s cutting-edge 5nm node, these CPUs boast dual 128b Vector Units, 2MB of L2 cache per core, a 3 GHz clock speed, an eight-channel DDR5 memory controller, 128 PCIe Gen 5 lanes, and a TDP ranging from 200 to 350W. Tailored for high-performance data center workloads that can leverage substantial core counts, AmpereOne is at the forefront of innovation in the CPU landscape.
The Road Ahead
Despite Ampere's proactive approach in submitting the patch to address the core limit challenge, achieving 512-core support might take some time. In 2021, a similar proposal sought to increase the ARM64 Linux CPU core limit to 512, but Linux maintainers rejected it because no available CPU hardware had more than 256 cores at the time. Even in an optimistic scenario, 512-core support may not become a reality until the release of Linux kernel 6.8 in 2024.
A Glimmer of Hope
It’s important to note that the outgoing Linux kernel already supports the CPUMASK_OFFSTACK method for augmenting CPU core count limits. The ball is now in the court of Linux maintainers to decide whether to enable this feature by default, potentially expediting the timeline for achieving the much-needed 512-core support.
In conclusion, Ampere’s 192-core CPUs have thrust the industry into uncharted territory, necessitating innovative solutions to overcome the limitations of current ARM64 Linux kernel support. As technology continues to advance, collaborations between hardware manufacturers and software developers become increasingly pivotal in ensuring seamless compatibility and optimal performance for the next generation of data center systems.
Read more on Govindhtech.com
AMD EPYC 9005: 5th Gen AMD EPYC CPU With Zen 5 Design
AMD EPYC 5th Gen Processors
The AMD EPYC 9005 family of processors, designed specifically to speed up workloads in data centers, the cloud, and artificial intelligence, is pushing the boundaries of enterprise computing performance.
According to AMD, the 5th Gen AMD EPYC processors, originally codenamed "Turin," are now available and are the world's top server CPUs for cloud, AI, and enterprise applications.
Built on the "Zen 5" core architecture, the AMD EPYC 9005 Series processors extend the record-breaking performance and energy efficiency of previous generations. They are compatible with the widely used SP5 platform and offer a wide range of core counts, from 8 to 192. The top-of-stack 192-core CPU can deliver up to 2.7X the performance of the competition.
AMD EPYC 9575F
The 64-core AMD EPYC 9575F, a new CPU in the AMD EPYC 9005 Series, is designed specifically for GPU-powered AI solutions that need the highest host CPU performance. Boosting up to 5 GHz, it offers up to 28% faster processing than the competition's 3.8 GHz CPU, which is necessary to keep GPUs fed with data for demanding AI workloads.
The World’s Best CPU for Enterprise, AI and Cloud Workloads
From supporting corporate AI-enablement programs to powering massive cloud-based infrastructures to hosting the most demanding business-critical apps, modern data centers handle a wide range of workloads. For the wide range of server workloads that power corporate IT today, the new 5th Gen AMD EPYC processors provide industry-leading performance and capabilities.
The new "Zen 5" core design delivers up to 17% more instructions per clock (IPC) for enterprise and cloud applications, and up to 37% more IPC for AI and HPC, compared with "Zen 4."
When comparing AMD EPYC 9965 processor-based servers to Intel Xeon 8592+ CPU-based servers, users may anticipate significant improvements in their real-world workloads and applications, including:
Results in business applications such as video transcoding can be obtained up to 4 times faster.
Time to insight for science and HPC applications that tackle the world's most difficult challenges can be up to 3.9 times faster.
Performance per core in virtualized infrastructure can increase by up to 1.6X.
Whether they run CPU-only or CPU + GPU systems, 5th Gen AMD EPYC processors let customers achieve fast time to insight and deployment for AI installations, in addition to providing leading performance and efficiency in general-purpose workloads.
Compared with the competition:
The 192-core EPYC 9965 CPU can perform up to 3.7X better on end-to-end AI workloads, such as TPCx-AI (derivative), driving an efficient approach to generative AI.
For small and medium enterprise-class generative AI models, such as Meta Llama 3.1-8B, the EPYC 9965 offers 1.9 times the throughput performance of its competitors.
Lastly, with the aid of the EPYC 9575F, a purpose-built AI host node CPU, and its 5 GHz maximum boost frequency, a 1,000-node AI cluster can push up to 700,000 additional inference tokens per second, doing more work more quickly.
Customers can achieve 391,000 units of SPECrate2017_int_base general-purpose computing performance by modernizing a data center with these new processors, using approximately 87% fewer servers and an estimated 71% less power while still receiving impressive performance across a variety of workloads. This gives CIOs the choice to boost performance for routine IT activities while achieving remarkable AI performance, or to bank the space and power savings.
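The consolidation math behind a claim like "87% fewer servers" follows directly from per-server throughput. The sketch below uses hypothetical per-server SPECrate2017_int_base scores, chosen only to show the shape of the calculation; they are not AMD's or Intel's published figures.

```python
# Hypothetical consolidation math. The per-server scores are placeholders for
# illustration only; they are NOT published SPECrate2017_int_base results.
import math

target_capacity = 391_000     # total SPECrate2017_int_base units required (from the article)
score_legacy_server = 400     # assumed score of one older server (placeholder)
score_new_server = 3_000      # assumed score of one 5th Gen EPYC server (placeholder)

legacy_servers = math.ceil(target_capacity / score_legacy_server)
new_servers = math.ceil(target_capacity / score_new_server)

reduction = 1 - new_servers / legacy_servers
print(f"Legacy servers needed: {legacy_servers}")
print(f"New servers needed:    {new_servers}")
print(f"Server count reduction: {reduction:.0%}")  # ~87% with these placeholder scores
```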
AMD EPYC CPUs: Pioneering the Next Innovation Wave
EPYC CPUs have been widely adopted to power the most demanding computing operations thanks to their demonstrated performance and extensive ecosystem support among partners and customers. With industry-leading performance, features, and density, AMD EPYC CPUs enable customers to create value rapidly and effectively in their data centers and IT environments.
Features of the 5th Gen AMD EPYC
With support from Cisco, Dell, Hewlett Packard Enterprise, Lenovo, Supermicro, and all major ODMs and cloud service providers, the whole array of 5th Gen AMD EPYC processors is now available, offering businesses looking to lead in compute and AI a straightforward upgrade path.
The AMD EPYC 9005 series CPUs include the following high-level features:
Leadership core count options ranging from 8 to 192 cores per CPU
"Zen 5" and "Zen 5c" core architectures
12 DDR5 memory channels per CPU
Support for up to DDR5-6400 MT/s (see the bandwidth sketch after this list)
Leadership boost frequencies of up to 5 GHz
Full 512b data path for AVX-512
Trusted I/O for Confidential Computing, with FIPS certification underway for every part in the series
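As a back-of-the-envelope illustration of what the memory items above imply, the sketch below computes the theoretical peak per-socket DRAM bandwidth for 12 channels of DDR5-6400. It assumes the standard 64-bit data bus per DDR5 channel; sustained real-world bandwidth will be noticeably lower than this peak figure.

```python
# Theoretical peak DRAM bandwidth per socket for 12 channels of DDR5-6400.
# Assumes the standard 64-bit (8-byte) data bus per channel; real-world
# sustained bandwidth will be noticeably lower than this peak figure.
channels = 12
transfers_per_second = 6_400_000_000  # DDR5-6400 = 6400 MT/s
bytes_per_transfer = 8                # 64-bit channel data bus

peak_per_channel = transfers_per_second * bytes_per_transfer  # 51.2 GB/s
peak_per_socket = peak_per_channel * channels                 # 614.4 GB/s

print(f"Per channel: {peak_per_channel / 1e9:.1f} GB/s")
print(f"Per socket:  {peak_per_socket / 1e9:.1f} GB/s")
```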
Read more on govindhtech.com