#IntelXeonScalableprocessor
govindhtech · 3 days ago
Intel vRAN Single-Server Site Fusion with Intel Xeon 6 SoCs
Multi-server open vRAN sites can be consolidated into a single-server footprint thanks to the Intel Xeon 6 SoC.
The continuous advancement of open RAN reflects the telecom industry's ongoing transition to software-based RAN designs that are scalable, adaptable, and open. This year has brought significant developments and unmistakable progress: among other milestones, one major operator announced earlier this year that it will construct Canada's first 5G virtualized, Open RAN network, which is now underway, and Vodafone started rolling out commercial Open RAN in Romania.
For almost ten years, Intel has been steadfastly dedicated to the telecom business. Intel has worked closely with operators to promote virtualization in the 5G network core and, more recently, in radio access networks, bringing the best cloud technologies to telecommunications and partnering with other companies. Intel has long understood that to succeed, operators need cost-effective, high-performing products that enable them to achieve their long-term objectives.
Intel vRAN Boost
Silicon innovation is the key to satisfying operators' demands for cost of ownership and network performance. Since 4th Gen Intel Xeon Scalable processors with Intel vRAN Boost went on sale earlier this year, AT&T, Telus, Verizon, Vodafone, and other tier-1 operators have committed to deploying them. They remain the only vRAN processors in the industry with fully integrated acceleration.
The Intel Xeon 6 SoC, formerly known as Granite Rapids-D, will build on this achievement by providing significant improvements in performance and power efficiency. These processors, which will be on the market in 2025, have more than twice as many cores as currently available 4th Gen Xeon processors and include architectural improvements that boost Intel vRAN capacity.
Its integrated Intel Ethernet and Intel vRAN Boost acceleration improve performance, security, and management. In 2025, Intel will also introduce a new line of Ethernet E830 Controllers and Network Adapters with features including Precision Time Measurement (PTM), a maximum data bandwidth of 200 Gbps, and other accurate time-synchronization capabilities. Combined with the integrated Intel Ethernet, this new series of Ethernet Controllers and Network Adapters will provide the adaptability required to meet varied vRAN connection needs globally.
Thanks to the Intel Xeon 6 SoC's improved design and capacity increase, most site setups that currently need two or more servers will be able to operate on a single vRAN server, enabling network operators to drastically decrease their server footprint. Compared to earlier systems that often needed numerous servers per location, this consolidation can drastically reduce deployment capital expenditure. Increased performance-per-watt further enhances this advantage by lowering ongoing operational expenses through energy savings.
AI will also be crucial in helping operators achieve their ambitious energy-efficiency targets and RAN business objectives. AI can improve RAN efficiency, lower power consumption, and create new income streams by enabling intelligent network optimization, predictive maintenance, and resource allocation. Intel made it simpler for operators to start integrating AI in RAN earlier this year by releasing the Intel vRAN AI Development Kit in early availability. Intel recently showcased its AI models combined with Mavenir's commercial Open RAN software as a further step toward making AI ubiquitous in RAN.
This most recent CPU is designed to run AI in RAN settings. With Intel Advanced Vector Extensions (AVX) and Intel Advanced Matrix Extensions (AMX), the Intel Xeon 6 SoC offers potent integrated AI acceleration. Intel AMX's capacity to store more data in each core and compute bigger matrices in a single operation improves deep-learning inference and training performance. By processing AI inference workloads without the need for extra hardware, this CPU-based acceleration can reduce latency and maximize power and resource efficiency.
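As a rough illustration of how AMX surfaces to software (a hypothetical check, not from Intel's post), the sketch below runs a bfloat16 matrix multiplication in PyTorch with oneDNN verbose logging enabled; on an AMX-capable Xeon, the log typically reports an AMX kernel such as avx512_core_amx_bf16, while older CPUs fall back to AVX-512 or AVX2 kernels.

```python
# Hypothetical AMX check: run a bf16 matmul and watch which kernel oneDNN picks.
# Assumes a PyTorch build that uses oneDNN as its CPU backend (the default).
import os
os.environ["ONEDNN_VERBOSE"] = "1"  # set before oneDNN initializes

import torch

a = torch.randn(1024, 1024, dtype=torch.bfloat16)
b = torch.randn(1024, 1024, dtype=torch.bfloat16)

# On an AMX-capable Xeon, the verbose output should mention an "amx"
# implementation (e.g. avx512_core_amx_bf16); otherwise an AVX kernel is used.
c = a @ b
print(c.shape)
```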
With all these characteristics, Intel Xeon 6 SoCs will raise the bar for advanced AI capabilities, compact architecture, and Intel vRAN performance-per-watt. To secure operators' long-term success, Intel provides a multi-generation roadmap of CPU, Ethernet, and software technologies. There will be much more to say about Intel Xeon 6 SoCs and the rest of the portfolio in the coming months.
Read more on Govindhtech.com
govindhtech · 30 days ago
Decrease the Cost of Intel Spark SQL Workloads on Google Cloud
Reduce Google Cloud Expenses for Intel Spark SQL Workloads: As artificial intelligence (AI) dominates the news, businesses are trying to take advantage of the abundance of data coming in from devices, consumers, websites, and more. Big data analytics continues to fuel innovation, offering vital insights into consumer demographics, AI technology, and new prospects. The question is not whether you will need to add or grow your big data analytics, but when.
In this Spark blog series, Intel discusses Apache Spark SQL big data analytics applications and how to get the most out of Intel CPUs. This article covers Spark SQL's value and performance results on Google Cloud instances.
Combining Apache Spark with Google Cloud Instances Powered by the Latest Intel Processors
The robust Apache Spark framework is used by many commercial clients to handle massive amounts of data in the cloud. In some use cases, such as processing retail transactions, not completing tasks on time may result in service-level agreement (SLA) breaches, which can in turn lead to fines, decreased customer satisfaction, and harm to the company's image. By optimizing Apache Spark performance, businesses can manage additional projects, analyze more data, and meet deadlines. The result is more resilience and flexibility, since administrators can diagnose and fix faults without endangering overall performance.
Apache Spark is often used to ingest data from numerous sources into files or batches; typical examples include applications that ingest data from IoT sensors and streaming applications where it is essential to unify data processing across many languages in real time. After processing the data, Spark creates a target dataset, which businesses can use to build business intelligence dashboards, give decision-makers insights, or send data to other parties.
The increased processing capability of a well-designed Spark cluster, like Google Cloud N4 instances with 5th Gen Intel Xeon processors, makes it possible to stream and analyze massive amounts of data efficiently. This enables businesses to promptly distribute the processed data to suppliers or dependent systems.
By combining open-source Spark with 5th Gen Intel Xeon CPUs, businesses can increase the efficacy and economy of AI workloads, particularly in the data-preprocessing phases. Large datasets can be prepared for AI models in less time because Spark executes complicated ETL (extract, transform, load) processes more quickly and effectively on the latest Intel CPUs.
This allows businesses to optimize resource consumption, which reduces costs and shortens the AI development cycle. The combination of Spark with the newest Intel CPUs provides crucial scalability for AI applications that use big and complicated datasets, such as those in real-time analytics or deep learning. Businesses can quickly and accurately deploy AI models and get real-time insights that facilitate efficient, data-driven decision-making.
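As a minimal sketch of the ETL pattern described above, the PySpark example below ingests raw CSV files, transforms them, and writes a curated Parquet dataset; the bucket paths, schema, and column names are hypothetical placeholders.

```python
# Minimal ETL sketch in PySpark; file paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: ingest raw events from CSV (could equally be Kafka or IoT streams).
raw = spark.read.option("header", True).csv("gs://example-bucket/raw/transactions/*.csv")

# Transform: clean types, drop bad rows, and aggregate per customer.
cleaned = (raw
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull()))
per_customer = cleaned.groupBy("customer_id").agg(
    F.sum("amount").alias("total_spend"),
    F.count("*").alias("num_transactions"))

# Load: write the target dataset for BI dashboards or AI feature pipelines.
per_customer.write.mode("overwrite").parquet("gs://example-bucket/curated/customer_spend/")
```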
Google Cloud Offerings
When transferring your Spark SQL workloads to the cloud, Google Cloud provides a variety of service alternatives, ranging from managed Spark services to infrastructure-as-a-service (IaaS) instances. Look at Google Cloud-managed services for serverless, integrated Spark setups. However, the IaaS option is the best choice for workloads where you want to build, expand, administer, and have greater control over your own Spark environment.
Google Cloud provides a wide variety of instance families, categorized by workload resource requirements: general-purpose, storage-optimized, compute-optimized, memory-optimized, and accelerator-optimized. As their names suggest, instances in these categories offer varying memory-to-core ratios, enhanced storage performance, or GPUs to satisfy diverse task requirements.
Furthermore, within instance families, you can choose between various vCPU-to-memory ratios using instances of the "highcpu" or "highmem" categories. High-memory instance types are better suited for large databases, memory-intensive workloads like Spark, and large-scale data transformations, improving performance and execution times.
To satisfy different performance and capacity needs, Google Cloud offers a range of block storage choices that strike a balance between price and performance. For instance, locally attached SSD options provide superior performance, whereas Standard Persistent Disks are a viable option for low-cost, standard performance requirements. Google Cloud provides design guidelines, price calculators, comparison guides, and more to help you select the best options for your workload.
Because Spark SQL requires a lot of memory, Intel chose to test on general-purpose "highmem" Google Cloud instances. But that wasn't the end of the decision-making process. Users also choose the series they want to use within the instance family, along with an instance size. Although older instance series with older CPUs are often less expensive, employing legacy hardware may cause performance issues. Additionally, you have a choice between CPU manufacturers such as AMD and Intel. Google provides N-, C-, E-, and T-series instances in the general-purpose family.
The N-series is recommended for applications such as batch processing, medium-traffic web apps, and virtual desktops. The C-series offers higher CPU frequencies and network bandwidth limits, making it ideal for workloads like network appliances, gaming servers, and high-traffic web applications. E-series instances are used for development, low-traffic web servers, and background operations. Lastly, the T-series is excellent for scale-out workloads and media transcoding.
Let's now examine the tests conducted on the N4 instance, which has 5th Gen Intel Xeon Scalable processors; an earlier N2 instance, which has previous-generation 3rd Gen Intel Xeon Scalable processors; and an N2D instance, the N-series option with AMD processors. Intel additionally tested a C3 instance with 4th Gen Intel Xeon Scalable CPUs and a C3D instance, the C-series option with AMD processors.
Performance Overview
This section examines the performance statistics collected to compare the different instance types and families evaluated.
Generation Over Generation
To demonstrate how your decisions might affect the performance and value of your workload, we will first examine just the instances that use Intel Xeon Scalable processors. The tests used a TPC-DS-based benchmark that simulates a general-purpose decision support system with 99 distinct database queries, comparing the Spark SQL instance clusters on how long it took a single user to execute all 99 queries once. Among the 80-vCPU instances, the N4-highmem-80 instances with 5th Gen Intel Xeon Scalable processors completed the task 1.13 times faster and delivered 1.15 times the performance per dollar compared to the N2-highmem-80 instances with older 3rd Gen Intel Xeon Scalable processors.
Compared to the C3-highmem-88 instance with 4th Gen Intel Xeon Scalable CPUs, the N4-highmem-80 instances were 1.18 times faster to finish the queries, with a commanding 1.38 times the performance per dollar. Intel selected the closest size, with 88 vCPUs, since the C3 series does not offer an 80-vCPU instance size.
These findings demonstrate that purchasing more recent instances with more recent Intel CPUs not only improves Spark SQL performance but also offers greater value. The performance of N4 instances is up to 1.38 times better than that of earlier instances for every dollar spent.
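To make the performance-per-dollar arithmetic concrete, here is a small sketch; the runtimes and hourly prices are placeholder values chosen to reproduce ratios like those above, not the actual test figures.

```python
# Illustrative performance-per-dollar calculation; runtimes and prices are
# placeholders, not the actual figures behind the 1.13x/1.15x results.
def relative_perf_per_dollar(base_runtime_s, new_runtime_s,
                             base_price_hr, new_price_hr):
    # Throughput is inversely proportional to runtime for a fixed query set.
    relative_perf = base_runtime_s / new_runtime_s
    relative_cost = new_price_hr / base_price_hr
    return relative_perf / relative_cost

# Example: a new instance finishes the 99 queries 1.13x faster and costs
# slightly less per hour than the baseline -> ~1.15x performance per dollar.
print(relative_perf_per_dollar(base_runtime_s=3600, new_runtime_s=3186,
                               base_price_hr=4.00, new_price_hr=3.93))
```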
Competitive
Having compared the N4 instances with 5th Gen Intel Xeon Scalable CPUs to previous Intel-based instances, we can now evaluate them against the N-series instances with AMD processors. We'll start with the older N2D series, which may include second- or third-generation AMD EPYC CPUs. The N4 instance with Intel CPUs completed the queries 1.30 times faster than the N2D instance, with 1.19 times the performance per dollar.
Lastly, Intel contrasted the C3D instance with 4th Gen AMD EPYC processors against the N4 instance with 5th Gen Intel Xeon Scalable processors. Because the C3D series does not offer an 80-vCPU instance, Intel chose the closest option, which has 90 vCPUs, giving the C3D instance a slight edge. Even with fewer vCPUs and somewhat lower raw performance, the N4 instance achieved 1.21 times the performance per dollar.
These findings suggest that, for Spark SQL workloads, Google Cloud instances with the newest Intel processors may provide the highest performance and value compared to instances with AMD processors or older Intel-based instances.
In conclusion
Integrating Apache Spark with newer Google Cloud instances that include 5th Gen Intel Xeon CPUs is a potent way to maximize workloads, improve performance, and save operating costs. The findings demonstrate that, despite their higher cost, these newer instances can provide much better value. For your Spark SQL applications, instances with the newest 5th Gen Intel Xeon Scalable processors are the logical option, since they can provide up to 1.38 times the performance per dollar.
Read more on Govindhtech.com
govindhtech · 1 month ago
Gluten And Intel CPUs Boost Apache Spark SQL Performance
Spark performance can be improved by using Intel CPUs and Gluten.
The tools and platforms that businesses use to evaluate the ever-increasing amounts of data coming in from devices, consumers, websites, and more are more crucial than ever. Efficiency and performance are essential, since big data analytics provides insights that are both business- and time-critical.
Workloads involving big data analytics on Apache Spark SQL often run continuously, necessitating excellent performance to accelerate time to insight. This means businesses can justify paying a bit more overall in order to get greater results for every dollar invested. The previous blog in this series looked at Spark SQL performance on Google Cloud instances.
Spark Enables Scalable Data Science
Apache Spark is widely used by businesses for large-scale SQL, machine learning and other AI applications, and batch and stream processing. To enable data science at scale, Spark employs a distributed paradigm: data is spread across many computers in clusters. Because of this dispersion, locating the data for any given query incurs some overhead. Query speed is a key component of every Spark workload, leading to quicker business decisions. This is particularly true for workloads such as machine learning training.
Utilizing Gluten to Accelerate Spark
Although Spark is a useful tool for expediting and streamlining massive data processing, businesses have been creating solutions to improve it further. Gluten, the Spark SQL execution engine from Intel's Optimized Analytics Package (OAP), is one such endeavor: it offloads compute-intensive data processing to native accelerator libraries.
Gluten uses Velox, Meta's open-source C++ generic database acceleration library with a vectorized SQL processing engine, to improve data processing systems and query engines. Gluten is a Spark plugin that serves as "a middle layer responsible for offloading the execution of JVM-based SQL engines to native engines." The Apache Gluten plugin together with Intel processor accelerators allows users to significantly increase the performance of their Spark applications.
It functions by converting the execution plans of Spark queries into Substrait, a cross-language data processing standard, and then sending the now-readable plans to native libraries via a JNI call. The native engine (which also manages native memory allocation) constructs, loads, and handles the execution plan efficiently before sending the results back to Gluten as a columnar batch. Gluten then returns the data to the Spark JVM as an ArrowColumnarBatch.
Gluten employs a shim layer to support different Spark versions and a fallback mechanism that executes vanilla Spark for unsupported operators. It captures native engine metrics and shows them in the Spark user interface.
While offloading as many compute-intensive data processing components to native code as feasible, the Gluten plugin makes use of Spark's own architecture, control flow, and JVM code. Existing DataFrame APIs and applications function as before, only faster, since it requires no modifications on the query end.
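As a sketch of what enabling the plugin can look like in practice, the PySpark session below wires in Gluten with the Velox backend. The class names, jar path, and settings are assumptions that vary by Gluten and Spark version (older releases used io.glutenproject.GlutenPlugin), so verify them against the Gluten documentation for your build.

```python
# Hypothetical Gluten + Velox enablement for PySpark; verify class names,
# jar locations, and options against your Gluten/Spark versions.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
    .appName("gluten-sketch")
    # Load the Gluten bundle jar built for your Spark version.
    .config("spark.jars", "/opt/gluten/gluten-velox-bundle.jar")
    .config("spark.plugins", "org.apache.gluten.GlutenPlugin")
    # Gluten's columnar shuffle manager replaces the default sort shuffle.
    .config("spark.shuffle.manager",
            "org.apache.spark.shuffle.sort.ColumnarShuffleManager")
    # Velox manages native memory off-heap, so off-heap memory must be enabled.
    .config("spark.memory.offHeap.enabled", "true")
    .config("spark.memory.offHeap.size", "8g")
    .getOrCreate())

# Queries run unchanged: supported operators are offloaded to Velox, and
# unsupported ones fall back to vanilla Spark automatically.
spark.sql("SELECT count(*) FROM range(1000000)").show()
```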
Observed Performance Enhancements
This section examines test findings that show how performance can be enhanced by using Gluten in your Spark applications. One workload, based on TPC-DS, simulates a general-purpose decision support system with 99 distinct database queries. The other, based on TPC-H, simulates a general-purpose decision support system with ten distinct database queries. For both, we compared the time it took a single user to finish each query once within the Spark SQL cluster.
Fourth Generation Intel Xeon Scalable Processors
Let's start by examining how adding Gluten to Spark SQL on servers with 4th Gen Intel Xeon Scalable processors affects performance. On the TPC-H-like workload, adding Gluten increased performance by 3.12 times, enabling the system to execute the ten database queries over three times faster. On the TPC-DS-like workload, Gluten more than quadrupled the pace at which all 99 database queries were completed. Because of these enhancements, decision-makers get answers more quickly, proving the benefit of incorporating Gluten into your Spark SQL operations.
Fifth Generation Intel Xeon Scalable Processors
Now let's investigate how Gluten speeds up Spark SQL applications on servers equipped with 5th Gen Intel Xeon Scalable processors. With speedups as high as 3.34 times when utilizing Gluten, the gains were even bigger than on the servers with older CPUs. If your data center has servers of this generation, incorporating Gluten into your environment will help you get more out of your technology and reduce time to insight.
Cloud Implications
Even though these tests ran in a data center on bare-metal hardware, they amply illustrate how Gluten can boost performance in the cloud as well. Running Spark in the cloud may let you take advantage of further performance enhancements by using Gluten.
In conclusion
Rapid analysis completion is essential to the success of your business, whether your Spark SQL workloads run on servers with 5th Gen Intel Xeon Scalable processors or an older generation. By shifting JVM data processing to native libraries optimized for Intel instruction sets, Gluten can take advantage of the speed improvements Intel processors provide.
According to these tests, you can easily double or even triple the speed at which your servers execute database queries by integrating the Gluten plugin into Spark SQL workloads. Offering up to 3.34x the performance, Gluten can help your company optimize its data analytics workloads.
Read more on Govindhtech.com
govindhtech · 2 months ago
5th Gen Intel Xeon Scalable Processors Boost SQL Server 2022
5th Gen Intel Xeon Scalable Processors
While speed and scalability have always been essential to databases, contemporary databases also need to serve AI and ML applications at higher performance levels. Real-time decision-making, now far more widespread, demands databases with ever-faster queries. Databases and the infrastructure that powers them are usually the first modernization targets for businesses that need to support analytics. This post demonstrates the substantial speed benefits of running SQL Server 2022 on 5th Gen Intel Xeon Scalable Processors.
OLTP/OLAP Performance Improvements with 5th gen Intel Xeon Scalable processors
The HammerDB benchmark quantifies OLTP using New Orders per minute (NOPM) throughput. Comparing 5th Gen Intel Xeon processors to 4th Gen Intel Xeon processors, Figure 1 illustrates OLTP performance gains of up to 48.1% NOPM, while Online Analytical Processing (OLAP) queries ran up to 50.6% faster.
Another advantage is the enhanced CPU efficiency of the 5th Gen Intel Xeon processors, demonstrated by their 83% OLTP and 75% OLAP utilization. Compared to the 5th generation, the prior generation requires 16% more CPU resources for the OLTP workload and 13% more for the OLAP workload.
The Value of Faster Backups
Faster backups improve uptime, simplify data administration, and enhance security, among other things. Running SQL Server 2022 Enterprise Edition on an Intel Xeon Platinum processor with Intel QAT enables backups up to 2.72x faster under idle loads and 3.42x faster under peak loads.
For perspective on the comparisons, the longest Intel QAT backup times among 5th Gen Intel Xeon Scalable Processors occur because the Gold model includes fewer backup cores than the Platinum model.
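As an illustration of QAT-accelerated backup, the sketch below shows the relevant SQL Server 2022 commands issued from Python via pyodbc. The server name, credentials, database, and file path are placeholders; enabling hardware offload requires an instance restart, and the backup falls back to software mode if no QAT device is present.

```python
# Sketch: QAT-accelerated backup in SQL Server 2022 via pyodbc.
# Connection string, database name, and backup path are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "UID=sa;PWD=...;TrustServerCertificate=yes",
    autocommit=True)  # backup/configuration commands must not run in a transaction
cur = conn.cursor()

# One-time setup (takes effect after an instance restart):
cur.execute("EXEC sp_configure 'show advanced options', 1;")
cur.execute("RECONFIGURE;")
cur.execute("EXEC sp_configure 'hardware offload enabled', 1;")
cur.execute("RECONFIGURE;")
cur.execute("ALTER SERVER CONFIGURATION SET HARDWARE_OFFLOAD = ON (ACCELERATOR = QAT);")

# QAT-compressed backup; uses software QAT mode when no accelerator exists.
cur.execute("BACKUP DATABASE MyDatabase TO DISK = N'D:\\backups\\MyDatabase.bak' "
            "WITH COMPRESSION (ALGORITHM = QAT_DEFLATE);")
while cur.nextset():  # drain the informational messages the backup emits
    pass
```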
With an emphasis on attaining near-real-time latencies, optimizing query speed, and delivering the full potential of scalable warehouse systems, SQL Server 2022 offers a number of new features. It’s even better when it runs on 5th gen Intel Xeon Processors.
Figure: solution snapshot of SQL Server 2022 running on 4th Gen Intel Xeon Scalable CPUs, an industry-leading data platform for performance and security.
SQL Server 2022
The well-known performance and dependability of 5th Gen Intel Xeon Scalable Processors can greatly improve your SQL Server 2022 database.
The following guide examines crucial elements and tactics to get the most from your setup:
Hardware Points to Consider
Choose a processor: Select an Intel Xeon with many cores and fast clock speeds. Prefer models with Intel Turbo Boost and Intel Hyper-Threading Technology for greater performance.
Memory: Have enough RAM for your database size and workload. Sufficient RAM enhances query performance and lowers disk I/O.
Storage: To reduce I/O bottlenecks, choose high-performance storage options like SSDs or fast HDDs with RAID setups.
Modification of Software
Database Design: Make sure your query execution plans, indexes, and database schema are optimized. To guarantee effective data access, evaluate and improve your design on a regular basis.
Configuration Settings: Match SQL Server 2022 configuration options, such as maximum worker threads, maximum server memory, and I/O priority, to your workload and hardware capabilities (a configuration sketch follows after this checklist).
Query tuning: To find performance bottlenecks and improve queries, use tools like SQL Server Management Studio or SQL Server Profiler. Consider methods such as parameterization, indexing, and query hints.
Features Exclusive to Intel
Use Intel Turbo Boost Technology to dynamically raise clock speeds for demanding tasks.
With Intel Hyper-Threading Technology, you may run many threads on a single core, which improves performance.
Intel QuickAssist Technology (QAT): Enhance database performance by speeding up encryption and compression/decompression operations.
Optimization of Workload
Workload balancing: To prevent resource congestion, divide workloads among several instances or servers.
Partitioning: To improve efficiency and management, split up huge tables into smaller sections.
Indexing: To expedite the retrieval of data, create the proper indexes. Columnstore indexes are a good option for workloads involving analysis.
Observation and Adjustment
Performance monitoring: Track key performance indicators (KPIs) and pinpoint areas for improvement with tools like SQL Server Performance Monitor.
Frequent Tuning: Regularly monitor and adjust your database to accommodate shifting workloads and hardware requirements.
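As a rough illustration of applying the configuration and indexing suggestions in the checklist above, the sketch below uses Python with pyodbc; the connection string, memory value, and table definition are hypothetical placeholders to adapt to your hardware and schema.

```python
# Sketch: applying two common SQL Server settings and a columnstore index
# from the checklist above; all values and names are illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "UID=sa;PWD=...;TrustServerCertificate=yes",
    autocommit=True)
cur = conn.cursor()

# Cap SQL Server's memory so the OS keeps headroom
# (example: 96 GB on a 128 GB host).
cur.execute("EXEC sp_configure 'show advanced options', 1;")
cur.execute("RECONFIGURE;")
cur.execute("EXEC sp_configure 'max server memory (MB)', 98304;")
cur.execute("EXEC sp_configure 'max worker threads', 0;")  # 0 = automatic sizing
cur.execute("RECONFIGURE;")

# Columnstore index for analytics-heavy tables, as suggested above
# (table and column names are hypothetical).
cur.execute("CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Sales_Analytics "
            "ON dbo.Sales (OrderDate, CustomerID, Amount);")
```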
SQL Server 2022 Pricing
SQL Server 2022 cost depends on edition and licensing model. SQL Server 2022 has three main editions:
SQL Server 2022 Standard
Description: For small to medium organizations needing core database features for data and application management.
Licensing
Cost per core: ~$3,586.
Server + CAL (Client Access License): ~$931 per server, ~$209 per CAL.
Features: Basic data management, analytics, reporting, integration, and limited virtualization.
SQL Server 2022 Enterprise
Description: Designed for large companies with heavy workloads that need extensive features, scalability, and performance.
Licensing
Cost per core: ~$13,748.
Features: High availability, in-memory performance, business intelligence, machine learning, and unlimited virtualization.
SQL Server 2022 Express
Use: Free, lightweight edition for tiny applications, learning, and testing.
License: Free.
Features: Basic capability, databases up to 10 GB, restricted memory and CPU.
Models for licensing
Per Core: Processor-core-based licensing, recommended for big, high-demand environments.
Server + CAL (Client Access License): For smaller environments, each server needs a license and each connecting user/device needs a CAL.
In brief
Faster databases can help firms meet their technical and business objectives, because databases are the main engines for analytics and transactions. Faster backups of those databases can also improve business continuity.
Read more on govindhtech.com
govindhtech · 2 months ago
EC2 I4i Instances: Increasing Efficiency And Saving Money
Efficient and economical search capabilities are essential for developers and organizations alike in today’s data-driven environment. The underlying infrastructure may have a big influence on costs and performance, whether it’s used for real-time search functions or complicated searches on big databases.
Businesses need to strike a balance between budgetary limitations and performance, data scientists need to access data efficiently for their models, and developers need to ensure their apps are both dependable and speedy.
For applications requiring a lot of storage, Amazon EC2 I3 instances powered by Intel Xeon Scalable Processors and I4i instances powered by 3rd Gen Intel Xeon Scalable processors offer a solid mix of computation, memory, network, and storage capabilities. Cloud architects and clients may choose the best option that balances cost and performance by contrasting these two storage-optimized instance types.
Boosting Throughput and Efficiency with OpenSearch
Developers, data scientists, and companies looking for robust search and analytics capabilities are fond of OpenSearch, an open-source search and analytics package. It is a flexible tool because of its sophisticated search features, strong analytics, and capacity for handling massive data volumes with horizontal scalability. Many firms use OpenSearch because it provides transparency, flexibility, and independence from vendor lock-in.
Because of OpenSearch's widespread usage, Intel chose to thoroughly examine its histogram aggregation speed and cost on AWS's storage-optimized I3 and I4i instances. Professionals from a variety of backgrounds who want to maximize productivity and minimize expenses in OpenSearch deployments must understand the distinctions between these instances.
I4i instances powered by 3rd generation Intel Xeon Scalable processors provide:
Quicker memory
Greater cache size
Improved IPC performance brought about by new architecture and processes
Testing AWS Instances Powered by Intel
Using the OpenSearch Benchmark tool, Intel tested the instances' cost-effectiveness and performance, paying particular attention to two important performance metrics:
Histogram aggregation throughput: The number of operations per second, which reveals how well the instances can manage big amounts of data.
Resource utilization: Evaluates how well CPU, memory and storage are used; this affects scalability and total cost.
Intel utilized data from yellow cab trips in New York City in 2015 (from the nyc_taxis workload) to assess the instances’ performance in managing demanding search and aggregation operations. With 165 million documents and 75 GB in total, this dataset offered a significant and realistic test situation.
Intel used Amazon Web Services (AWS) cloud storage-optimized (I) instance types for the investigation. The cluster was set up with three data nodes, one coordinating node, and one cluster-management node. To generate the workload, a separate client node was configured with the benchmark application taken from the OpenSearch Benchmark repository.
To maximize Java performance, Intel set the Java Virtual Machine (JVM) heap size to 50% of the RAM available on each node. To better fit OpenSearch's I/O patterns, it also changed the translog flush threshold size from the default 512 MB to a fourth of the heap size. To facilitate more effective indexing operations, the index buffer size was also raised from its default 10% to 25% of the Java heap size.
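To make the measured operation concrete, the sketch below runs a histogram aggregation against the nyc_taxis index using the opensearch-py client, along with the translog tuning described above. The endpoint, setting values, and field choice are assumptions for illustration, not the benchmark's exact configuration.

```python
# Sketch: translog tuning and a histogram aggregation on the nyc_taxis index
# with opensearch-py; endpoint, auth, and exact values are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

# Raise the translog flush threshold, per the tuning described above.
client.indices.put_settings(
    index="nyc_taxis",
    body={"index": {"translog.flush_threshold_size": "4gb"}})

# Histogram aggregation of the kind the benchmark measures: bucket trips
# by total fare amount in $5 buckets.
resp = client.search(index="nyc_taxis", body={
    "size": 0,
    "aggs": {
        "fares": {
            "histogram": {"field": "total_amount", "interval": 5}
        }
    }})
for bucket in resp["aggregations"]["fares"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```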
The main objective was finding the best AWS instance type for OpenSearch jobs, with an emphasis on both affordability and raw performance. To isolate the effect of instance type on performance, the benchmark tests were conducted in a controlled environment with consistent storage and networking characteristics. The performance-per-dollar measure was computed using on-demand pricing from the AWS region where all instances were deployed.
Results for Cost-Effectiveness and Performance
While the I4i instances use the more advanced 3rd Gen Intel Xeon Scalable CPUs, the I3 instances are powered by earlier Intel Xeon Scalable CPUs. This difference in processing power is a central component of the comparison study across the three instance sizes: 2xlarge, 4xlarge, and 8xlarge.
To quantify the performance differences across instance types, Intel standardized the throughput data, using the I3 instances as the baseline for each size. This method made it possible to quantify the I4i series' relative performance improvements in a straightforward and consistent way.
Intel found that the I4i instances, equipped with 3rd Gen Intel Xeon Scalable processors, produced around 1.8 times the throughput of the I3 instances in all cases. This translates to a generation-over-generation improvement in OpenSearch aggregation throughput of up to 85%.
In addition to a notable speed benefit, Intel observed that the I4i machines allowed almost 60% more queries per dollar spent, on average, than the earlier I3 instances. For businesses trying to control their cloud expenditures efficiently, this is a major benefit.
AWS I4i instances
AWS I4i instances, based on 3rd Gen Intel Xeon Scalable processors, provide superior performance and a more potent mix of value and performance than I3 instances. For enterprises seeking to maximize their OpenSearch installations, grow their business, and serve more clients without incurring additional expense, the newer I4i instance is clearly the better option. The Amazon OpenSearch Service offers both of the instances covered in this article.
Read more on govindhtech.com
govindhtech · 3 months ago
Dell PowerEdge HS5620 System’s Cooling Design Advantages 
Cloud Scale PowerEdge HS5620 Server
Open-source, optimised, and simplified: To reduce additional expenses and overhead, the 2U, 2 socket Dell PowerEdge HS5620 offers customised configurations that grow with ease.
Specifically designed for you: The most widely used IT applications from cloud service providers are optimised for the Dell PowerEdge HS5620, allowing for a quicker time to market.
Optimisation without the cost: With this scalable server, technology optimisation is provided without the added cost and hassle of maintaining extreme settings.
You gain simplicity for large-scale, heterogeneous SaaS, PaaS, and IaaS datacenters with customisation performance, I/O flexibility, and open ecosystem system management.
Perfect for cloud native storage intensive workloads, SDS node virtualisation, and medium VM density
For quicker and more precise processing, add up to two 5th generation Intel Xeon Scalable processors with up to 32 cores.
Utilise up to 16 DDR5 RDIMMS to speed up in-memory workloads at 5600 MT/sec.
Storage options include:
Up to 8x 2.5-inch NVMe
Up to 12x 3.5-inch SAS/SATA
Up to 16x 2.5-inch SAS/SATA
Open Server Manager, which is based on OpenBMC, and iDRAC are two solutions for embedded system management.
Choose from a large assortment of SSDs and COMM cards with verified vendor firmware to save time.
PowerEdge HS5620
Open-platform, cloud-scale servers aimed at cloud service providers.
Open, optimised, and simplified
The newest Dell PowerEdge HS5620 is a 2U, two-socket rack server designed specifically for the most widely used IT applications by cloud service providers. With this scalable server, technology optimisation is provided without the added cost and hassle of maintaining extreme settings. You gain simplicity for large-scale, heterogeneous SaaS, PaaS, and IaaS datacenters with customisable performance, I/O flexibility, and open ecosystem system management.
Crafted to accommodate your workloads
Efficient performance with a maximum of two 5th generation or 4th generation Intel Xeon Scalable processors, each with up to 32 cores.
Use up to 16 DDR5 RDIMMs to speed up in-memory applications at up to 5200 MT/sec.
Support heavy storage workloads.
Personalised to Meet Your Needs
Scalable configurations.
Workloads validated to reduce additional expenses and overhead.
Dell Open Server Manager, which is based on OpenBMC, offers an option for open ecosystem administration.
Choose from a large assortment of SSDs and COMM cards with verified vendor firmware to save time.
Cyber Resilient Design for Zero Trust Operations & IT Environment
Every stage of the PowerEdge lifecycle, from the factory-to-site integrity assurance and protected supply chain, incorporates security. End-to-end boot resilience is anchored by a silicon-based root of trust, and trustworthy operations are ensured by role-based access controls and multi-factor authentication (MFA).
Boost productivity and expedite processes through self-governing cooperation
For PowerEdge servers, the Dell OpenManage systems management portfolio offers a complete, effective, and safe solution. Using iDRAC and the OpenManage Enterprise console, streamline, automate, and centralise one-to-many management. For open ecosystem system management, the HS5620 provides Open Server Manager, which is based on OpenBMC.
Durability
The PowerEdge portfolio is designed to be manufactured, delivered, and recycled in ways that help cut your operating expenses and lessen your carbon impact. This includes recycled materials in products and packaging as well as smart, inventive options for energy efficiency. With Dell Technologies Services, Dell even simplifies the responsible retirement of outdated systems.
With Dell Technologies Services, you can sleep easier
Optimise your PowerEdge servers with a wide range of services, supported by 60K+ employees and partners across 170 locations, including consulting, data migration, the ProDeploy and ProSupport suites, and more. Cloud Scale Servers are available only to a limited number of clients under the Hyperscale Next initiative.
An in-depth examination of the benefits of the Dell PowerEdge HS5620 system cooling design
Understanding the systems' performance in each test situation requires analysing their thermal designs. Servers use a variety of design elements, such as motherboard layout, to keep components cool. Careful positioning on the motherboard can prevent sensitive components from overheating one another, fans maintain airflow, and a well-designed chassis should shield components from hot air. Below, they examine these design elements in the Supermicro SYS-621C-TN12R and Dell PowerEdge HS5620 servers.
Figure: the Supermicro SYS-621C-TN12R motherboard configuration examined, with component labels and arrows showing airflow direction from the fans; blues and purples represent colder air, and reds, oranges, and yellows represent hotter air.
Motherboard layout
The positioning of the M.2 NVMe modules on the Supermicro system's motherboard presented special challenges. For instance, because the idle SSD sat immediately downstream of a processor under load in the second and third test situations, its temperature climbed as well. Furthermore, the power distribution module (PDU) connecting the two PSUs to the rest of the system lacked a dedicated fan on the right side of the chassis. Instead, the Supermicro design depended on airflow from the fans integrated into the PSUs at the rear of the chassis.
Although they did not see a PDU failure, the BMC recorded a PSU failure during the second fan-failure scenario, highlighting the disadvantage of this design. The Dell PowerEdge HS5620 motherboard, on the other hand, had a more sophisticated architecture. The processor cooling modules employed heat pipes on the heat sinks for more efficient cooling, and because the PDU was built into the motherboard, ventilation of its components was improved. The setup they tested used both a Dell HPR Gold and a Dell HPR Silver fan to cool the PDU components.
Summary
Stay cool under pressure with the Dell PowerEdge HS5620 to boost productivity. Elevating the temperature of your data centre can significantly improve energy efficiency and reduce cooling expenses for your company. With servers built to withstand both elevated ambient temperatures and high temperatures brought on by unanticipated events, your company can keep providing the performance that your clients and apps demand.
A Dell PowerEdge HS5620 and a Supermicro SYS-621CTN12R were subjected to an intense floating-point workload in three different scenario types. These scenarios included a fan failure, an HVAC malfunction, and regular operations at 25°C. The Dell server did not encounter any component warnings or failures.
The Supermicro server, on the other hand, encountered component warnings and failures across the three scenario types, which made the system unusable in the last two tests. After closely examining and comparing each system, they concluded that the motherboard architecture, fans, and chassis of the Dell PowerEdge HS5620 server offered cooling-design advantages.
In terms of server cooling design and enterprises seeking to satisfy sustainability goals by operating hotter data centres, the Dell PowerEdge HS5620 is a competitive option that can withstand greater temperatures during regular operations and unplanned breakdowns.
Read more on govindhtech.com
govindhtech · 6 months ago
New Amazon EC2 C7i & C7i-flex Instances: Power & Flexibility
Amazon EC2 C7i and C7i-flex instances
Compute-optimized instances utilizing 4th Generation Intel Xeon Scalable processors. The C7i-flex and EC2 C7i instances on Amazon Elastic Compute Cloud (Amazon EC2) are next-generation compute-optimized instances with a 2:1 memory-to-vCPU ratio, powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids).
These custom-processor-powered EC2 instances are exclusive to AWS and deliver up to 15% greater performance than equivalent Intel processors used by other cloud providers, making them the best-performing comparable Intel processors in the cloud.
For most compute-intensive tasks, the simplest approach to obtain price performance gains is with C7i-flex instances. When compared to C6i instances, they provide price performance that is up to 19% better. C7i-flex instances are an excellent initial option for applications that don’t use all of the computing resources because they come in the most common sizes, ranging from large to 8xlarge, with up to 32 vCPUs and 64 GiB of RAM.
The most popular compute-intensive workloads, such as web and application servers, databases, caches, Apache Kafka, and Elasticsearch, may be effortlessly operated on C7i-flex instances.
For workloads requiring bigger instance sizes (up to 192 vCPUs and 384 GiB memory) or persistently high CPU utilization, EC2 C7i instances offer advantages in terms of pricing and performance. Batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding are among the workloads that EC2 C7i instances excel at. In comparison to C6i instances, C7i instances offer up to 15% better pricing performance.
Reduce costs by utilizing new Amazon EC2 Flex instances
Many users don't use an EC2 instance's entire compute capacity, which means they pay for resources they don't need. For most compute-intensive tasks, Amazon EC2 C7i-flex instances are the simplest option for obtaining better price performance. Flex instances can scale up to full compute performance the majority of the time, making optimal use of compute resources while balancing performance and cost.
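As a small illustration, launching a C7i-flex instance works like any other instance type; the boto3 sketch below uses placeholder AMI, key pair, and subnet IDs that you would replace with values from your own environment.

```python
# Sketch: launching a c7i-flex.large instance with boto3; the AMI ID,
# key pair, and subnet are placeholders for your own environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c7i-flex.large",     # 2 vCPUs, 4 GiB (the 2:1 ratio)
    KeyName="my-key-pair",
    SubnetId="subnet-0123456789abcdef0",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```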
Advantages
Reduced expenses
C7i-flex instances are the simplest way to optimize costs for most compute-intensive tasks, offering up to 19% better price performance than C6i instances. EC2 C7i instances outperform C6i instances by up to 15% in price performance, and the additional larger instance sizes C7i offers allow for consolidation and the execution of bigger, more demanding workloads.
Flexibility and choice
C7i-flex and EC2 C7i instances extend the most extensive and varied range of EC2 instances available on AWS. C7i-flex offers the five most popular sizes, from large to 8xlarge. C7i offers eleven sizes, including two bare-metal sizes (c7i.metal-24xl and c7i.metal-48xl), with varying vCPU, memory, networking, and storage capacities.
Optimum efficiency using resources
C7i-flex and EC2 C7i instances are built on the AWS Nitro System, which combines a lightweight hypervisor with specialized hardware. Nitro improves overall performance and security by giving your instances nearly all of the host hardware's compute and memory resources. EC2 instances based on the Nitro System can deliver workload throughput performance over 15% higher than other cloud providers using the same CPU.
Features
Driven by Intel Xeon Scalable Processors of the 4th Generation
Custom 4th Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz (maximum core turbo frequency of 3.8 GHz) power the C7i-flex and EC2 C7i instances. These custom CPUs deliver the best performance among comparable Intel processors and are available only on AWS. Both instance types include support for Intel Total Memory Encryption (TME), always-on memory encryption.
Superior performance interfaces
C7i-flex and EC2 C7i instances use DDR5 memory, which offers more bandwidth than the memory in C6i instances. C7i-flex instances support up to 12.5 Gbps of networking bandwidth and up to 10 Gbps of bandwidth for Amazon Elastic Block Store (Amazon EBS).
Up to 50 Gbps of networking bandwidth and 40 Gbps of bandwidth to Amazon EBS are supported by EC2 C7i instances. Furthermore, EC2 C7i allows you to attach up to 128 EBS volumes to an instance, as opposed to C6i’s limit of 28 EBS volume attachments. Elastic Fabric Adapter (EFA) in the metal-48xl and 48xlarge sizes is also supported by EC2 C7i instances.
New accelerators
4th Generation Intel Xeon Scalable processors include four new integrated accelerators. Intel Advanced Matrix Extensions (AMX), available on both C7i-flex and EC2 C7i instances, speed up matrix multiplication operations for applications like CPU-based machine learning.
Available exclusively on EC2 C7i bare-metal sizes, Data Streaming Accelerator (DSA), In-Memory Analytics Accelerator (IAA), and Intel QuickAssist Technology (QAT) facilitate efficient data offloading and acceleration, enhancing performance for databases, encryption and compression, and queue-management workloads.
Built on the AWS Nitro System
The AWS Nitro System can be assembled in a variety of ways, giving AWS the flexibility to quickly construct EC2 instance types with an ever-expanding range of networking, storage, compute, and memory capabilities. Nitro Cards offload and accelerate I/O functions, improving overall system performance.
The great majority of apps do not constantly operate at 100% CPU utilization. Consider a web application: it seldom uses a server's compute at full capacity, usually alternating between times of high and low demand.
One simple and affordable approach to running such workloads is the Amazon EC2 M7i-flex instances, first introduced in August. These less expensive versions of the Amazon EC2 M7i instances offer the same next-generation specs for general-purpose computing in the most popular sizes, with the extra benefit of better price/performance when you don't always need full compute capacity. They are therefore an excellent first option if you want to lower your operating costs without sacrificing performance standards.
Because customers responded so favourably to this flexibility, AWS now provides Amazon EC2 C7i-flex instances, which offer comparable price/performance benefits and lower prices for compute-intensive applications. These are less expensive versions of the Amazon EC2 C7i instances that deliver a baseline level of CPU performance with the ability to scale up to full compute performance 95% of the time.
Which is better, C7i-flex or C7i?
The compute-optimized C7i-flex and C7i instances are driven by custom 4th Generation Intel Xeon Scalable processors available only on Amazon Web Services (AWS). They offer up to 15% better performance than comparable x86-based Intel processors used by other cloud providers.
They are perfect for running applications including web and application servers, databases, caches, Apache Kafka, and Elasticsearch. They both use DDR5 memory and have a 2:1 memory to vCPU ratio.
So when would you choose one over the other? Here are three factors to consider when selecting the best option for you.
Pattern of usage
When you don’t need to use all of the computational resources, EC2 flex instances are an excellent choice.
Efficient use of compute resources can result in five percent lower prices and five percent better price performance. For compute-intensive workloads, C7i-flex instances should be the first option, because they are generally a great fit for the majority of applications.
If your application requires constantly high CPU utilization, you should use EC2 C7i instances instead. Workloads such as batch processing, distributed analytics, ad serving, high performance computing (HPC), highly scalable multiplayer gaming, and video encoding are better suited to them.
Sizes of instances
C7i-flex instances come in the most popular sizes, up to a maximum of 8xlarge, which suit most workloads.
If you require greater specs, look into the larger C7i instances, which come in 12xlarge, 16xlarge, 24xlarge, and 48xlarge sizes plus two bare-metal options (c7i.metal-24xl and c7i.metal-48xl).
Network bandwidth
Larger instance sizes also offer higher network and Amazon Elastic Block Store (Amazon EBS) bandwidths, so depending on your needs you might require one of the larger C7i instances. For the majority of workloads, though, C7i-flex instances should be adequate, with network bandwidth of up to 12.5 Gbps and Amazon EBS bandwidth of up to 10 Gbps.
Available Regions: To find out whether C7i-flex instances are offered in the regions of your choice, see AWS Services by Region.
Purchase options: C7i-flex and EC2 C7i instances are available in On-Demand, Savings Plan, Reserved Instance, and Spot form. Dedicated Hosts and Dedicated Instances are also offered for C7i.
Read more on govindhtech.com