#TechnologyTrends Intel
Text
Agilex 5 E-Series with Power-Optimized Edge Performance
Intel Agilex 5 FPGA
Agilex 5 E-Series
Altera’s latest mid-range FPGAs, the Agilex 5 E-Series, are now supported by the recently released Quartus Prime Software v24.1, which is available for download now. Intel is pleased to announce that, with the introduction of Altera’s state-of-the-art Quartus Prime Software, it is now simpler than ever, and completely free of charge, to take advantage of the unmatched capability of Altera’s Agilex 5 E-Series FPGAs.
Intel Agilex 5
Free License: Remove barriers to entry. With Quartus Prime Pro Edition Software v24.1, you can target the newest E-Series devices at no cost, enabling you to innovate without limits.
Streamlined Design Flow: Quartus Prime Software offers a smooth, intellectual property (IP)-centric design flow. Easily customizable design examples streamline getting started, so you can concentrate on what really matters: your innovative ideas.
New Embedded Processing Capabilities: Take advantage of the dual-core Arm Cortex-A76 and dual-core Arm Cortex-A55 of the Agilex 5 SoC FPGA, the industry’s first asymmetric processing complex, supported by the Simics simulator. For smaller embedded FPGA applications, Agilex 5 FPGAs can also be combined with the feature-rich, performance- and area-optimized Nios V soft processor. Altera also collaborates with a number of partners, including Arm, Wind River, Siemens, Ashling, MathWorks, and many more, who provide a top-notch suite of tools to improve your FPGA and embedded development experience.
Comprehensive Intellectual Property (IP) Portfolio: Shorten time to market with a proven IP portfolio for Agilex 5 FPGAs, much of it free. Hard IP solutions for PCI Express, Ethernet, and memory protocols (including LPDDR5 support) reduce logic utilization and ease design timing closure. The 10G Ethernet MAC with PCS guarantees deterministic, synchronized communication, enhanced by Time-Sensitive Networking (TSN) features.
This version includes the Video and Vision Processing (VVP) portfolio IP for Agilex 5 FPGAs, which enables the entire portfolio of video solutions, as well as additional IPs supporting MIPI D-PHY and MIPI CSI-2. Begin developing your Agilex 5 FPGA designs and rely on additional validated advanced features like JESD204C IP, ORAN IP, LDPC IP, CPRI, and eCPRI among others.
Unprecedented Capabilities: With Quartus Prime Pro Edition Software v24.1, Altera FPGAs can be programmed with cutting-edge capabilities such as the following.
Agilex 5 datasheet
Quartus Software Exploration Dashboard (Preproduction)
Coordinate numerous projects running concurrently in distinct instances of Quartus Prime software, and view their compilation and timing results in one place.
New Compilation Features:
Precompiled component (PCC) generation flow: Shorten synthesis compile time by using the new precompiled component (PCC) generation flow during compilation.
Start the simulator from the Quartus Prime GUI: Launch simulations directly from the Quartus Prime GUI via the convenient “Tools ➤ Run Simulation” menu item, removing extra steps to streamline your workflow and save time.
Features and Improvements of Synthesis
Use the RTL Linter Tool to convert older RTL files to Verilog/VHDL standards with ease, optimise RAM inference for better speed and resource use, and reduce warnings in error-free RTL modules to increase readability while developing.
Improved Timing Indicator
Gain more flexibility in timing analysis and SDC scripting with new scripting options; guarantee design integrity with sign-off capabilities for particular combinational loops; and learn more about timing characteristics with enhanced Chip Planner visualisation of asynchronous clock domain crossings (CDCs).
Innovations in Advanced Link Analysers
Link Builder: Use the brand-new Link Builder tool to quickly and easily build high-speed serial connections. Streamline the connection creation procedure by automatically generating schematics and importing channels and devices.
High DPI Monitor Assistance: Benefit from improved readability and display quality thanks to GUI scaling for high DPI displays and automated DPI recognition. Enjoy enhanced usability and user experience.
Enhanced Data Viewer: With improvements to the Data Viewer, analyse forward error correction (FEC) code word faults more effectively. Error outcomes may be easily interpreted and analysed for more efficient error correction techniques.
Enhancements to Simulation Time:
An easy-to-use UI automates the import of devices, channels, and schematics. Agilex 7 IP offers faster simulation times with the updated Q run and FEC models.
Features:
R-Tile: Transaction Layer (TL) multi-channel DMA IP (AXI) up to Gen5 x16. For flexibility in incorporating third-party PCIe switch IP, use the bypass mode. A new design example for Gen5 x4 endpoint configuration is also provided.
F-Tile: Reduce simulation time in PIPE mode using FastSIM, with Ubuntu driver support for all example designs. Compatibility is increased to up to 64 embedded endpoints. For greater coverage, the Debug Tool Kit (DTK) was added to the switch IP.
Become a Part of the Community: Hua Xue, VP & GM Software Engineering, remarked, “We’re excited to offer Quartus Prime Software v24.1, a crucial milestone in FPGA design.”
“Now, engineers everywhere can easily access the unmatched potential of Agilex 5 FPGAs E-Series.” Quartus’s simplified design process and these cutting-edge technologies allow engineers to reach their full potential for innovation. With their state-of-the-art processing capabilities, Agilex 5 devices transform embedded FPGA applications. These are enhanced by Quartus’s vast IP portfolio, which includes a variety of solutions like Ethernet, PCI Express, memory protocols like LPDDR5, support for MIPI D-PHY, CSI-2, and a suite of video solutions, among many other IPs.
The Quartus Exploration Dashboard offers a user-friendly interface and optimization recommendations, which further improve the design exploration process. With Quartus v24.1’s open access to E-Series FPGAs and a simplified design pipeline, we’re pushing both ease of use and fast compiler technologies to enable engineers and innovators to unleash their creativity like never before.”
Intel Agilex 5 price
Normally sold to corporations and incorporated into larger systems, Intel Agilex 5 FPGAs do not have a set price available to the general public. Pricing depends on a number of variables, including:
Model specifics: The Agilex 5 family has two distinct series (D and E) with differing logic cell characteristics and capacities. Models with additional features will cost more.
Volume: If you buy in large quantities, you may be able to negotiate a lower price with distributors or directly with Intel.
Distributor: Price structures may vary significantly throughout distributors.
Read more on Govindhtech.com
#Agilex#intelagilex#agilex5#intelagilex5#govindhtech#FPGA#news#technews#technology#technologynews#TechnologyTrends Intel
Photo
Assassin's Creed Valhalla game review, Price and Buy
Assassin's Creed Valhalla is a legendary Viking quest for glory. You can raid your enemies, grow your settlement, and build your political power.
Open World: Sail from the mysterious and harsh shores of Norway to the beautiful but forbidding kingdoms of England and beyond.
Viking Saga: Advanced RPG mechanics let you shape the growth of your character and influence the world around you.
Settlement: Upgrade buildings for deep customization, including a tattoo parlor, blacksmith, barracks, and much more.
Raids: Launch massive assaults against Saxon fortresses and troops.
Combat System: Dual-wield powerful weapons such as swords, axes, and shields in the ruthless fighting style of a Viking warrior.
Assassin's Creed Valhalla
Summary
You become Eivor, a Viking raider on a quest for glory. Explore England's Dark Ages as you raid your enemies, build your political power, and grow your settlement.
Lead Viking raids against Saxon strongholds throughout England.
Dual-wield powerful weapons with a visceral fighting style.
Challenge yourself against a varied collection of enemies.
Grow your character and carve your path to glory.
Explore an open world from the harsh shores of Norway to the kingdoms of England.
Grow your clan's settlement to personalize your experience.
valhalla
Assassin's Creed Valhalla PC Requirement
Processor: Intel Core i5 3.2 GHz / AMD Ryzen 3 3.1 GHz
RAM: 8 GB
Video card: GeForce GTX 960 4GB or AMD Radeon R9 380 4GB
OS: Windows 10 (64-bit)
Storage: 50 GB
DirectX Version: DirectX 12 (a GPU supporting DirectX 12 Feature Level 12.0 is required)
Click here for more Information about whole System Requirements of this game
Note: These are the minimum system requirements of Assassin's Creed Valhalla.
Valhalla
Editions and Pricing
Assassin's Creed Valhalla Standard Edition
Standard Edition $ 71.15 / € 59.99 / £ 53.81 / Rs 5,293.52
Gold Edition
Gold Edition $ 118.59 / € 99.99 / £ 89.68 / Rs 8,823.12
Ultimate Edition
Ultimate Edition $ 142.31 / € 119.99 / £ 107.62 / Rs 10,589.97
Activation: It will automatically add to your Ubisoft Connect library for download.
Developer and Publisher: Ubisoft Montreal and Ubisoft
Requirements: You will have to install Ubisoft Connect for PC
Release date: 10th November 2020
Language: English, Spanish (Spain), French, German, Italian.
Buy from Amazon.com / GTA 6 release, map, news and rumors / female protagonist
We cannot guarantee that everything on this page is accurate. Please contact us if something is missing or if you need more information about this page. Thank you for your visit.
#Stepphase #technologies #technology #tech #technews #techworld #techtrends #smartphone #apple #techupdates #futuretechnology #newtech #techgeek #technologynews #technologythesedays #smarttechnology #technologylover #technologytrends #technologyblog #gadgets #smartphone #gadget #marketing #digital #india #technologyisawesome #amazing #repost
Photo
The ultimate desktop now features much faster performance, SSDs across the line, an even more stunning Retina 5K display, and higher quality camera, speakers, and mics. Apple announced on 4th August a major update to its 27-inch iMac. By far the most powerful and capable iMac ever, it features faster Intel processors up to 10 cores, double the memory capacity, next-generation AMD graphics, superfast SSDs across the line with four times the storage capacity, a new nano-texture glass option for an even more stunning Retina 5K display, a 1080p FaceTime HD camera, higher fidelity speakers, and studio-quality mics. Follow @techtrendspro for more amazing #TechUpdates . . . Also Visit 👇 https://bit.ly/2EAnidM . . . #imac #imacupdate #imacconcept #2020imqc #macmini #apple #macmini2020 #newimac #newmacmini #refinedsignanyone #airpodspro #techblog #futuretechnology #technologytrends #technologythesedays #smartphone #ios #iphonegameo #iphonegame재택근무 #iphonegameapple #applelove (at United States America) https://www.instagram.com/p/CFtWkPCAX7v/?igshid=qciyf0xpzd5t
Photo
What is a good rendering machine? Currently assembling an Intel on Aorus MOBO Z390, 64gb RAM, Cooler Master liquid cooling, RTX 2070 Super, 1T SSD, and 2T SATA. #aorus #rendering3d #gigabyte #coolermaster #ssd #samsungssd #rtx #z390 #computers #hardware #renderingpc #pc #pcgaming #3drender #64gb #i7 #pcbuilds #computerparts #architecttools #architecture #tech #techie #ram #techgadgets #technologytrends #architecturalmodel (at Fulgar Architects) https://www.instagram.com/p/B8mMhAEhEAy/?igshid=1677k0sfz7mgx
Photo
Shout out to @intel for sponsoring one of my brands #GlobalGoodNetworks when I lived in Washington DC - they flew my business partner and I to innovative science and technology events to cover them and share with our global network. Further they helped sponsor our messaging campaign to 250K people around the world. I am in China now and see massive technology innovations and believe tech can be used for good when harnessed correctly. Of the many campaigns we worked on with intel one of my favorites was #progressthruprocessors which allows people to use their computers processing power to solve global issues when they are not using their computer. Very cool topic and many blockchain style brands are doing similar things to date. · · · #innovation #innovations #design #innovationliving #шугаринг #startup #innovationlab #smile #innovationhub #innovationday #innovationineducation #business #innovationaward #innovationdistrict #socialinnovation #innovationnation #designthinking #эпиляция #innovationcenter #education #technologyiscool #technologytakeover #technologyrules #technologyisawesome #technologythesedays #technology #technologytrends #neoculturetechnology (at 长沙南站) https://www.instagram.com/p/B41hSWTBueM/?igshid=dvl8ui9r5wzv
Text
Wi-Fi at 20: Bridging the performance gap towards ten-gigabit speeds
The Wi-Fi Alliance recently interviewed Intel’s Doron Tal (General Manager, Wireless Infrastructure Group, Connected Home Division) about the past, present, and future of Wi-Fi. As we’ve previously discussed in The Ruckus Room, 2019 marks the 20th anniversary of the popular and ever-evolving wireless standard.
According to Tal, the average home today has approximately 10-20 devices, a number that Intel expects to increase to 30-50 devices over the next year or so.
“Those devices are connecting over Wi-Fi and need fast, responsive and reliable connections to ensure the best experiences,” he explains. “Whether you are streaming HD video or creating and editing content or immersed in an online experience like gaming and virtual reality (VR), Wi-Fi is really important.”
The emergence of Wi-Fi 6
Wi-Fi 6 (802.11ax), says Tal, is a significant step forward to deliver home connectivity that is faster, more responsive and more reliable.
“With Wi-Fi 6, you’re now able to control the traffic from the access point (AP) to the client in a very managed and provisioned manner that can actually be monetized in new ways,” he states. “We see a clear trend on the infrastructure side that deployments are shifting from a single AP to a multi-node architecture with different types of extenders.”
In the future, says Tal, the market will see reliable, smart and seamless Wi-Fi that supports immersive 3D video and augmented reality in very high definition, as well as new use cases in broadcasting, IoT, sensing and machine learning.
“The key to realizing the highly impactful Wi-Fi of the future, as these new and more diverse device types get introduced to the network, will be a lot of focus on making these networks self-organizing and self-healing so that they can be optimized for different experiences,” he adds.
Commenting on the above, Jeanette Lee, Sr. Director, Product Solutions and Technical Marketing, Ruckus Networks at CommScope, tells us that Wi-Fi 6 is well on its way to bridging the performance gap towards ten-gigabit speeds.
“Wi-Fi 6 delivers faster network performance, connects more devices simultaneously and effectively transitions Wi-Fi from a best-effort endeavor to a deterministic wireless technology,” she explains. “Designed for high-density connectivity, Wi-Fi 6 offers up to a four-fold capacity increase over its Wi-Fi 5 (802.11ac) predecessor. This further solidifies Wi-Fi’s position as the de-facto medium for internet connectivity.”
The advancements of Wi-Fi 6, says Lee, will benefit a wide range of consumer use cases, although they are particularly important for dense environments in which large numbers of users and devices are connecting to the network. Some specific scenarios that will benefit from the new Wi-Fi 6 standard include large public venues (LPVs) such as stadiums, convention centers and transportation hubs.
“Stadiums and convention centers offer high-speed Wi-Fi to improve the fan experience, increase customer interaction and create value-added services such as showing instant replays on smartphones and tablets or allowing attendees to order food from their seats,” she states. “However, stadiums and convention centers with tens of thousands of users simultaneously connecting to Wi-Fi pose definite scale and density challenges. The Wi-Fi 6 advancements around OFDMA, 1024 QAM, OBSS coloring, as well as faster PHY rates, will make it easier for LPV owners to create new business opportunities by offering enhanced services for guests.”
In addition, says Lee, public transportation hubs are increasingly offering high-speed public Wi-Fi to passengers waiting for trains, buses, taxis and ride-sharing services.
“Like stadiums, transportation hubs have high densities of people attempting to connect to the networks simultaneously. However, these hubs face the unique challenge posed by transient devices that are not connecting to the Wi-Fi network but are still sending management traffic that congests it. OFDMA and BSS coloring, both of which are part of the new Wi-Fi 6 standard, provide the tools to manage and mitigate these challenges,” she concludes.
The post Wi-Fi at 20: Bridging the performance gap towards ten-gigabit speeds appeared first on The Ruckus Room.
from The Ruckus Room https://theruckusroom.ruckuswireless.com/wired-wireless/technologytrends/wi-fi-at-20-bridging-the-performance-gap-towards-ten-gigabit-speeds/
Text
Intel VTune Profiler For Data Parallel Python Applications
Intel VTune Profiler tutorial
This brief tutorial will show you how to use Intel VTune Profiler to profile the performance of a Python application using the NumPy and Numba example applications.
Analysing Performance in Applications and Systems
For HPC, cloud, IoT, media, storage, and other applications, Intel VTune Profiler optimises system performance, application performance, and system configuration.
Optimise the performance of the entire application, not just the accelerated part, across the CPU, GPU, and FPGA.
Profile SYCL, C, C++, C#, Fortran, OpenCL, Python, Go, Java, .NET, assembly, or any combination of these languages.
Application or System: Obtain detailed results mapped to source code or coarse-grained system data for a longer time period.
Power: Maximise efficiency without resorting to thermal or power-related throttling.
VTune platform profiler
It has the following features.
Optimisation of Algorithms
Find your code’s “hot spots,” or the sections that take the longest.
Use Flame Graph to see hot code routes and the amount of time spent in each function and with its callees.
Bottlenecks in Microarchitecture and Memory
Use microarchitecture exploration analysis to pinpoint the major hardware problems affecting your application’s performance.
Identify memory-access-related concerns, such as cache misses and difficulty with high bandwidth.
Accelerators and XPUs
Improve data transfers and GPU offload schema for SYCL, OpenCL, Microsoft DirectX, or OpenMP offload code. Determine which GPU kernels take the longest to optimise further.
Examine GPU-bound programs for inefficient kernel algorithms or microarchitectural restrictions that may be causing performance problems.
Examine FPGA utilisation and the interactions between CPU and FPGA.
Technical summary: Determine the most time-consuming operations that are executing on the neural processing unit (NPU) and learn how much data is exchanged between the NPU and DDR memory.
Parallelism
Check the threading efficiency of the code. Determine which threading problems are affecting performance.
Examine compute-intensive or throughput HPC programs to determine how well they utilise memory, vectorisation, and the CPU.
Platform and I/O
Find the points in I/O-intensive applications where performance is stalled. Examine the hardware’s ability to handle I/O traffic produced by integrated accelerators or external PCIe devices.
Use System Overview to get a detailed overview of short-term workloads.
Multiple Nodes
Describe the performance characteristics of workloads involving OpenMP and large-scale message passing interfaces (MPI).
Determine any scalability problems and receive suggestions for a thorough investigation.
Intel VTune Profiler
To improve Python performance while using Intel systems, install and utilise the Intel Distribution for Python and Data Parallel Extensions for Python with your applications.
Configure your VTune Profiler setup for Python.
To find performance issues and areas for improvement, profile three distinct implementations of the same Python application. This article demonstrates the pairwise distance calculation algorithm, commonly used in machine learning and data analytics, starting with the NumPy example.
The following packages are used by the three distinct implementations.
Intel-optimized NumPy
Data Parallel Extension for NumPy
Data Parallel Extension for Numba on GPU
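For reference, the underlying pairwise Euclidean distance computation can be sketched in plain NumPy. This is an illustrative sketch of the algorithm only, not the article's exact code; the function name and array sizes are ours:

```python
import numpy as np

def pairwise_distance(X):
    """Euclidean distance between every pair of rows in X (shape n x d)."""
    # Uses the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b,
    # so the whole computation vectorises into one matrix product.
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(d2, 0.0, out=d2)  # clamp tiny negatives caused by rounding
    return np.sqrt(d2)

X = np.random.rand(1000, 3)   # 1000 points in 3 dimensions
D = pairwise_distance(X)      # 1000 x 1000 distance matrix
```

Because the body is pure array operations, the same pattern carries over to the dpnp and numba-dpex variants profiled later in the article.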
Python’s NumPy and Data Parallel Extension
By providing optimised heterogeneous computing, Intel Distribution for Python and Intel Data Parallel Extension for Python offer a fantastic and straightforward approach to develop high-performance machine learning (ML) and scientific applications.
The Intel Distribution for Python adds:
Scalability on laptops, PCs, and powerful servers, utilising every available CPU core.
Assistance with the most recent Intel CPU instruction sets.
Accelerating core numerical and machine learning packages with libraries such as the Intel oneAPI Math Kernel Library (oneMKL) and Intel oneAPI Data Analytics Library (oneDAL) allows for near-native performance.
Productivity tools for compiling Python code into optimised instructions.
Essential Python bindings that make it easier to integrate Intel native tools into your Python project.
Three core packages make up the Data Parallel Extensions for Python:
Data Parallel Extension for NumPy (dpnp)
Data Parallel Extension for Numba (numba_dpex)
Data Parallel Control library (dpctl), which provides tensor data structure support, device selection, data allocation on devices, and user-defined data parallel extensions for Python.
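Because dpnp mirrors the NumPy API, moving array code onto an Intel device is often little more than an import change. The snippet below is a hedged sketch; the try/except fallback is our addition so it also runs on machines without dpnp installed:

```python
try:
    import dpnp as xp   # Data Parallel Extension for NumPy: arrays live on the device
except ImportError:
    import numpy as xp  # fall back to plain NumPy where dpnp is unavailable

x = xp.arange(6.0).reshape(2, 3)
col_sums = x.sum(axis=0)  # identical call either way; dpnp executes it on the selected device
print(col_sums)
```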
To promptly identify and resolve unanticipated performance difficulties in machine learning (ML), artificial intelligence (AI), and other scientific workloads, it is best to obtain insights into compute and memory bottlenecks through comprehensive source-code-level analysis. Intel VTune Profiler can do this for Python-based ML and AI programs as well as C/C++ code. The methods for profiling these kinds of Python applications are the main topic of this article.
With the help of Intel VTune Profiler, developers can find the source lines causing performance loss and replace them with calls into the highly optimised Intel-optimized NumPy and Data Parallel Extension for Python libraries.
Setting up and Installing
1. Install Intel Distribution for Python
2. Create a Python Virtual Environment
python -m venv pyenv
pyenv\Scripts\activate (Windows; on Linux use: source pyenv/bin/activate)
3. Install Python packages
pip install numpy
pip install dpnp
pip install numba
pip install numba-dpex
pip install pyitt
Reference Configuration
The hardware and software components used for the reference example code are:
Software Components:
dpnp 0.14.0+189.gfcddad2474
mkl-fft 1.3.8
mkl-random 1.2.4
mkl-service 2.4.0
mkl-umath 0.1.1
numba 0.59.0
numba-dpex 0.21.4
numpy 1.26.4
pyitt 1.1.0
Operating System:
Linux, Ubuntu 22.04.3 LTS
CPU:
Intel Xeon Platinum 8480+
GPU:
Intel Data Center GPU Max 1550
The Example Application for NumPy
This article demonstrates, step by step, how to use Intel VTune Profiler and its Intel Instrumentation and Tracing Technology (ITT) API to optimise a NumPy application. It uses the pairwise distance application, a well-liked approach in fields including biology, high performance computing (HPC), machine learning, and geographic data analytics.
Summary
The three stages of optimisation that we will discuss in this post are summarised as follows:
Step 1: Examining the Intel Optimised Numpy Pairwise Distance Implementation: Here, we’ll attempt to comprehend the obstacles affecting the NumPy implementation’s performance.
Step 2: Profiling Data Parallel Extension for Pairwise Distance NumPy Implementation: We intend to examine the implementation and see whether there is a performance disparity.
Step 3: Profiling Data Parallel Extension for Pairwise Distance Implementation on Numba GPU: Analysing the numba-dpex implementation’s GPU performance
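To see why Step 1 focuses on the NumPy implementation's hot spots, a coarse wall-clock comparison of a naive Python double loop against the vectorised formulation is instructive. This harness is our own illustration (sizes and function names are assumptions, not the article's benchmark):

```python
import time
import numpy as np

def pairwise_naive(X):
    # O(n^2) Python-level loop: every element incurs interpreter overhead
    n = len(X)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = np.sqrt(np.sum((X[i] - X[j]) ** 2))
    return D

def pairwise_vectorised(X):
    # Same result, expressed as a handful of large array operations
    sq = np.sum(X * X, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    return np.sqrt(d2)

X = np.random.rand(200, 3)
t0 = time.perf_counter(); D_naive = pairwise_naive(X); t1 = time.perf_counter()
D_vec = pairwise_vectorised(X); t2 = time.perf_counter()
print(f"naive: {t1 - t0:.4f}s  vectorised: {t2 - t1:.4f}s")
```

A profiler attributes the gap to specific source lines; this comparison only confirms the overall magnitude.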
Boost Your Python NumPy Application
Intel has shown how to quickly discover compute and memory bottlenecks in a Python application using Intel VTune Profiler.
Intel VTune Profiler aids in identifying bottlenecks’ root causes and strategies for enhancing application performance.
It can assist in mapping the main bottleneck jobs to the source code/assembly level and displaying the related CPU/GPU time.
Even more comprehensive, developer-friendly profiling results can be obtained by using the Instrumentation and Tracing API (ITT APIs).
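The ITT APIs let you mark regions of your Python code so they show up as named tasks on the VTune timeline. The sketch below assumes pyitt exposes a `task` context manager (an assumption worth checking against the pyitt documentation) and degrades to a no-op where pyitt is not installed:

```python
import contextlib

try:
    import pyitt  # ITT bindings; annotated regions appear in VTune results

    def itt_task(name):
        return pyitt.task(name)  # assumed context-manager form of pyitt's task API
except ImportError:
    def itt_task(name):
        return contextlib.nullcontext()  # profiling disabled: annotations become no-ops

def compute():
    # "distance-kernel" is a hypothetical task name used for illustration
    with itt_task("distance-kernel"):
        return sum(i * i for i in range(1000))

result = compute()
```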
Read more on govindhtech.com
#Intel#IntelVTuneProfiler#Python#CPU#GPU#FPGA#Intelsystems#machinelearning#oneMKL#news#technews#technology#technologynews#technologytrends#govindhtech
Text
Dominate the Battlefield: Intel Battlemage GPUs Revealed
Intel Arc GPU
After releasing its first-generation Arc Alchemist GPUs in 2022, Intel now appears to be on a two-year cadence, as suggested by the appearance of Battlemage in a shipping manifest. This implies that Battlemage GPUs are being supplied to Intel’s partners for testing, the first proof of them existing in the real world. Given the timing, Intel is probably preparing for a launch later this year.
Per a recently discovered shipping manifest published on X, Intel is shipping two Battlemage GPUs to its partners. The GPUs’ designations, G10 and G21, suggest Intel is taking a similar approach to Alchemist: one more-or-less high-end SKU for “mainstream” gamers and one that is less expensive.
Intel Arc Graphics Cards
As you may remember, Intel had previously announced plans to launch four GPUs in the Alchemist family:
Intel Arc A380
The A380, A580, A750, and A770. However, only the latter two were officially announced at first. The G10 is expected to replace the A750 and A770, which Intel most likely positioned at launch for midrange gamers.
This is the first time the cards have been seen “in the wild,” although two Battlemage GPUs previously showed up in the SiSoftware benchmark database. Notably, both of those cards had 12GB of VRAM, suggesting Intel increased its base-level allowance from 8GB, a wise decision in 2024. As Intel’s CEO stated earlier this year, Battlemage was “in the labs” in January.
Intel Arc A770
A previously released roadmap from Intel indicates that the G10 is a 150W component and the G21 is 225W. It is anticipated that Intel will reveal notable improvements in Battlemage’s AI capabilities, greater upscaling performance, and ray tracing performance. As 225W GPUs were the previous A750 and A770, it seems Battlemage will follow the script when it comes to its efficiency goals. The business has previously declared that it wishes to aim for this “sweet spot” in terms of power consumption, wherein one PCIe power cable is needed rather than two (or three).
While the industry as a whole is anxious to see how competitive Intel will be with its second bite at the apple, gamers aren’t exactly waiting impatiently for Intel to introduce its GPUs like they do with Nvidia or AMD’s next-gen. Even if the company’s Alchemist GPUs were hard to suggest when they first came out, significant performance advancements have been made possible by the company’s drivers.
The Intel Battlemage G10 and G21 next-generation discrete GPUs, which have been observed in shipment manifests, are anticipated to tackle entry into the mid-range market. They already know from the horse’s mouth that Intel is working on its next generation of discrete graphics processors, which it has revealed are being code-named Battlemage. The company is developing at least two graphics processing units, according to shipping excerpts.
Intel Battlemage GPUs
The shipping manifest fragments reveal that Intel is working on several GPUs specifically for the Battlemage G10 and G21 versions. The newest versions in Intel’s graphics processor lineup include the ACM-G11, an entry-level graphics processor, and the ACM-G10, a midrange market positioning and higher-end silicon graphics processor. As a result, the names Battlemage-G10 and Battlemage-G21, which are aimed at entry-level PCs and bigger chips, respectively, match the present names for Intel’s Arc graphics processors. Both stand a strong chance of making their list of the best graphics cards if they deliver acceptable levels of performance.
The Battlemage-G10 and Battlemage-G21 are being shipped for research and development, as stated in the shipping manifest (which makes sense considering these devices’ current status). The G21 GPU is currently in the pre-qualification (pre-QS) stage of semiconductor development; the G10’s current status is unknown.
Pre-qualification silicon is used to assess a chip’s performance, reliability, and functionality. Pre-QS silicon is typically not suitable for mass production. However, if the silicon device is functional and meets the necessary performance, power, and yield requirements, mass production of the device could be feasible. For example, AMD’s Navi 31 GPU, if it meets the developer’s objectives, is mass-produced in its A0 silicon phase.
We rarely get to cover Intel’s developments with its next-generation graphics cards, but we frequently cover Nvidia’s, as we did recently with the GeForce RTX 50-series graphics processors, which should appear on any list of the best graphics cards based on industry leaks.
This generation, Nvidia seems to be leading the laptop discrete GPU market, but Battlemage, with Intel’s ties to OEMs and PC manufacturers, might give the green team some serious competition in the next round. According to the cargo manifest, there will be intense competition among AMD’s RDNA 4, Intel’s Battlemage, and Nvidia’s Blackwell in the forthcoming desktop discrete GPU market.
Qualities:
Targeting Entry-Level and Mid-Range: As successors to the existing Arc Alchemist chips (ACM-G11 and ACM-G10), the new GPUs are probably aimed at gamers on a tight budget or those seeking good performance outside of AAA titles.
Better Architecture: Compared with the Xe-HPG architecture in Intel’s existing Arc GPUs, the next-generation design should bring better performance per watt and possibly new features.
Emphasis on Power Efficiency: Because power consumption matters greatly in laptops and small form factor PCs, these GPUs may place equal emphasis on efficiency and performance.
Potential specifications (derived from the existing Intel Arc lineup and leaks):
Production Process: TSMC 6nm (or possibly a more advanced node). Core Configuration: unknown; likely fewer cores than higher-tier Battlemage models, should any exist.
Memory: most likely GDDR6, though bandwidth and capacity are unclear. Power Consumption: designed to draw less power than higher-specification GPUs.
FAQS
What are the Battlemage G10 and G21 GPUs?
Intel is developing the Battlemage G10 and G21, next-generation GPUs that should provide notable gains in capabilities and performance over their predecessors.
What markets or segments are these GPUs targeting?
Targeting a wide range of industries, including professional graphics, gaming, and data centres, the Battlemage G10 and G21 GPUs are expected to meet the demands of both consumers and businesses.
Read more on Govindhtech.com
#Intel#IntelArc#intelarcgpu#govindhtech#INTELARCA380#intelarca770#battlemagegpu#G10#G21#news#technologynews#technology#technologytrends
Text
ASRock Mars RPL Series: Intel 13th Gen Powered Compact PCs
ASRock, a world leader in motherboards, graphics cards, gaming monitors, and small form factor PCs, has released the Mars RPL Series mini PC. One of the two models in this series is powered by the 13th Gen Intel Core i5-1335U processor, the other by the 12th Gen Intel Celeron 7305. Designed to meet the demands of different users, these small yet powerful PCs deliver high-performance computing in a stylish, space-saving form factor suited to a variety of applications.
Exceptional Performance with Energy Efficiency
The two models of ASRock’s Mars RPL series, designed to satisfy the varied demands of contemporary consumers, are powered by the 13th Gen Intel Core i5-1335U and the 12th Gen Intel Celeron 7305 processors, respectively. Combining strong performance with energy efficiency, both models suit a wide range of applications, such as routine office work, multimedia editing, and demanding computing workloads. With remarkable multitasking capability and low power consumption, the Mars RPL series raises the bar for compact, efficient computing systems.
Versatile Application Scenarios
The Mars RPL series performs exceptionally well in a variety of settings, such as remote work, interactive multimedia classrooms, and digital signage. Its quad display output and strong graphics performance deliver an immersive experience, while the integrated SD card reader simplifies data transfer, adding convenience and improving office efficiency.
Advanced Thermal Solution
The Mars RPL series has an improved thermal solution with enlarged ventilation and heat pipes to effectively disperse heat, ensuring excellent stability and maintaining system coolness even under demanding workloads.
Flexible Expansion and Connectivity Options
The Mars RPL series allows for flexible storage expansion with two M.2 connectors and an SD card slot. To meet a variety of user demands, the Thunderbolt 4 Type-C connector also offers fast data transmission and flexible peripheral connectivity.
Setting a new standard for mini PCs, the Mars RPL series combines state-of-the-art performance, adaptable functionality, and a small form factor. It is positioned to become a leader in the mini PC space and offers outstanding value for business, education, or entertainment.
Mars RPL Series
0.7 L Mini PC
CPU: Equipped with 13th Gen Intel Core i5-1335U processor (Raptor Lake-U)
Memory: Supports dual-channel DDR5-5200MHz memory, up to 96GB
Quad Video Outputs: 1 x Intel Thunderbolt 4, 1 x HDMI, 1 x D-Sub, 1 x USB Type-C Alt mode (supports 20V Power Delivery in only)
Triple Storage Devices: 1 x Hyper M.2 Socket (PCIe Gen4 x4), 1 x Ultra M.2 Socket (PCIe Gen3 x4 & SATA3 6.0 Gb/s), 1 x SD Card Reader
Abundant USB Ports: 1 x USB 4.0 Thunderbolt 4 Type-C, 4 x USB 3.2 Gen2 Type-A (10 Gb/s), 2 x USB 2.0
Power Meets Elegance in a Compact Form
Presenting the Mars 1335U, a small yet mighty mini PC made for creatives and professionals. Thanks to its dual-channel DDR5 RAM, Intel Core i5-1335U CPU, and sophisticated display outputs, it performs exceptionally well under demanding workloads. Offering outstanding performance and adaptability in a space-saving design, this stylish device is perfect for home entertainment, creative endeavors, workplace productivity, and remote learning.
Versatile Applications: Where Mars 1335U Shines
The Mars 1335U is appropriate for a variety of settings due to its small size and strong features:
Unmatched Monitoring Capabilities
Increase your security with the Mars 1335U’s quad display output, which offers thorough multi-feed monitoring in a small but powerful package.
Office Productivity Without Effort
With the Mars 1335U’s integrated SD card reader, you can streamline your workflow and transfer data with ease in a compact, effective design.
Solutions for Dynamic Digital Signage
With Mars 1335U’s capability for four displays, you can create eye-catching, multi-screen digital signage that both informs and captivates your audience.
Powerful Compact Retail Kiosk
The Mars 1335U offers high-speed performance in a stylish appearance that is ideal for contemporary retail kiosks, ensuring a flawless customer experience.
Classrooms using Interactive Multimedia
The multi-display capabilities of the Mars 1335U enhance the educational experience by bringing dynamic and interactive information to life in classroom environments.
Exceptional Processing Power and Energy Efficiency
The Mars 1335U features a strong, energy-efficient Intel Core i5-1335U processor. With its low power consumption, this CPU is ideal for multitasking and system integration, making it perfect for professionals who want powerful computing in a compact size.
Lightning-Fast Dual-Channel DDR5 Memory
The Mars 1335U’s dual-channel DDR5 RAM allows for smooth multitasking and blazingly quick data transmission. This keeps your workflow unbroken and guarantees seamless functioning even while resource-intensive programs are running.
Stunning Visuals with Quad Monitor Display Outputs
With support for Intel Iris Xe Graphics, the Mars 1335U delivers stunning visuals across multiple screens. Versatile multi-monitor configurations are made possible by the four display outputs: Thunderbolt 4, HDMI, D-Sub, and USB Type-C (DisplayPort Alt mode with Power Delivery). In addition to providing display output, the USB Type-C connector lets you power the mini PC straight from your monitor, increasing productivity and keeping the workstation organized and efficient.
Seamless Connectivity
With its Thunderbolt 4 Type-C connector, the Mars 1335U provides outstanding connectivity, with speeds of up to 40 Gb/s for dependable, fast connections to high-performance peripherals. Its several USB ports, including USB 3.2 Gen2 and USB 2.0, add quick data transfer, safe charging, and simple device management. The Mars 1335U can power devices, drive external displays, or move large files with ease.
Versatile Expansion Options
With its two M.2 connections and SD card slot, the Mars 1335U offers versatile storage and was designed with expansion in mind. This small PC enables you to customize your system to meet your unique requirements, guaranteeing optimal performance and storage capacity for every work, whether you need to add ultra-fast NVMe SSDs or more storage via SATA.
Keyboard Wake-Up Functionality
The Mars 1335U’s keyboard wake-up function improves convenience: pressing Ctrl+Esc turns on the machine from a G3 state when the keyboard is plugged into the appropriate USB port. This streamlines the startup procedure and is especially helpful when the Mars 1335U sits in an enclosed area or behind a monitor, where reaching the power button is difficult.
Advanced Thermal Solution
The Mars 1335U’s new thermal solution offers improved cooling. Extra air vents in the top cover increase ventilation for the new cooling system, greatly improving airflow and cooling effectiveness. For better heat dissipation, the motherboard design places evenly spaced memory slots beneath the CPU cooler, and a heat pipe carries heat from the CPU to the exhaust vents, guaranteeing system stability and excellent thermal performance even under heavy workloads.
Reliable Network Connectivity
With its Gigabit LAN port and an M.2 slot for Wi-Fi modules, the Mars 1335U offers both wired and wireless networking options, whichever you prefer. The Intel AX210 wireless adapter, which comes with the device, provides fast Wi-Fi 6E connectivity straight out of the box.
Read more on Govindhtech.com
#Minipcs#ASRock#ASROCKminipcs#Intel#govindhtech#NEWS#TechNews#technology#technologies#technologytrends#technologynews
Text
How Post-Quantum Cryptography Provides Future-Proof Security
Use the Intel Cryptography Primitives Library to Prepare for Post-Quantum Security.
The Importance of Cryptography for All of Us
Due to the widespread use of digital technology in many facets of everyday life, such as healthcare, finance, and communication (messengers), cryptography is essential in the contemporary world. In a setting where information may be readily intercepted, altered, or stolen, it offers the tools to protect data and guarantee privacy, integrity, and authenticity. Digital signatures, device key authentication, and encryption/decryption all help protect private information and verify its validity.
The challenge of a post-quantum computing world is developing future-proof security techniques that will remain dependable and trustworthy long after quantum computers become available, on the assumption that even quantum computers will not be able to crack post-quantum encryption in any practical amount of time.
RSA and ECC (Elliptic Curve Cryptography) are examples of encryption, data authentication, and integrity techniques that rely on the difficulty of specific mathematical problems, such as discrete logarithms and integer factorization, which classical computers cannot solve in any reasonable amount of time. This makes them practically unbreakable today.
But that is about to change. Quantum computers will likely run Shor’s factoring algorithm and related algorithms efficiently enough to tackle these problems: recovering the prime factors underlying RSA, ECC, and digital signature schemes could be accelerated exponentially. Suddenly, the widely used encryption techniques for critical data storage and internet communication would become obsolete, and data security would be compromised.
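To make the dependency concrete, here is a toy illustration (with deliberately tiny textbook primes, not real cryptography) showing that RSA’s security rests entirely on the difficulty of factoring the public modulus: once an attacker can factor n, the private key follows immediately.

```python
# Toy RSA: security rests on the difficulty of factoring n = p * q.
# With tiny primes, an attacker can factor n by trial division and
# reconstruct the private key. Illustration only.
p, q = 61, 53
n, e = p * q, 17                      # public key (n, e)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                   # private exponent (Python 3.8+)

ciphertext = pow(42, e, n)            # encrypt message m = 42

# Attacker: factor n. Instant for tiny n; infeasible classically for a
# 2048-bit n, but polynomial-time for a quantum computer running Shor's
# algorithm.
f = next(i for i in range(2, n) if n % i == 0)
phi_attacker = (f - 1) * (n // f - 1)
d_attacker = pow(e, -1, phi_attacker)
recovered = pow(ciphertext, d_attacker, n)
print(recovered)  # 42 -- decrypted without ever seeing the private key
```

The same factoring shortcut is what Shor’s algorithm provides at scale, which is why factoring-based schemes are considered broken in a post-quantum world.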
The Challenge of a Post-Quantum Computing World
Cryptography researchers are developing new security measures to counter the potential danger posed by quantum computers and their capacity to solve certain mathematical problems rapidly. The obvious goal is to create alternative encryption and decryption methods that do not depend on the mathematical problems quantum computers excel at solving.
These new techniques rely on a variety of hard problems believed to be difficult even for quantum computers. Hash-based algorithms and structured lattice problems are popular strategies for keeping ahead of quantum computing.
In a wide range of use cases, post-quantum algorithms are and will continue to be just as significant as conventional cryptography techniques.
Apple’s iMessage mobile messaging service, which uses the PQ3 post-quantum cryptographic protocol, is one example of a use case that has already made it into the real world.
At the 4th NIST PQC Standardization Conference, NIST and IDEMIA, a French multinational technology company specializing in identification and authentication security services, presented their recommendations for post-quantum protocols for banking applications. This work, along with several other contributions to the NIST Post-Quantum Cryptography (PQC) project, led to the release of the first three NIST-backed finalized post-quantum encryption standards.
Establishing forward secrecy requires businesses to adopt post-quantum techniques early, even before quantum computers are generally available. “Retrospective decryption” is the possibility of decrypting previously intercepted and recorded encrypted communications at a later date: it is reasonable to assume that data encrypted with conventional techniques is being collected and stored until new decryption technology becomes available. A forward-looking security posture reduces that risk.
The ideal scenario is shown in Figure 1: long before the first large-scale quantum computers are built, cryptography applications should begin the shift to post-quantum cryptography. (Image credit: Intel)
Working on a Future-Proof Solution
Since methods beyond the first three chosen during the NIST competition are still being researched, it is advisable to execute the transition in hybrid mode. Combining post-quantum and classical cryptographic techniques is known as a “hybrid.”
For example, it can combine two cryptographic elements to generate a single Kyber512X key agreement:
X25519 is a traditional cryptography key agreement system;
Kyber512 is a post-quantum key encapsulation mechanism designed to resist both classical cryptanalysis and quantum computer attacks.
Using a hybrid has the benefit of protecting the data against non-quantum attackers, even in the event that Kyber512 proves to be flawed.
It is crucial to remember that security encompasses both the algorithm and the implementation: even if Kyber512 is completely safe, an implementation may leak through side channels. When discussing cryptography, security comes first. The drawback of a hybrid is that two key exchanges are carried out, which costs more CPU cycles and more data on the wire.
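The combining step of such a hybrid can be sketched minimally. The X25519 and Kyber512 shared secrets below are stubbed with random bytes (placeholders, not the actual primitives); the point is that the session key is derived from both inputs via a KDF, so an attacker must break both components.

```python
import hashlib, hmac, os

# Stand-ins for shared secrets produced by real X25519 and Kyber512
# implementations -- placeholders only, not the actual primitives.
classical_secret = os.urandom(32)      # X25519 shared secret (stub)
pq_secret = os.urandom(32)             # Kyber512 shared secret (stub)

def hkdf_extract_expand(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Single-block HKDF (RFC 5869) over SHA-256."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()              # extract
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]  # expand

# The hybrid secret mixes both inputs: breaking only the classical part or
# only the post-quantum part is not enough to recover the session key.
session_key = hkdf_extract_expand(classical_secret + pq_secret,
                                  info=b"hybrid key agreement example")
print(len(session_key))  # 32
```

This mirrors the Kyber512X idea described above: even if the post-quantum component were later found flawed, the classical component still protects against non-quantum attackers, and vice versa.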
Overview of the Intel Cryptography Primitives Library
The Intel Cryptography Primitives Library is a secure, fast, and lightweight collection of cryptographic building blocks, well suited to a range of Intel CPUs (see the documentation).
You can find it on GitHub.
Support for Many Cryptographic Domains
The library includes a wide range of procedures often used for cryptographic operations. (Image credit: Intel)
Benefits of Using the Intel Cryptography Primitives Library
Security: secret-processing operations are executed in constant time
Created with a tiny footprint in mind.
Supported hardware cryptography instructions are optimized for various Intel CPUs and instruction set architectures:
Intel Streaming SIMD Extensions 2 (Intel SSE2)
Intel SSE3
Intel SSE4.2
Intel Advanced Vector Extensions (Intel AVX)
Intel Advanced Vector Extensions 2 (Intel AVX2)
Intel Advanced Vector Extensions 512 (Intel AVX-512)
CPU dispatching that may be adjusted for optimal performance
Compatibility with kernel mode
Design that is thread-safe
The Intel Cryptography Primitives Library supports FIPS 140-3 compliance building blocks (self-tests, services).
Algorithms for Post-quantum Cryptography in the Intel Cryptography Primitives Collection
The Intel Cryptography Primitives Library now supports digital signature verification with two stateful hash-based signature schemes: the eXtended Merkle Signature Scheme (XMSS) and the Leighton-Micali Signature (LMS). Both algorithms are standardized by NIST (NIST SP 800-208).
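To convey the hash-based idea behind these schemes, here is a toy Lamport one-time signature, the classic building block that LMS and XMSS extend with Merkle trees and state management. This is an illustration of the concept only, not the Intel library API, and it is unsafe beyond signing a single message.

```python
import hashlib, secrets

# Toy Lamport one-time signature (hash-based). Illustration only.
H = lambda b: hashlib.sha256(b).digest()

# Key generation: 256 pairs of random preimages; the public key is their hashes.
sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
pk = [(H(a), H(b)) for a, b in sk]

def sign(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    # Reveal one secret preimage per bit of the message digest.
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(msg: bytes, sig) -> bool:
    digest = int.from_bytes(H(msg), "big")
    # Each revealed preimage must hash to the matching public-key entry.
    return all(H(sig[i]) == pk[i][(digest >> i) & 1] for i in range(256))

sig = sign(b"hello")
print(verify(b"hello", sig), verify(b"tampered", sig))  # True False
```

Because security reduces entirely to the preimage resistance of the hash function, schemes built this way are considered resistant to quantum attack, which is why NIST standardized LMS and XMSS for post-quantum signatures.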
Using XMSS and LMS Cryptography
The documentation for the Intel Cryptography Primitives Library offers thorough examples of how to utilize both:
XMSS signature verification
LMS signature verification
The library implementations provide the special functions, such as getters and setters, that are necessary to invoke these algorithms.
Add Post Quantum Security to Your Application
The Intel Cryptography Primitives Library supports post-quantum security using hash-based cryptography algorithms such as XMSS and LMS.
Intel leads the deployment of the latest post-quantum cryptography technologies and closely monitors standards development in NIST’s Post-Quantum Cryptography (PQC) project.
Read more on Govindhtech.com
#PostQuantum#QuantumCryptography#Intel#quantumcomputers#Datasecurity#quantumalgorithms#IntelCPUs#News#Technews#Technologynews#Technology#Technologytrendes#govindhtech
Text
Watsonx.data Presto C++ With Intel Sapphire Rapids On AWS
Using Watsonx.data Presto C++ with Intel Sapphire Rapids processors on AWS to speed up query performance
Over the past 25 years, IBM and Intel’s long-standing cooperation has produced notable improvements in database performance. According to internal research conducted by IBM, the most recent generation of Intel Xeon Scalable processors, paired with Intel software, can improve IBM Watsonx.data performance.
IBM Watsonx.data is a hybrid, managed data lakehouse tailored for data, analytics, and AI workloads. One of its highlights is using engines such as Presto and Spark to drive corporate analytics; Watsonx.data also offers a single view of your data across hybrid cloud environments and a customizable approach.
Presto C++
IBM released Presto C++, the next edition of Presto, in June. It was created by open-source community members from Meta, IBM, Uber, and other companies. The query engine was developed in partnership with Intel using Velox, an open-source C++ native acceleration library designed to be compatible with various compute engines. IBM also accompanied the Presto C++ release with a query optimizer, built on decades of experience, that further improves query performance through efficient query rewriting.
Summary
Presto C++, also known by its development name Prestissimo, is a drop-in C++ replacement for Presto workers built on the Velox library. It uses the Proxygen C++ HTTP framework to implement the same RESTful endpoints as Java workers. Because it communicates with the Java coordinator and with other workers exclusively over REST endpoints, Presto C++ does not use JNI and does not require a JVM on worker nodes.
Inspiration and Goals
Presto aims to be the best data lake system available. To accomplish this, the native Java-based Presto evaluation engine is being replaced by a new C++ implementation built on Velox.
Moving the evaluation engine into a library lets the Presto community concentrate on more features and better connectivity with table formats and other data warehousing systems.
Accepted Use Cases
The Presto C++ evaluation engine supports only certain connectors:
Reads and writes via the Hive connector, including CTAS, are supported.
The Iceberg connector supports reads only.
Both V1 and V2 Iceberg tables, including tables with delete files, are supported.
The TPCH connector is supported, with the tpch.naming=standard catalog property.
Features of Presto C++
Task management: Users can monitor and manage tasks using the HTTP endpoints included in Presto C++. This tool facilitates tracking ongoing procedures and improves operational oversight.
Remote function execution: enabling the execution of functions on remote nodes makes data processing across a network of nodes more effective, improving scalability and distributed processing capabilities.
Authentication: internal communication between nodes is secured with JSON Web Tokens (JWT), guaranteeing that data cannot be tampered with in transit.
Data caching: asynchronous data caching with prefetching is implemented, maximizing processing speed and data retrieval by anticipating data demands and caching data in advance.
Performance Tuning: Provides a range of session parameters, such as compression and spill threshold adjustments, for performance tuning. This guarantees optimal performance of data processing operations by enabling users to adjust performance parameters in accordance with their unique requirements.
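The JWT-based internal authentication mentioned in the feature list can be sketched minimally: a JWT is just a base64url-encoded header and payload with an HMAC signature. The sketch below shows the HS256 mechanism (an illustration of how such tokens work, not Presto C++’s actual implementation; the key and claim names are made up).

```python
import base64, hashlib, hmac, json

# Minimal HS256 JWT sign/verify sketch -- illustration of the mechanism only.
def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_jwt(payload: dict, key: bytes) -> bytes:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    mac = hmac.new(key, header + b"." + body, hashlib.sha256).digest()
    return header + b"." + body + b"." + b64url(mac)

def verify_jwt(token: bytes, key: bytes) -> bool:
    header, body, sig = token.split(b".")
    mac = hmac.new(key, header + b"." + body, hashlib.sha256).digest()
    # Constant-time comparison prevents timing side channels.
    return hmac.compare_digest(b64url(mac), sig)

token = sign_jwt({"sub": "worker-1"}, b"shared-secret")
print(verify_jwt(token, b"shared-secret"), verify_jwt(token, b"wrong-key"))  # True False
```

Because the signature covers both header and payload, any node that holds the shared secret can verify that a message from a peer has not been altered in transit.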
Limitations of Presto C++
There are some drawbacks to the C++ evaluation engine:
Not every built-in function is available in C++; attempting to use an unimplemented function causes a query failure. See Function Coverage for a list of supported functions.
Not all built-in types are implemented in C++; attempting to use an unimplemented type causes a query failure.
All basic and structured types in Data Types are supported except CHAR, TIME, and TIME WITH TIMEZONE; these are subsumed by VARCHAR, TIMESTAMP, and TIMESTAMP WITH TIMEZONE.
Presto C++ does not honor the length n in varchar[n]; it supports only unlimited-length VARCHAR.
IPADDRESS, IPPREFIX, UUID, KHYPERLOGLOG, P4HYPERLOGLOG, QDIGEST, TDIGEST, GEOMETRY, and BINGTILE are among the types that are not supported.
The C++ evaluation engine does not use all of the plugin SPI. Specifically, several plugin types are either fully or partially unsupported, and C++ workers will not load any plugins from the plugins directory.
The C++ evaluation engine does not support PageSourceProvider, RecordSetProvider, or PageSinkProvider.
Block encodings, parametric types, functions, and types specified by the user are not supported.
At the split level, the event listener plugin is not functional.
See Remote Function Execution for information on how user-defined functions differ from one another.
The C++ evaluation engine has a distinct memory management system. Specifically:
There is no support for the OOM killer.
There is no support for the reserved pool.
Generally speaking, queries may utilize more memory than memory arbitration permits. Refer to Memory Management.
Functions
reduce_agg
In C++-based Presto, reduce_agg’s inputFunction and combineFunction are not allowed to return NULL. In Presto (Java), this is accepted but ill-defined behavior. See reduce_agg for more details.
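To see why the NULL restriction matters, here is a sketch of reduce_agg(value, initial, inputFunction, combineFunction) semantics in Python (an illustration of the folding model, not Presto’s implementation): each value is folded into a per-partition state, then the partial states are combined, so a NULL returned by either function would poison every later step.

```python
from functools import reduce

# Sketch of reduce_agg semantics: fold each value into a per-partition state,
# then combine the partial states. Illustration only.
def reduce_agg(values, initial, input_fn, combine_fn):
    mid = len(values) // 2
    partial_states = []
    # Partition values as a distributed engine might (two "workers" here).
    for part in (values[:mid], values[mid:]):
        state = initial
        for v in part:
            state = input_fn(state, v)   # must never return NULL in Presto C++
        partial_states.append(state)
    return reduce(combine_fn, partial_states)  # must never return NULL either

# Equivalent of reduce_agg(x, 0, (s, x) -> s + x, (a, b) -> a + b): a group sum.
result = reduce_agg([1, 2, 3, 4], 0, lambda s, x: s + x, lambda a, b: a + b)
print(result)  # 10
```

If input_fn returned None for some value, every subsequent input_fn and combine_fn call would receive None as state, which is exactly the ill-defined behavior Presto C++ forbids outright.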
Amazon Elastic Compute Cloud (EC2) R7iz instances are high-CPU-performance, memory-optimized instances. With a sustained all-core turbo frequency of 3.9 GHz, they are the fastest 4th Generation Intel Xeon Scalable (Sapphire Rapids) instances available in the cloud. R7iz instances can lower total cost of ownership (TCO) and provide performance improvements of up to 20% over previous-generation Z1d instances. They include built-in accelerators such as Intel Advanced Matrix Extensions (Intel AMX), a welcome option for customers with growing AI workload demands.
R7iz instances are well-suited for front-end Electronic Design Automation (EDA), relational database workloads with high per-core licensing prices, and workloads including financial, actuarial, and data analytics simulations due to their high CPU performance and large memory footprint.
IBM and Intel have collaborated extensively to bring open-source software optimizations to Watsonx.data, Presto, and Presto C++. Together with the hardware enhancements, Intel 4th Gen Xeon has produced strong Watsonx.data results.
Based on publicly available 100TB TPC-DS query benchmarks, IBM Watsonx.data with Presto C++ v0.286 and the query optimizer on AWS ROSA, running on 4th Gen Intel processors, demonstrated better price performance than Databricks’ Photon engine, with better query runtime at comparable cost.
Read more on Govindhtech.com
#AWS#PrestoC++#Intel#C++#C++evaluation#R7izinstances#C++engine#Watsonx.data#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
Text
Intel Core Ultra 200S: Intel’s First AI PC Desktop Processor
Intel Core Ultra 200S desktop CPUs
Intel introduced the first enthusiast desktop AI PCs with the release of its Intel Core Ultra 200S desktop CPUs (codenamed Arrow Lake-S). With outstanding AI and content creation performance, an immersive gaming experience, and ground-breaking power reductions across everyday, gaming, and creator applications, the new CPUs are the ultimate enthusiast option.
Together with the Intel Core Ultra 200V mobile processors (codenamed Lunar Lake), the new CPUs will enable the AI PC to reach previously unheard-of scale. PCs with the newest Intel Core Ultra CPUs enable customers to fully benefit from AI through partnerships with top independent software suppliers, extensive ecosystem support, and over 500 optimized AI models.
Beginning October 24, the world’s leading OEM partners will use Intel Core Ultra 200S desktop processors to power the most comprehensive and powerful desktop AI PCs available, extending AI PC capabilities to desktop platforms.
The latest generation of enthusiast desktop processors, led by the Intel Core Ultra 9 285K, comprises five unlocked parts featuring up to eight next-generation Performance-cores (P-cores), the fastest cores available for desktop PCs, and up to sixteen next-generation Efficient-cores (E-cores). They achieve up to 14% more performance in multi-threaded workloads than the previous generation. With a built-in Xe GPU and cutting-edge media support, the new line brings the first NPU-enabled desktop processors for enthusiasts.
With up to 58% less package power in daily applications and up to 165W less system power when gaming, Intel Core Ultra 200S desktop processors offer a historic power consumption decrease made possible by the most recent Intel core and efficiency advancements. In addition to offering up to 6% faster single-threaded and up to 14% faster multi-threaded performance compared to the previous generation, the new processor family combines increased efficiency with higher performance.
Complete AI capabilities, driven by the CPU, GPU, and NPU, give enthusiasts the strong and intelligent performance they require for gaming and content production, all while using less energy. The Intel Core Ultra 200S series processors, which are the first to make the AI PC available to enthusiasts, outperform rival flagship processors by up to 50% in AI-enabled creative applications. The recently released NPU makes it possible to offload AI tasks. Examples include enabling accessibility use cases like face- and gesture-tracking in games with no performance impact, drastically lowering power consumption in AI workloads, and freeing up discrete GPUs to boost gaming frame rates.
Intel’s First AI PC for Enthusiasts: The new Intel Core Ultra 200S series is Intel’s first and best desktop processor family for AI PCs, with up to 36 platform TOPS.
The All-Inclusive Enthusiast Solution: Intel Core Ultra 200S series processors enhance gaming performance by up to 28% when compared to rival flagship processors, and also include outstanding AI and content production capabilities.
New Intel 800 Series Chipset: With up to 24 PCIe 4.0 lanes, up to 8 SATA 3.0 ports, and up to 10 USB 3.2 ports, the new Intel 800 Series chipset expands platform compatibility for Intel Core Ultra 200S series processors, enabling enthusiasts to benefit from the newest storage, connectivity, and other technologies.
Improved Overclocking: Intel Core Ultra 200S series CPUs offer enhanced overclocking with fine-grained controls for peak turbo frequency in P-core and E-core increments of 16.6 MHz. The Intel Extreme Tuning Utility now offers one-click overclocking, and a new memory controller supports fast, modern XMP and CUDIMM DDR5 memory up to 48GB per DIMM, for a total of up to 192GB.
Leading Connectivity: Intel Core Ultra 200S processors provide two integrated Thunderbolt 4 connections, 20 CPU PCIe 5.0 lanes, 4 CPU PCIe 4.0 lanes, Wi-Fi 6E, and Bluetooth 5.3. Through intelligent AP selection and switching, bandwidth analysis and management, and application priority auto-detection, Intel Killer Wi-Fi offers enhanced wireless performance and facilitates fluid, engaging online gaming.
Multi-Engine Security: For demanding AI workloads, the Intel Silicon Security Engine maintains great performance while protecting code integrity and data confidentiality.
Intel Core Ultra 200S Availability
When It’s Available: Beginning October 24, 2024, Intel Core Ultra 200S series CPUs will be available online, at retail, and in OEM partner systems.
Read more on Govindhtech.com
#Intelcoreultra200S#IntelCoreUltra#Intel#desktop#DesktopCPUs#govindhtech#news#TechNews#Technology#technologynews#technologytrends#cpu
Text
Intel And AWS Deepen Chip Manufacturing Partnership In U.S.
US-Based Chip Manufacturing Advances as Intel and AWS Deepen Their Strategic Partnership
Intel will produce custom AI fabric chips on Intel 18A and custom Xeon 6 processors on Intel 3 for AWS in a multi-billion-dollar deal that accelerates Ohio-based chip manufacturing.
AWS and Intel
Intel and Amazon Web Services (AWS) announced a co-investment in custom chip designs. The multi-year, multi-billion-dollar framework covers Intel wafers and products. The move extends the two companies’ long-standing strategic cooperation, helping clients power practically any workload and improve AI applications.
AWS will receive an AI fabric chip from Intel made on the company’s most advanced process node, Intel 18A, as part of the expanded partnership. Expanding on their current collaboration whereby they manufacture Xeon Scalable processors for AWS, Intel will also create a customized Xeon 6 chip on Intel 3.
“We are dedicated to providing our customers with the most advanced and powerful cloud infrastructure available,” said AWS CEO Matt Garman. “Our relationship with Intel dates back to 2006, when we launched the first Amazon EC2 instance on their chips. Now we are working together to co-develop next-generation AI fabric processors on Intel 18A. Our ongoing partnership lets us enable joint customers to handle any workload and unlock new AI capabilities.”
Through its increased cooperation, Intel and AWS reaffirm their dedication to growing Ohio’s AI ecosystem and driving semiconductor manufacturing in the United States. With its aspirations to establish state-of-the-art semiconductor production, Intel is committed to the New Albany region. AWS has invested $10.3 billion in Ohio since 2015; now, it plans to invest an additional $7.8 billion to expand its data center operations in Central Ohio.
Intel and AWS have collaborated for more than 18 years to help organizations develop, build, and deploy mission-critical workloads in the cloud, supporting businesses of all sizes in reducing costs and complexity, enhancing security, accelerating business outcomes, and scaling to meet present and future computing needs. The companies also plan to explore additional designs based on Intel 18A and upcoming process nodes, such as Intel 18AP and Intel 14A, which are expected to be produced in Intel’s Ohio facilities, as well as the migration of current Intel designs to these platforms.
Forward-Looking Statements
This communication includes forward-looking statements about what Intel anticipates from the parties’ co-investment framework, including statements about the framework’s timing, benefits, and effects on the parties’ business and strategy. These forward-looking statements are identified by words such as “expect,” “plan,” “intend,” and “will,” and by similar terms and their variations.
Actual results may differ materially from those expressed or implied in these forward-looking statements.
They are based on management’s estimates as of the date they were originally made and involve risks and uncertainties, many of which are outside of Intel’s control.
Among these risks and uncertainties are the possibility that the transactions covered by the framework won’t be executed at all or in a timely manner;
Failure to successfully develop, produce, or market goods under the framework;
Failure to reap anticipated benefits of the framework, notably financial ones;
Delays, disruptions, difficulties, or increased costs in constructing or expanding Intel fabs, whether due to events within or outside of Intel’s control;
The complexities and uncertainties in developing and implementing new semiconductor products and manufacturing process technologies;
Implementing new business strategies and investing in new businesses and technologies;
Litigation or disputes related to the framework or otherwise;
Unanticipated costs may be incurred;
Potential adverse reactions or changes to commercial relationships including those with suppliers and customers resulting from the transaction’s announcement;
Macroeconomic factors, such as the overall state of the semiconductor industry’s economy;
Regulatory limitations, and the effect of competing products and pricing;
International conflict and other risks and uncertainties described in Intel’s Form 10-K and other filings with the SEC.
Because of these risks and uncertainties, Intel warns readers not to rely unduly on these forward-looking statements. Readers are advised to carefully review and weigh the disclosures in the documents Intel files with the SEC from time to time, which identify risks and uncertainties that could affect its business.
Read more on Govindhtech.com
#Intel#AWS#Intel18A#AmazonWebServices#AIapplications#AI#AmazonEC2instance#IntelandAWS#AIecosystem#news#technews#technology#technologynews#technologytrends#govindhtech
Text
CORELIQUID I360: MAG CORELIQUID I Series Liquid Cooler
CORELIQUID I360
MSI has introduced the MAG CORELIQUID I series AIO liquid cooler. To thank MSI fans for their support and passion for gaming and performance, MSI is giving fans the chance to win a free MAG CORELIQUID I360 by showcasing their gaming skills.
Effortless Assembly Endless Possibilities
The MSI ARSENAL GAMING (MAG) series gives gamers the edge to win the virtual battlefield. The MSI MAG series provides unstoppable defense in any gaming circumstance, inspired by military-grade equipment’s endurance and dependability.
But MAG is more than simply performance; it’s a design language that exudes toughness and grit, capturing the spirit of the MAG aesthetic. The MAG series is here to stand with you and build the strongest defense for gamers who are determined to rule their virtual battlefields.
MAG CORELIQUID
Concept of Design
The powerful, geometric cuts of diamonds serve as the primary inspiration for MSI’s new MAG CORELIQUID I series liquid cooler. Its design features a dual-sided transparent infinity mirror to complement its many angles, giving DIY builders the opportunity to highlight the infinity mirror’s craftsmanship from multiple perspectives, adding visual appeal and a multi-dimensional experience.
Horizontal Double-Sided Mirror
With its customizable dual-sided infinity mirror design, the cooler lets gamers show off their hand-built PC in a distinctive way and express their personality. By combining MSI’s control software with ARGB GEN2 lighting, players can customize their setup and create a unique aesthetic.
Sense-On Accessories
Gamers can easily install the cooler on AMD and Intel platforms thanks to the well-defined parts layout. In addition, a QR code on the packaging gives fast access to simple installation instructions, guaranteeing that you can set up your equipment without difficulty. The installation accessory pack makes it simple to store leftover parts, which helps avoid common problems such as missing or dropped screws after installation.
The pump is not only highly effective and low-noise; it is also positioned differently than conventional pumps on the radiator or tubing, minimizing overall vibration and producing the smoothest possible water flow. This design improves cooling efficiency while adding style to your gaming setup.
Superior Performance Pump
The MAG CORELIQUID I series water pump is housed in the cold head and has a precisely designed motor for improved water cooling performance.
LDB Wheel
The MAG CORELIQUID I360 fans’ longevity is increased with LDB technology, which lowers wear and friction. Additionally, it reduces noise, boosts efficiency for better heat dissipation and circulation, and guarantees consistent performance in a range of scenarios.
EVAPORATION-PROOF SUPPORT TUBING
The nylon-braided EPDM water cooling tubes used in the MAG CORELIQUID I series provide corrosion resistance, tolerance of high and low temperatures, great flexibility, and anti-aging qualities. This design not only minimizes coolant evaporation but also simplifies installation, since players need not worry about bent or misaligned tubes.
MSI MAG CORELIQUID
LIGHT UP YOUR COMPUTER
Use MSI Mystic Light, which offers 16.8 million colors and elegant LED effects, to add vivid RGB lighting to your build. With a single piece of software, Mystic Light gives you total control over your PC’s RGB illumination.
Special Campaign: Test Your Game Skills!
Enter the lucky draw by playing the exclusive game and posting your score on Facebook or Instagram with the MSI hashtag between September 1 and October 15, 2024. Winners will be announced by October 31, 2024.
MAG CORELIQUID I Series: Easy Assembly, Endless Options
The dual-sided clear infinity mirror of MSI’s newest CORELIQUID I series liquid cooler is inspired by the geometric cuts of diamonds. DIY enthusiasts can show off the cooler’s artistry from numerous perspectives, making it look great in panoramic PC cases.
MSI has added DIY-friendly features to the MAG CORELIQUID I series to improve assembly. The UNI bracket, which supports both Intel and AMD sockets, simplifies the PC build for gamers. Where users once had to manage six cables, easier cable organization and a cable cover mean they now only need to manage one, keeping everything tidy and making assembly more enjoyable. Pre-installed fans further reduce assembly time.
With the MAG CORELIQUID I series, users get both top performance and an enjoyable assembly experience. It comes in black and white and in 240 mm and 360 mm radiator sizes.
MAG CORELIQUID I360 / WHITE
Dual-sided Infinite Mirror: Highlights the quality and distinctiveness of MSI’s products by showcasing them from various perspectives.
Fan Cable Management Cover: This improves appearance and neatens up the gaming setup by hiding the pre-installed fan connections.
LDB Bearing: Maintains a consistent fan operation by balancing longevity and noise, being dust-proof, and adapting to different situations.
Integrated AMD and Intel Installation Bracket: Reduces resource waste, protects the environment, and makes it simple for gamers to change their hardware.
Read more on govindhtech.com
#CORELIQUID#magcoreliquid#liquidcooler#coreliquidi360#msimagseries#AMD#Intel#rgblighting#pcgaming#pc#news#TechNews#technology#technologynews#technologytrends#govindhtech
Text
Utilizing llama.cpp, LLMs can be executed on Intel GPUs
The open-source llama.cpp project is a lightweight LLM framework that is rapidly gaining popularity. Thanks to its performance and customizability, developers, researchers, and enthusiasts have formed a strong community around the project. Since its launch, it has attracted over 600 contributors, 52,000 GitHub stars, 1,500 releases, and 7,400 forks. Recent code merges have extended llama.cpp’s hardware support to Intel GPUs found in both server and consumer products, alongside existing support for GPUs from other vendors and for x86 and Arm CPUs.
Georgi Gerganov created the original implementation. The project is primarily educational in nature and serves as the main testing ground for new features being developed for ggml, the underlying machine learning tensor library. By enabling inference on a wider range of devices, Intel is making AI accessible to more users. llama.cpp is fast because it is written in C/C++ and has several other appealing qualities:
16-bit float support
Integer quantization support (4-bit, 5-bit, 8-bit, etc.)
No third-party dependencies
Zero memory allocations at runtime
Intel GPU SYCL Backend
ggml offers a number of backends to support and optimize for various hardware. Since oneAPI supports GPUs from multiple vendors, Intel decided to build the SYCL backend using SYCL, its direct programming language, and oneMKL, its high-performance BLAS library. SYCL is a programming model designed to increase productivity on hardware accelerators; it is a single-source, embedded, domain-focused language built entirely on C++17.
The SYCL backend works with all Intel GPUs. Intel has verified it with:
Intel Data Center GPU Max and Flex Series
Intel Arc discrete GPUs
The Intel Arc GPU integrated into Intel Core Ultra CPUs
The iGPUs in 11th- through 13th-Gen Intel Core CPUs
Because llama.cpp now supports Intel GPUs, millions of consumer devices can run Llama inference. The SYCL backend performs noticeably better on Intel GPUs than the OpenCL (CLBlast) backend, and it supports a growing number of devices, including CPUs and future processors with AI accelerators. For details on using the SYCL backend, see the llama.cpp tutorial.
Use the SYCL Backend to Run an LLM on an Intel GPU
llama.cpp includes a comprehensive guide for SYCL. Any Intel GPU that supports SYCL and oneAPI can run it. Server and cloud users can use Intel Data Center GPU Max and Flex Series GPUs. Client users can try it on an Intel Arc GPU or on the iGPU in an Intel Core CPU. Intel has tested iGPUs from the 11th-Gen Core onward; older iGPUs work but perform poorly.
Memory is the only restriction. The iGPU uses shared memory on the host, while a dGPU uses its own dedicated memory. For llama2-7b-Q4 models, Intel advises using an iGPU with 80+ EUs (11th-Gen Core and above) and more than 4.5 GB of shared memory (total host memory of 16 GB or higher, of which half can be assigned to the iGPU).
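As a rough sanity check on that 4.5 GB guidance, a back-of-the-envelope estimate (an approximation, not llama.cpp’s exact accounting) for a 4-bit-quantized 7B model looks like this:

```shell
# Back-of-the-envelope memory estimate for a Q4-quantized 7B model.
# Q4_0 stores roughly 4.5 bits per weight once per-block scales are included.
PARAMS=7000000000
BITS_PER_WEIGHT_X10=45                    # 4.5 bits, scaled by 10 to stay in integers
MODEL_BYTES=$(( PARAMS * BITS_PER_WEIGHT_X10 / 10 / 8 ))
MODEL_MIB=$(( MODEL_BYTES / 1024 / 1024 ))
echo "model weights: ~${MODEL_MIB} MiB"   # KV cache and activations add more on top
```

The weights alone come to roughly 3.7 GiB, so once the KV cache and activations are added, the greater-than-4.5 GB shared-memory guidance is consistent with this estimate.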
Install the Intel GPU Driver
Windows (WSL2) and Linux are supported. Intel recommends Ubuntu 22.04 for Linux, which was the version used for testing and development.
Linux:
sudo usermod -aG render username
sudo usermod -aG video username
sudo apt install clinfo
sudo clinfo -l
Output (example):
Platform #0: Intel(R) OpenCL Graphics -- Device #0: Intel(R) Arc(TM) A770 Graphics
or:
Platform #0: Intel(R) OpenCL HD Graphics -- Device #0: Intel(R) Iris(R) Xe Graphics [0x9a49]
Enable the oneAPI Runtime
First, install the Intel oneAPI Base Toolkit to get the SYCL compiler and oneMKL. Next, enable the oneAPI runtime:
Linux: source /opt/intel/oneapi/setvars.sh
Windows: "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64
Run sycl-ls to confirm that there are one or more Level Zero devices. Please confirm that at least one GPU is present, like [ext_oneapi_level_zero:gpu:0].
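That check can also be scripted. The sketch below greps for Level Zero GPU entries; it runs against a sample captured line rather than live hardware, so the SAMPLE string is a stand-in for real `sycl-ls` output:

```shell
# Sketch: verify that at least one Level Zero GPU appears in sycl-ls output.
# SAMPLE stands in for `sycl-ls` output here; on a real system use: SAMPLE=$(sycl-ls)
SAMPLE="[ext_oneapi_level_zero:gpu:0] Intel(R) Arc(TM) A770 Graphics"
if printf '%s\n' "$SAMPLE" | grep -q 'level_zero:gpu'; then
  echo "Level Zero GPU found"
else
  echo "no Level Zero GPU: check the GPU driver and oneAPI setup" >&2
fi
```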
Build with one click:
Linux: ./examples/sycl/build.sh
Windows: examples\sycl\win-build-sycl.bat
Note that the scripts above include the command to enable the oneAPI runtime.
Run an Example with One Click
Download llama-2-7b.Q4_0.gguf and save it to the models folder:
Linux: ./examples/sycl/run-llama2.sh
Windows: examples\sycl\win-run-llama2.bat
Note that the scripts above include the command to enable the oneAPI runtime. If the ID of your Level Zero GPU is not 0, please change the device ID in the script. To list the device ID:
Linux: ./build/bin/ls-sycl-device or ./build/bin/main
Windows: build\bin\ls-sycl-device.exe or build\bin\main.exe
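Beyond the one-click scripts, the model can also be run directly. The sketch below assembles a typical invocation; the flag names (`-ngl` for GPU offload layers) and the `GGML_SYCL_DEVICE` environment variable reflect common llama.cpp usage at the time of writing and may differ between releases, so treat this as a sketch and check `./build/bin/main --help` on your build:

```shell
# Sketch: assemble a llama.cpp invocation for a specific SYCL device.
# Flag names are assumptions based on typical llama.cpp builds; verify with --help.
DEVICE_ID=0                                # taken from ls-sycl-device output
MODEL=models/llama-2-7b.Q4_0.gguf
PROMPT="Building a website can be done in 10 simple steps:"
CMD="GGML_SYCL_DEVICE=${DEVICE_ID} ./build/bin/main -m ${MODEL} -p \"${PROMPT}\" -n 128 -ngl 33"
echo "$CMD"                                # print the command instead of running it
```

Printing the command first makes it easy to confirm the device ID and model path before launching a long-running inference job.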
Synopsis
The SYCL backend in llama.cpp gives LLM developers and users access to all Intel GPUs. Check whether your laptop, gaming PC, or cloud virtual machine has an Intel iGPU, an Intel Arc GPU, or an Intel Data Center GPU Max or Flex Series GPU. If so, llama.cpp’s LLM features on Intel GPUs are yours to enjoy. Intel encourages developers to experiment with and contribute to the backend, adding new features and optimizing SYCL for Intel GPUs. The oneAPI programming model is also a worthwhile skill to learn for cross-platform development.
Read more on Govindhtech.com
#intel#oneapi#onemkl#inteloneapi#llms#llamacpp#llama#intelgpu#govindhtech#cpu#sycl#news#technews#technology#technologynews#technoloy#ai#technologytrends
Text
Intel Liftoff : 5 Design Thinking Stages for AI Startups 2024
Intel Liftoff
Intel Liftoff African Hackathon: 5 Steps of Design Thinking for AI Startups 2024. Any AI project undertaken without design thinking principles is missing a crucial step. Design thinking centers on understanding user needs, resulting in solutions that are natural, easy to use, and consistent with user expectations.
During the most recent Intel Liftoff Hackathon for African AI Startups 2024, Andrew Aryee of Innov8 Hub led a thought-provoking workshop, “Design Thinking Approach for AI Startups 2024.” In this post, we explore the five main lessons that emerged from that discussion and lay the groundwork for an effective design thinking strategy for AI startups.
Empathy and User-Centric Design
A human-centered approach to AI ensures that products are not only technologically proficient but also impactful and relevant, solving actual problems in meaningful ways. Through deliberate engagement with consumers, entrepreneurs gain insight into their everyday struggles and wants, leading to novel discoveries that drive innovation.
This means you can rely on real human insights rather than just statistics and computation. This holistic strategy ensures that AI applications work for consumers both technically and emotionally.
Problem Definition and Assumption Testing
Fixing a problem requires identifying it accurately. Test and challenge your assumptions often, especially early on, since unexamined assumptions frequently lead to misconceptions and misdirection. A crucial step in this approach is creating the required user personas and figuring out how each one would benefit from your service. Throughout the development phase, regular user input ensures that your understanding of the problem stays aligned with users’ real needs and experiences.
Efficient Idea Prioritization and Brainstorming
Generate a wide range of ideas, no matter how strange they may seem, to ignite creativity. Remember that creativity can produce inventive ideas that would never arise from a more conservative approach. Use a variety of brainstorming strategies, together with the relevant tools at your disposal, to encourage creativity.
Workshops are also a great way to promote an environment where everyone feels safe contributing. Once you have a bank of ideas, it’s simpler to prioritize them and make sure the best ones are explored.
User Testing and Prototyping
The ideal early-stage course of action for an AI solution is to move rapidly to a low-fidelity prototype: a basic, working version of your solution. This prototype doesn’t have to be flawless or fully functional; its main goal is to communicate the core idea and essential elements of your project. The next stage is getting it in front of actual users as quickly as feasible.
Early customer input offers priceless information that can direct the development process and confirm that your solution is headed in the right direction. After a thorough testing phase, you move to an iterative refinement process: prioritize, make the necessary adjustments, and give users another chance to test. Repeat this procedure as often as necessary to ensure you’re offering the best possible solution.
Design Process Iteration
From the initial ideation of a solution to the commercialization of a fully realized product, the process is nonlinear. An iterative strategy divides it into smaller, more manageable stages. By cycling through the four phases above, you can continuously improve your solution.
This strategy also entails adapting to user needs. With continuous improvement, you can quickly adjust to changing market conditions and consumer input.
This raises the possibility of creating a product that is not only novel but also extremely pertinent and easy to use.
Setting user-centric design thinking as a top priority not only helps AI firms stand out, but it also fosters creativity that addresses pressing problems in society.
It keeps the user at the center of the process.
Benefits of the Programme for AI Startups 2024
AI Startup
With Intel Liftoff, startups get the computing power they need to address their most pressing technical problems. The programme also acts as a launchpad for partnerships, allowing entrepreneurs to improve customer service and strengthen each other’s offerings.
Superior Technical Knowledge and Instruction
Access to the programme’s Slack platform
Free online workshops and courses
Engineering advice and assistance
Reduced prices for certification and training
Invitations to expert forums and events
Advanced Technology and Research Resources
Free cloud credits from cloud service providers
Credits for the Intel Developer Cloud
Access to Intel developer tools, which offer numerous technological advantages
Access to Intel’s software portfolio
Co-Marketing and Networking Opportunities
Boost consumer awareness through Intel Venture’s marketing channels and exhibits at trade shows
Introductions across the Intel ecosystem
Connections with Intel Capital and the worldwide venture capital (VC) network
Read more on Govindhtech.com
#govindhtech#technology#technews#technologytrends#technologynews#news#AIstartup#AIStartups#intelliftoff#intel