#AIPCs
homepageexplorer · 3 months ago
DELL US Home Page History on January 9, 2025!
govindhtech · 5 months ago
Dell AI PCs: A Gateway To AI For Life Sciences Organizations
AI in the Life Sciences: A Practical, PC-Based Way to Get Started.
For life sciences companies wishing to experiment with AI before making a full commitment, Dell AI PCs are a perfect fit. They offer a cost-effective entry point into the vast field of artificial intelligence, particularly for life sciences clients searching for an affordable way to build intricate workflows.
The Dell AI PCs, GPU-enhanced servers, and cutting-edge storage solutions are essential to the AI revolution. If you approach the process strategically, it may be surprisingly easy to begin your AI journey.
Navigating the Unmarked Path of AI Transformation
The lack of a clear path is both an exciting and a difficult part of the AI transition in the life sciences. As the field learns more about the actual effects of generative and extractive AI models on crucial domains like drug development, clinical trials, and industrial processes, it continues to realize their enormous promise.
It is evident from discussions with both up-and-coming entrepreneurs and seasoned industry titans in the global life sciences sector that there are a variety of approaches to launching novel treatments, each with a distinct implementation strategy.
A well-thought-out AI strategy may help any firm, especially if it prioritizes improving operational efficiency, addressing regulatory expectations from organizations like the FDA and EMA, and speeding up discovery.
Cataloguing possible use cases and setting clear priorities are usually the initial steps. But one client reported that, just two months after appointing a new head of AI, the organization faced more than 200 "prioritized" use cases.
This poses a serious problem when the CFO asks about the return on investment (ROI) of each one. The answer must show observable gains in operational effectiveness, distinct revenue streams, or improved compliance clarity. Large-scale AI deployment requires a pragmatic strategy for evaluating AI models and confirming their worth, so that the investment produces quantifiable returns.
The Dell AI PC: Your Strategic Advantage
Enter the Dell AI PC: the right option for businesses that want to experiment with AI before committing to hundreds of use cases. AI PCs and robust open-source software let teams in any department investigate and refine use cases without incurring large costs.
Beginning with a limited number of Dell AI PCs and assigning skilled staff to these pilots brings clarity to each potential AI project. Trials on smaller datasets provide a low-risk introduction to artificial intelligence and help predict likely results. This method offers insight into what works while ensuring that investments are focused on the most promising paths.
Building a Sustainable AI Framework
Internally classifying and prioritizing use cases is essential when starting this AI journey. Pay close attention to data types, data availability, preferences for producing versus consuming models, and choices about selling or retaining results. Although IT departments may start the process, recruiting IT-savvy people from other departments to develop AI models can be very helpful, since they have first-hand experience with the difficulties and data complexities involved.
As a team, it is possible to rapidly discover areas worth more effort by regularly assessing and prioritizing use case development, turning conjecture into assurance. The team can now confidently deliver data-driven findings that demonstrate the observable advantages of your AI activities when the CFO asks about ROI.
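The assess-and-prioritize loop described above can be sketched in a few lines. This is a minimal, illustrative example with entirely hypothetical use cases and dollar figures; the `UseCase` fields and the simple ROI formula are assumptions, not anything prescribed by Dell.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    annual_benefit: float  # estimated yearly gain (efficiency, revenue, compliance), in dollars
    annual_cost: float     # estimated yearly cost (hardware, staff, licenses), in dollars

def roi(uc: UseCase) -> float:
    """Simple ROI: (benefit - cost) / cost."""
    return (uc.annual_benefit - uc.annual_cost) / uc.annual_cost

def prioritize(use_cases: list, top_n: int = 3) -> list:
    """Rank candidate use cases by estimated ROI, highest first."""
    ranked = sorted(use_cases, key=roi, reverse=True)
    return [uc.name for uc in ranked[:top_n]]

# Hypothetical pilot candidates for a life sciences team
candidates = [
    UseCase("Literature triage assistant", 250_000, 60_000),
    UseCase("Trial-site document QC", 400_000, 250_000),
    UseCase("Batch-record anomaly detection", 900_000, 300_000),
]
print(prioritize(candidates, top_n=2))
# → ['Literature triage assistant', 'Batch-record anomaly detection']
```

Even a crude scoring pass like this turns a pile of 200 "prioritized" use cases into a defensible shortlist to show the CFO.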
The Rational Path to AI Investment
Investing in AI is essential, but these choices should be based on location, cost, and the final outcomes of your research. Organizations may make logical decisions about data center or hyperscaler hosting, resource allocation, and data ownership by using AI PCs for early development.
This goes beyond a theoretical framework. Northwestern Medicine's success story shows that the strategy works: the organization has effectively used AI technology to improve patient care and expedite intricate operations, illustrating the practical advantages of deploying AI strategically.
Read more on Govindhtech.com
phonemantra-blog · 10 months ago
Get ready for a revolution in PC performance and AI capabilities. At Computex 2024, AMD unveiled its groundbreaking Zen 5 architecture, powering the next generation of Ryzen processors. This exciting lineup includes the all-new Ryzen 9000 series for desktop PCs and the 3rd generation Ryzen AI processors for ultrabooks.
A New Era of Desktop Processing: The Ryzen 9000 Series
AMD has taken the crown for the most advanced desktop processors with the Ryzen 9000 series. Built on the AM5 platform, these processors boast cutting-edge features like PCIe 5.0 and DDR5 support. They also deliver a significant 16% improvement in instructions per clock (IPC) compared to their Zen 4 predecessors. Here's a closer look at the Ryzen 9000 family:
Flagship performance: The Ryzen 9 9950X reigns supreme with 16 cores, 32 threads, and a blazing-fast clock speed reaching up to 5.7 GHz. This powerhouse surpasses the competition in graphics bandwidth and AI acceleration, translating to impressive gains in creative applications like Blender (up to 56% faster) and high frame rates in demanding games (up to 23% improvement).
Multiple options: The Ryzen 9000 series caters to diverse needs with the Ryzen 9 9900X, Ryzen 7 9700X, and Ryzen 5 9600X processors. All models boast impressive core counts, thread counts, and clock speeds, ensuring smooth performance for gamers, content creators, and professionals alike.
Availability: Gear up for an upgrade! The Ryzen 9000 series is slated for release in July 2024.
Ryzen AI 300: Unleashing On-Device AI Power for Next-Gen Laptops
The future of AI-powered computing is here with the Ryzen AI 300 series. Designed for ultrabooks, these processors integrate a powerful dedicated Neural Processing Unit (NPU) capable of delivering a staggering 50 trillion operations per second (TOPS).
This translates to impressive on-device AI experiences, including:
Real-time translation: Break down language barriers effortlessly with real-time translation powered by the NPU.
Live captioning: Never miss a beat with live captioning that keeps you in the loop during meetings or lectures.
Co-creation: Unleash your creativity with AI-assisted tools that enhance your workflow.
The Ryzen AI 300 series comes in two variants:
Ryzen AI 9 HX 370: This flagship model boasts the full power of the NPU with 50 TOPS and 16 compute units, ideal for demanding AI workloads.
Ryzen AI 9 365: Offering exceptional value, this processor delivers 40 TOPS of AI performance with 10 CPU cores, catering to a wide range of AI applications.
Look forward to experiencing the power of Ryzen AI 300 in upcoming Copilot+ PCs and AI+ PCs starting July 2024.
Frequently Asked Questions
Q: When will the Ryzen 9000 series and Ryzen AI 300 processors be available?
A: Both processor lines are expected to hit the market in July 2024.
Q: What are the key benefits of the Ryzen 9000 series?
A: The Ryzen 9000 series offers significant advantages, including:
Increased performance with a 16% IPC improvement over Zen 4 processors.
Support for cutting-edge technologies like PCIe 5.0 and DDR5.
A wide range of processor options for various needs and budgets.
Q: What kind of AI experiences can I expect with the Ryzen AI 300 series?
A: The Ryzen AI 300 series unlocks a new level of on-device AI capabilities, including:
Real-time language translation.
Live captioning for videos and meetings.
AI-powered co-creation tools for enhanced creativity.
Q: Which laptops will feature the Ryzen AI 300 processors?
A: Look for the Ryzen AI 300 series in upcoming Copilot+ PCs and AI+ PCs from various manufacturers.
govindhtech · 7 months ago
How The AI Inferencing Circuitry Powers Intelligent Machines
AI Inferencing
Expand the capabilities of PCs and pave the way for future AI applications that will be much more advanced.
AI PCs
The debut of "AI PCs" has produced a deluge of news and marketing over the last several months. The enthusiasm and buzz around these new AI PCs is undeniable. Finding clear, actionable advice on how to fully capitalize on their advantages as a customer, however, can feel like searching for a needle in a haystack. It's time to close this knowledge gap and give people the tools they need to fully use this innovative technology.
All-inclusive Guide
Dell Technologies' goal is to offer a thorough guide that closes the knowledge gap around AI PCs, the hardware that accelerates AI, such as GPUs and neural processing units (NPUs), and the developing software ecosystem that makes use of these devices.
All PCs can, in fact, process AI features, but CPUs without dedicated AI circuitry are less efficient and slower at them than the newest chips that include specialized AI processing circuits. Those circuits can run difficult AI tasks more quickly and with less energy. This PC technology breakthrough opens the door to advances in AI applications.
In addition, independent software vendors (ISVs) are producing cutting-edge GenAI-powered software and rapidly integrating AI-based features and functionality into existing software.
To maximize the benefits of this new hardware and software, it's critical for customers to understand whether new software features are processed locally on the PC or in the cloud. With this knowledge, companies can be confident they're getting the most out of their technology investments.
Quick AI Functions
Microsoft Copilot is a clear example. Currently, Microsoft Copilot's AI capabilities are processed in the Microsoft cloud, enabling any PC to benefit from its time- and productivity-saving features. In contrast, Microsoft is providing Copilot+ with distinctive, incremental AI capabilities that can only be processed locally on a Copilot+ AI PC, which is characterized, among other things, by a more potent NPU. More on that later.
Remember that even before AI PCs with NPUs were introduced, ISVs were chasing locally accelerated AI capabilities. In 2018, NVIDIA released the RTX GPU line, which included Tensor Cores, specialized AI acceleration hardware. As NVIDIA RTX GPUs gained popularity in these areas, graphics-specific ISV apps, such as games, professional video, 3D animation, CAD, and design software, started experimenting with incorporating GPU-processed AI capabilities.
AI workstations with RTX GPUs quickly became the perfect sandbox for data scientists getting started with machine learning and GenAI applications. They could experiment with private data behind the corporate firewall and enjoy better cost predictability than metered cloud compute environments, where the meter is always running.
Processing AI
All of these GPU-powered AI use cases prioritize speed over energy economy, and they often involve workstation users with professional NVIDIA RTX GPUs. NPUs bring something new to the market: energy-efficient AI processing.
For customers to benefit, ISVs must write the laborious code required to support any or all of the processing domains (NPU, GPU, or cloud). Certain functions may only work with the NPU, while others might only work with the GPU, and others might only be available in the cloud. Getting the most out of your AI processing hardware depends on understanding the ISV programs you use every day.
AI acceleration hardware is characterized by a few key attributes that affect processing speed, workflow compatibility, and energy efficiency.
Neural Processing Unit (NPU)
Now let's talk about NPUs. Relatively new to the AI processing industry, an NPU typically appears as a block of circuitry within a PC CPU; the most recent CPUs from Qualcomm and Intel include integrated NPUs. This circuitry accelerates AI inferencing, the act of running AI features. Integer arithmetic is at the core of AI inferencing, and NPUs excel at exactly that kind of math.
Because they can run inference with very little energy, NPUs are perfect for AI on laptops, where battery life is crucial for portability. While NPUs are usually found inside the newest generation of CPUs, discrete NPUs that serve the same purpose of accelerating AI inferencing are also reaching the market as M.2 or PCIe add-in cards.
Because NPUs have only recently reached the market, ISVs are just starting to deliver software updates or versions with AI capabilities that use them. NPUs already enable intriguing new possibilities, and the number of NPU-aware ISV features and applications is expected to grow quickly.
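The integer arithmetic that NPUs accelerate can be illustrated with the standard affine INT8 quantization scheme used in inference runtimes. This is a minimal pure-Python sketch, not any vendor's implementation; the example weights and the symmetric scale are illustrative assumptions.

```python
def quantize_int8(values, scale, zero_point):
    """Affine quantization: map float values to int8 codes in [-128, 127]."""
    q = [round(v / scale) + zero_point for v in values]
    return [max(-128, min(127, x)) for x in q]

def dequantize_int8(q_values, scale, zero_point):
    """Recover approximate floats from int8 codes."""
    return [(q - zero_point) * scale for q in q_values]

# Illustrative weights; symmetric scale maps the largest magnitude (2.0) to 127
weights = [0.40, -1.25, 0.0, 2.0]
scale, zero_point = 2.0 / 127, 0
q = quantize_int8(weights, scale, zero_point)        # q == [25, -79, 0, 127]
approx = dequantize_int8(q, scale, zero_point)
```

Once the model is in this form, the inner loops of inference are integer multiply-accumulates, which is precisely the work an NPU's hardware is built for; the small rounding error (bounded by the scale) is the price paid for the speed and energy savings.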
Integrated and Discrete NVIDIA GPUs
NVIDIA RTX GPUs are available as PCIe add-in cards for PCs and workstations or as discrete chips in laptops. They lack the NPU's energy economy, but they provide a wider spectrum of AI performance and broader use-case coverage. Metrics comparing the AI performance of NPUs and GPUs appear later in this piece. Because of their variety, and because multiple cards can be added to desktop, tower, and rack workstations, GPUs offer more scalable AI processing performance for sophisticated workflows than NPUs do.
Another advantage of NVIDIA RTX GPUs is that, in addition to excelling at integer arithmetic and inferencing, they can be used to develop and train GenAI large language models (LLMs). This stems from their acceleration of floating-point computation and their broad support in the tool chains and libraries commonly used by data scientists and AI software developers.
Bringing It to Life for Your Company
AI performance is often quantified in trillions of operations per second, or TOPS. TOPS measures the theoretical peak performance of AI inferencing, based on the processor's design and frequency. It is important to distinguish this metric from TFLOPS, which measures a computer system's capacity to execute one trillion floating-point computations per second.
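Peak TOPS follows directly from the processor's design and frequency, as described above. A common convention counts each multiply-accumulate (MAC) as two operations; the sketch below applies that convention to an entirely hypothetical NPU, so the specific MAC count and clock are illustrative assumptions, not any real product's specification.

```python
def peak_tops(macs_per_cycle: int, frequency_ghz: float) -> float:
    """Theoretical peak TOPS, counting each MAC as two ops (multiply + add)."""
    ops_per_second = 2 * macs_per_cycle * frequency_ghz * 1e9
    return ops_per_second / 1e12

# Hypothetical NPU: 12,800 INT8 MACs per cycle at 1.4 GHz
print(peak_tops(12_800, 1.4))  # ≈ 35.84 TOPS
```

Real applications rarely reach this ceiling, which is why the article stresses that TOPS is a relative indicator rather than a guarantee of delivered performance.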
[Chart: the broad range of AI inferencing scalability (TOPS) across Dell's AI workstations and PCs, showing how adding RTX GPUs to desktop and tower AI workstations extends inferencing capability much further; a light blue overlay marks the workstation models best suited for AI development and training.] Remember that TOPS is a relative performance indicator; actual performance depends on the particular program running in that environment.
To fully use the hardware capacity, the particular application or AI feature must also support the relevant processing domain. In systems with a CPU, NPU, and RTX GPU for optimal performance, it could be feasible for a single application to route AI processing across all available AI hardware as ISVs continue to enhance their apps.
VRAM
TOPS is not the only crucial factor in AI performance. Memory also matters, particularly for GenAI LLMs. The amount of memory available to LLMs varies greatly depending on how they are run. Integrated NPUs, such as those in Qualcomm Snapdragon and Intel Core Ultra CPUs, use a portion of system RAM. With that in mind, it makes sense to get the most RAM you can afford for an AI PC, since it helps with general computing, graphics work, and multitasking between apps in addition to the AI processing that is the subject of this article.
For both mobile and stationary AI workstations, NVIDIA RTX GPUs carry dedicated memory, with TOPS performance and memory amounts varying by model. With VRAM capacities of up to 48GB, as on the RTX 6000 Ada, and the ability to accommodate four such GPUs in the Precision 7960 Tower for 192GB of total VRAM, AI workstations can scale to the most advanced inferencing workflows.
Additionally, these workstations offer a high-performance sandbox for AI model development and training for customers who might not be ready for the even greater scalability of the Dell PowerEdge GPU AI server range. Just as the NPU shares system RAM, RTX GPU VRAM is shared across GPU-accelerated compute, graphics, and AI processing, and multitasking applications strain it further. If you often multitask with programs that take advantage of GPU acceleration, aim to purchase AI workstations with the largest GPU (and VRAM) your budget allows.
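A rough rule of thumb makes the VRAM guidance above concrete: an LLM's weights need roughly (parameter count × bits per weight ÷ 8) bytes, plus headroom for activations and the KV cache. The sketch below is a coarse estimator with an assumed 20% overhead factor; it is a planning aid, not a precise sizing tool.

```python
def model_memory_gb(params_billion: float, bits_per_weight: int,
                    overhead: float = 1.2) -> float:
    """Rough memory needed to hold model weights, with ~20% assumed
    overhead for activations and KV cache (a coarse rule of thumb)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

# A 7B-parameter model at different precisions:
for bits in (16, 8, 4):
    print(bits, round(model_memory_gb(7, bits), 1))
# 16 → 16.8 GB, 8 → 8.4 GB, 4 → 4.2 GB
```

This is why quantization and VRAM capacity go hand in hand: a 7B model that overflows an 8GB GPU at FP16 fits comfortably once quantized to 4 bits, while 48GB cards leave room for much larger models and multitasking.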
A little knowledge goes a long way toward unlocking the potential of AI workstations and PCs. AI features now offer more than time-saving efficiency and the capacity to create a wide range of creative material, and they are quickly spreading across all software applications, whether in-house custom-developed solutions or commercial packaged software. Optimizing the configuration of your AI workstations and PCs helps you get the most out of these experiences.
Read more on Govindhtech.com
govindhtech · 10 months ago
Intel Neural Compressor Joins ONNX in Open Source for AI
Intel Neural Compressor
Intel Neural Compressor aims to provide popular model compression techniques, such as quantization, distillation, pruning (sparsity), and neural architecture search, on mainstream frameworks like TensorFlow, PyTorch, ONNX Runtime, and MXNet, as well as Intel extensions such as Intel Extension for PyTorch and Intel Extension for TensorFlow. Specifically, the tool offers the following main functions, common examples, and open collaborations:
Substantial testing on a wide range of Intel hardware, including Intel Xeon Scalable processors, Intel Xeon CPU Max Series, Intel Data Center GPU Flex Series, and Intel Data Center GPU Max Series; limited testing for AMD, Arm, and NVIDIA GPUs via ONNX Runtime.
Validation of well-known LLMs such as Llama 2, Falcon, GPT-J, Bloom, and OPT, plus more than 10,000 broad models such as ResNet-50, BERT-Large, and Stable Diffusion from popular model hubs like Hugging Face, Torch Vision, and ONNX Model Zoo, using zero-code optimization solutions, automatic accuracy-driven quantization strategies, and Neural Coder.
Collaboration with open AI ecosystems such as Hugging Face, PyTorch, ONNX, ONNX Runtime, and Lightning AI; cloud marketplaces such as Google Cloud Platform, Amazon Web Services, and Azure; and software platforms such as Alibaba Cloud, Tencent TACO, and Microsoft Olive.
AI models
AI-enhanced apps will be the standard in the era of the AI PC, and developers are gradually substituting AI models for conventional code fragments. This rapidly developing trend is opening up new and fascinating user experiences, improving productivity, giving creators new tools, and facilitating fluid and organic collaboration experiences.
With the combination of CPU, GPU (Graphics Processing Unit), and NPU (Neural Processing Unit), AI PCs offer the fundamental computing blocks for a variety of AI experiences and for meeting the computational demands of these models. But to give users the best possible experience across all of these compute engines, developers must compress these AI models, which is a difficult task. To address this issue, Intel is pleased to announce that it has embraced the open-source community and released the Neural Compressor tool under the ONNX project.
ONNX
Open Neural Network Exchange (ONNX) is an open ecosystem that gives AI developers the freedom to select the right tools as their projects advance. ONNX provides an open-source format for AI models, covering both deep learning and conventional ML. It defines an extensible computation graph model, built-in operators, and standard data types. At the moment, Intel concentrates on the capabilities required for inferencing, or scoring.
Widely supported, ONNX is found across a variety of hardware, tools, and frameworks. Enabling interoperability between disparate frameworks and streamlining the path from experimentation to production helps the AI community innovate faster. Intel invites the community to work with it to advance ONNX.
How Does Neural Compressor Work?
Neural Compressor, which inherits from Intel Neural Compressor, seeks to offer widely used model compression techniques. It is a straightforward yet intelligent tool designed to optimize neural network models described in the Open Neural Network Exchange (ONNX) standard. ONNX models, the industry-leading open standard for AI model representation, enable smooth interchange across many platforms and frameworks. Now, Intel elevates ONNX to a new level with Neural Compressor.
Neural Compressor
With a focus on ONNX model quantization, Neural Compressor offers popular model compression approaches inherited from Intel Neural Compressor, including SmoothQuant and weight-only quantization via ONNX Runtime. Specifically, the tool offers the following main functions, common examples, and open collaborations:
Support a large variety of Intel hardware, including AIPC and Intel Xeon Scalable Processors.
Validate well-known LLMs like Llama 2 and broad models like BERT-base and ResNet-50 from popular model hubs such as Hugging Face and the ONNX Model Zoo, using automatic accuracy-driven quantization strategies.
Work together with open AI ecosystems Hugging Face, ONNX, and ONNX Runtime, as well as software platforms like Microsoft Olive.
Why Is It Important?
Efficiency grows increasingly important as AI seeps into daily life. Whether you are developing computer-vision apps, natural language processing, or recommendation engines, making the most of your hardware resources is essential. How does Neural Compressor accomplish this?
Minimising Model Footprint
Smaller models translate into quicker deployment, lower memory usage, and faster inference times; these qualities are essential for maintaining performance when running your AI-powered application on an AI PC. In server and cloud environments, smaller models mean lower latency, greater throughput, and less data transfer, all of which save money.
Quicker Inference
Neural Compressor quantizes parameters, eliminates superfluous connections, and optimizes model weights. Combined with AI acceleration features such as those built into Intel Core Ultra CPUs (Intel DL Boost), GPUs (Intel XMX), and NPUs (Intel AI Boost), this leads to lightning-fast inference.
AI PC Developer Benefits
Quicker Prototyping
Model compression and quantization are challenging! Through developer-friendly APIs, Neural Compressor lets developers iterate rapidly on model architectures and effortlessly apply cutting-edge quantization approaches such as 4-bit weight-only quantization and SmoothQuant.
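To make the 4-bit weight-only idea mentioned above concrete, here is a minimal pure-Python sketch of symmetric group-wise 4-bit quantization. It illustrates the principle only (one scale per small group of weights, integer codes in a narrow range); it is not Neural Compressor's actual algorithm, and the group size and example weights are assumptions.

```python
def quantize_weights_4bit(weights, group_size=4):
    """Symmetric 4-bit weight-only quantization, one scale per group.
    Codes lie in [-7, 7]; scale = max|w| in the group / 7."""
    groups = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(w) for w in group) / 7 or 1.0  # avoid zero scale
        codes = [max(-7, min(7, round(w / scale))) for w in group]
        groups.append((scale, codes))
    return groups

def dequantize_weights_4bit(groups):
    """Expand (scale, codes) groups back to approximate float weights."""
    return [c * scale for scale, codes in groups for c in codes]

original = [0.7, -0.35, 0.14, 0.0]
groups = quantize_weights_4bit(original)
restored = dequantize_weights_4bit(groups)
```

Storing one float scale per group plus 4-bit codes cuts weight storage to roughly a quarter of FP16, at the cost of a rounding error bounded by half a scale step per weight; production schemes (like the ones Neural Compressor supports) refine this basic recipe.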
Better User Experience
Your AI-driven apps will respond quickly and delight users with smooth interactions.
Simple deployment with ONNX-compliant models, including native Windows API support for deployment on CPU, GPU, and NPU out of the box.
What Comes Next?
Intel Neural Compressor Github
Intel looks forward to working with the developer community as part of the ONNX initiative and enhancing synergies in the ONNX ecosystem.
Read more on Govindhtech.com
govindhtech · 1 year ago
Unleash the Power of AI: Introducing AMD Ryzen AI Processors
Large Language Models (LLMs) are not just for programmers and developers; they are for everyone. AMD has released simple-to-follow instructions for using LM Studio to run cutting-edge large language models on AI PCs with AMD Ryzen AI or AMD Radeon 7000 Series graphics cards, no coding knowledge required. This post compares the AMD Ryzen AI x86 platform options with a competitor's and looks at how the two perform in practical applications.
The Neural Processing Unit (NPU) in the AMD Ryzen Mobile 7040 Series and AMD Ryzen Mobile 8040 Series processors is purpose-built to handle emerging AI workloads. The NPU's 16 TOPS let users run AI tasks as power-efficiently as possible. To find out more about AMD Ryzen AI PCs, watch this video.
AMD Ryzen AI Laptop
For instance, an AMD Ryzen AI-equipped laptop costs $899, whereas a competing x86 device costs $999. The AMD AI PC is more affordable yet offers a 120Hz, 2.8K-resolution upgraded OLED display, while the rival SKU is limited to a standard 60Hz, 1.2K IPS panel. Along with twice the SSD storage, the AMD laptop has a lower TDP of 15W compared with the competitor's significantly higher 28W.
What about performance, then? In LM Studio, one of the most popular consumer apps for deploying and using large language models, testing shows the AMD AI PC performing better.
With a sample prompt, the AMD Ryzen 7 7840U 15W CPU outperforms the competition, achieving up to 17% more tokens per second on the widely used Mistral 7b model. On Llama v2 Chat 7b, the AMD Ryzen AI processor also delivers, on average, 79% faster time-to-first-token. For LLMs run in daily use, AMD suggests 4-bit K M quantization; for jobs requiring the highest precision, such as coding, 5-bit K M quantization is advised.
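The two metrics quoted above, time-to-first-token and tokens-per-second, are easy to compute yourself from the arrival times of generated tokens. This is a generic sketch with made-up timestamps, not AMD's or LM Studio's measurement code.

```python
def llm_latency_metrics(token_timestamps, prompt_submitted_at):
    """Time-to-first-token and steady-state tokens-per-second, from the
    wall-clock times (seconds) at which each generated token arrived."""
    ttft = token_timestamps[0] - prompt_submitted_at
    generation_time = token_timestamps[-1] - token_timestamps[0]
    # tokens after the first one, divided by the time they took
    tps = (len(token_timestamps) - 1) / generation_time if generation_time > 0 else float("inf")
    return ttft, tps

# Hypothetical run: prompt at t=0.0 s, then 5 tokens, 80 ms apart, after a 0.5 s wait
stamps = [0.5, 0.58, 0.66, 0.74, 0.82]
ttft, tps = llm_latency_metrics(stamps, 0.0)  # ttft = 0.5 s, tps ≈ 12.5
```

Measuring both matters because they stress different things: TTFT is dominated by prompt processing, while tokens-per-second reflects sustained generation throughput, which is why vendors report them separately.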
Also included is a discussion of how their performance (measured in tokens-per-second and time-to-first-token) stacks up against rivals at different quantization settings. Note that AMD does not advise using Q8 or Q2 quantizations: the former is extremely slow, and the latter suffers significant perplexity loss. This aligns with recommendations from others in the industry.
They also evaluated the Llama v2 Chat 7b model, and the time-to-first-token and tokens-per-second metrics showed comparable outcomes:
Large language models may greatly boost productivity, and you can now execute them entirely locally thanks to Ryzen AI.
AMD is dedicated to advancing AI and bringing its advantages to everyone. AMD's AI PCs make it possible for everyone to benefit from the expansion of AI consumer applications. Users may choose from a variety of x86 platforms, but AMD Ryzen AI laptops are not only very affordable; they also offer a leading value proposition, running consumer LLM programs like LM Studio at next-level performance while using half the TDP.
Future AI PCs Improve with AMD
Users are the owners of the key to unlocking amazing AI experiences with 2nd generation AI PCs powered by AMD Ryzen AI.
AMD Ryzen AI Transforms AI Computers for All
With an AI PC from AMD, you can experience the power of personal computing at your fingertips. This opens up new possibilities for productivity, teamwork, and creativity, enabling you to keep more in touch with the outside world.
AI Resources for All Trades
Give yourself the tools to build the future as a developer with a fresh AI application, a creator with captivating content, or a business owner with the means to streamline processes and increase productivity at work.
Daily Digital Encounters, Elevated
Enhance your everyday interactions with Windows Studio Effects' AI-powered capabilities and enjoy better visuals thanks to AI-generated graphics.
AI for Privacy of Data
Discover the fascinating ways artificial intelligence (AI) can improve your daily life while keeping your personal information under your control.
AMD Ryzen AI engine
AMD Ryzen AI supports Microsoft Windows Studio Effects. AI technology from AMD Ryzen adapts to your needs: the AMD Ryzen AI engine prepares consumers for AI workloads and applications that are evolving daily at an incredible rate.
AMD Ryzen AI Software
The AMD Ryzen AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen AI-powered PCs. Apps built with Ryzen AI software can use the neural processing unit (NPU) in the AMD XDNA architecture, the first purpose-built AI processing silicon on a Windows x86 processor. With ONNX Runtime and the Vitis AI Execution Provider (EP), developers can build and deploy models trained in PyTorch or TensorFlow and run them directly on laptops powered by Ryzen AI.
Running AI models on a CPU or GPU alone can quickly drain battery life; Ryzen AI laptops run AI models on the built-in NPU instead, freeing CPU and GPU resources for other computing tasks. As a result, developers can run concurrent apps and private, on-device LLM AI tasks efficiently and locally, while also greatly extending battery life. Additionally, thanks to Hugging Face's growing zoo of pre-trained models, which covers a broad range of architectures, and a simple one-click installation method, developers can start building their apps in minutes and take full advantage of Ryzen AI's acceleration capabilities.
Ryzen AI AMD, Flow of Development
There are three simple steps to creating AI apps using Ryzen AI:
Train
The user chooses or develops a model in PyTorch, TensorFlow, or ONNX format and trains it in the cloud.
Quantize
Ryzen AI software development supports several quantization flows. Developers can use the AMD Vitis AI quantizer to quantize the model to INT8 and save it in ONNX format, or use Microsoft Olive with the Vitis AI quantizer as a plug-in.
Implement
Using Ryzen AI hardware, the ONNX Runtime Vitis AI EP efficiently partitions, compiles, and runs the quantized ONNX models, optimizing workloads to deliver peak performance at minimal power.
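The Implement step above boils down to handing the quantized ONNX model to ONNX Runtime with the Vitis AI execution provider requested, falling back to CPU when it is unavailable. Below is a minimal sketch of that fallback logic; the provider strings follow ONNX Runtime's naming convention, `choose_providers` is a hypothetical helper, and the commented `InferenceSession` call shows where the resulting list would typically be used.

```python
# In a real deployment, the chosen list is passed to ONNX Runtime, e.g.:
#   import onnxruntime as ort
#   session = ort.InferenceSession("model_int8.onnx",
#                                  providers=choose_providers(ort.get_available_providers()))
PREFERRED = ["VitisAIExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available):
    """Keep preferred execution providers that are actually present,
    falling back to CPU so the model always runs somewhere."""
    chosen = [p for p in PREFERRED if p in available]
    return chosen or ["CPUExecutionProvider"]
```

Listing CPU last keeps the app functional on machines without the NPU stack installed, while letting ONNX Runtime route supported subgraphs to the NPU when the Vitis AI EP is present.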
Read more on govindhtech.com
govindhtech · 1 year ago
Intel Adds PCs AI Software and Hardware Developer Program
As part of the AI PC Acceleration Program, Intel Corporation today announced the launch of two new artificial intelligence (AI) initiatives: the AI PC Developer Program and the inclusion of independent hardware vendors. These are significant milestones in Intel's effort to enable the hardware and software ecosystem to optimize and maximize AI on more than 100 million Intel-based AI PCs by 2025.
The AI PC Developer Program is intended primarily to provide a seamless development experience and to facilitate large-scale adoption of new AI technologies by independent software vendors (ISVs) and software developers. It gives users access to development kits featuring the latest Intel hardware, including the Intel Core Ultra processor, along with tools, workflows, and frameworks for AI deployment.
Developers now have easy access to AI PC and client-focused toolkits, documentation, and training via the new developer resource website. The purpose of these compiled materials is to assist developers in optimizing AI and machine learning (ML) application performance and accelerating new use cases by fully using Intel Core Ultra CPU technology.
Developers who want to learn more about Intel's worldwide partner network and how it is maximizing AI performance in the PC market should sign up for Intel's AI PC Acceleration Program.
Independent hardware vendors (IHVs) now have the chance to get their hardware ready, optimized, and enabled for Intel AI PCs thanks to their inclusion in the AI PC Acceleration Program. Partners who meet the requirements may visit Intel’s Open Labs, where they can get co-engineering and technical assistance early on in the process of developing hardware solutions and platforms. Furthermore, Intel makes reference hardware available via this initiative to eligible IHV partners so they may test and enhance their technology in order to ensure optimal performance at launch.
The AI PC Acceleration Program has now onboarded 150 hardware providers worldwide, according to Matt King, senior director of Intel’s Client Hardware Ecosystem. “They can’t wait to expand their cutting-edge software and hardware solutions and share this momentum with their large, open developer community.”
The AI Acceleration Program for IHVs is open to developers and IHVs. In order to develop and elevate the AI PC experience to new heights, Intel is collaborating with its hardware partners. Come along with Intel as we accelerate innovation.
Why It Matters: AI will radically alter a wide range of facets of human existence, including creation, learning, employment, and relationships. By using Intel’s cutting-edge platform’s central processing units, neural processing units, and graphics processing units together with optimized software and hardware, anybody may take advantage of artificial intelligence with an AI PC. Intel works with a wide range of partners in an open ecosystem to provide improved performance, productivity, innovation, and creativity for end users. Intel is enabling ISVs and IHVs while spearheading innovations in the AI PC era.
Intel provides developers with extra value via various initiatives, such as:
Enhanced Compatibility: Developers can make sure their applications and software operate seamlessly on the newest Intel processors by having access to the most recent Intel Core Ultra development kits, optimization tools, and software. This improves compatibility and the overall end-user experience.
Performance Optimization: Optimizing software for specific hardware architectures early in the development cycle makes it more efficient and performant, enabling better experiences as AI PCs become broadly available.
Global Scale and Increased Market Opportunities: Working with Intel and its large, open network of AI-enabled partners offers chances to grow your business internationally, penetrate new markets, and succeed in a variety of sectors.
With Intel Core Ultra processors spanning 230 designs from 12 worldwide original equipment manufacturers, Intel is bringing over 300 AI-accelerated capabilities to market by 2024 and provides a broad range of toolkits for AI developers to use.
About the AI PC Acceleration Program: Launched in October 2023, the program’s goal is to link independent software and hardware providers with Intel resources, such as training, co-engineering, software optimization, hardware, design resources, technical know-how, co-marketing, and sales opportunities.
Through the AI PC Acceleration Program, Intel will make AI toolchains, training, co-engineering, software optimization, hardware, design resources, technical expertise, co-marketing, and sales opportunities available to independent hardware vendors (IHVs) and independent software vendors (ISVs).
Use Intel Core Ultra Processors on your PC to experience the power of AI. You might be able to increase your creativity, productivity, and security with the AI PC. We’re transferring AI apps from the cloud to PCs in response to market trends, enhancing privacy and lowering reliance on pricey data centers. Intel simplifies AI software development so you can concentrate on what really matters.
FAQs: What is the AI PC Developer Program? An Intel initiative to support the development and adoption of AI technology for personal computers, aimed at independent software vendors (ISVs), software developers, and independent hardware vendors (IHVs).
Read more on Govindhtech.com
govindhtech · 1 year ago
Laptops get LLMs: NVIDIA and HP CEOs admire AI PCs
In a fireside discussion, NVIDIA’s Jensen Huang and HP’s Enrique Lores spoke about how the newest mobile workstations can speed up and personalize generative AI. The two CEOs announced new laptops that can create, test, and run large language models, signaling that 2024 will be the year generative AI goes personal.
At HP Amplify, an event in Las Vegas attended by over 1,500 resellers and distributors, NVIDIA founder and CEO Jensen Huang said, “This is a renaissance of the personal computer.” “These new workstations are going to transform the job of artists, designers, and data scientists.”
Prior to the release of what his business advertised as “the industry’s largest portfolio of AI PCs and workstations,” HP’s Enrique Lores said that artificial intelligence (AI) was the greatest development to hit the PC in decades.
Increased Security and Speed
Lores said in a keynote address at the event that the new systems would provide enhanced speed and security while lowering expenses and energy as compared to operating their AI work on the cloud.
A variety of mobile AI workstations powered by NVIDIA RTX Ada Generation GPUs are available in the latest HP ZBooks.
When paired with an NVIDIA RTX 500 Ada Generation Laptop GPU, entry-level models let users run generative AI tools and applications on the move.
High-end models pack the RTX 5000, delivering up to 682 TOPS, so users can build and run local LLMs that connect to their own data through retrieval-augmented generation (RAG) to produce private, personalized results.
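The retrieval half of the RAG pattern described above can be sketched in a few lines. This is a hypothetical, minimal illustration using word-count vectors and cosine similarity; real local-LLM stacks use learned embedding models, and all the document strings here are invented examples.

```python
# Minimal sketch of the retrieval step in RAG: rank documents by
# similarity to the query, then prepend the best match to the LLM prompt.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "quarterly revenue grew in the enterprise segment",
    "the office coffee machine needs descaling",
]
context = retrieve("how did enterprise revenue change", docs)
prompt = f"Context: {context[0]}\nQuestion: how did enterprise revenue change?"
print(prompt)
```

Because retrieval runs against the user's own files and the generated prompt never leaves the machine, this is the step that makes local LLM output both personalized and private.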
Availability of Accelerated Software

The new workstations have access to NVIDIA’s full-stack AI platform, which includes tools for accelerating the data science required for generative AI.
The Z by HP AI Studio platform for the systems, created in association with NVIDIA, has ties to NVIDIA NGC, a library of GPU-accelerated AI and data science applications. NVIDIA NeMo, a framework for creating, modifying, and implementing generative AI models, is included with NGC.
Furthermore, HP and NVIDIA said that NVIDIA CUDA-X libraries would be included into the systems to accelerate the data processing and preparation necessary for generative artificial intelligence.
Boosting Data Scientist Efficiency

NVIDIA RAPIDS cuDF, which speeds up pandas—software used by over 10 million data scientists—is one of the libraries.
Huang said that they could now analyze data in minutes as opposed to hours or even days in the past.
He went on, “This pandas library is insanely complex,” pointing out that NVIDIA developers spent more than five years rewriting the code to make it GPU-accelerated.
Starting a New Chapter
HP also unveiled a partner training program created in collaboration with NVIDIA in addition to the new systems. It will enable computer suppliers to suggest to clients the best AI goods and services to suit their requirements.
These initiatives prepare the industry for the new age of AI, which allows software to build software.
“We’ve made a whole new computer,” Huang said. The way software is written has been reinvented, and the way software is used must be reinvented too. “The future lies in large language models integrated with other LLMs to help solve application problems.”
What advantages do these new workstations offer? Quicker handling of generative AI tools and applications; the ability to build and run LLMs locally for private, personalized results; access to NVIDIA’s full-stack AI platform, including workflow-accelerating tools for data science; and integration with NVIDIA NGC, a library of GPU-accelerated AI and data science applications.
What are the key features of the new HP ZBook workstations? NVIDIA RTX Ada Generation GPUs let generative AI tools run on the go. High-end models can handle complex workloads such as building and running LLMs with retrieval-augmented generation (RAG). The Z by HP AI Studio platform links to NVIDIA NGC for access to powerful AI applications.
What effect will this have on data scientists? The systems’ integration of NVIDIA CUDA-X libraries will speed up the processing and preparation of data. When compared to conventional approaches, libraries like NVIDIA RAPIDS cuDF will dramatically shorten data processing times.
Read more on Govindhtech.com
govindhtech · 1 year ago
Acer’s Swift AI PCs with Intel Core Ultra Processors!
Acer Launches New Swift AI PCs with Intel Core Ultra Processors
Acer has added Intel Core Ultra CPUs, featuring Intel’s first NPU and built-in AI acceleration, to its Swift range of thin and light notebooks. The new Swift laptops’ strong processors and AI-supported capabilities make them more performant, capable, and intuitive for content creation, education, productivity, and play.
“Immediately following the unveiling of our first generation of Intel Core Ultra laptops last month, we are introducing even more Swift products to help more customers enjoy premium laptop experiences and AI-supported technology for more exciting and effective AI PC use,” said Acer’s General Manager, Notebooks, IT Products Business, James Lin. “Plus, these laptops feature impressive updates that help customers do more – and do it even better.”
Through its strong technological partnership with Acer, Intel is developing beyond the CPU for power efficiency, graphics, and AI. AI PCs like the Acer Swift Go and Swift X 14 use Intel Core Ultra CPUs and an all-new NPU to enable AI on the client. Intel Senior Vice President and General Manager of the Client Business Group Jim Johnson said he was delighted for customers to experience the improved collaboration, productivity, and creativity of these AI PCs.
Latest Tech: Acer Swift Go Laptops with OLED Displays, Wi-Fi 7, and AI
Acer Swift Go 16 (SFG16-72) and Swift Go 14 (SFG14-73) are Intel Evo platform laptops with new Intel Core Ultra processors and Intel Arc built-in GPUs, providing premium performance and all-day battery life of up to 12.5 hours for the 14-inch laptop and 10.5 hours for the 16-inch laptop.
Other AI-supported features enhance the user experience and creativity, making Swift Go laptops even more appealing. Built for the next generation of computing, the laptops include a dedicated Copilot key, so Windows users can invoke Copilot with a single press to get the most out of work, creation, and play with AI.
Both AI PCs include a 1440p QHD camera with TNR and Acer PurifiedView’s AI conferencing capabilities, including Background Blur, Automatic Framing, and Eye Contact, for better video. Three microphones and Acer PurifiedVoice 2.0 technology with AI noise reduction capture crisp, clear audio while suppressing background noise and sounds from beyond the on-camera speaker. Intel Wi-Fi 7 speeds up internet connections by 2.4x over Wi-Fi 6E and keeps laptops connected when it counts.
Both laptops have the Swift line’s thin-and-light aluminum design, which opens to 180 degrees for convenient collaboration. Some Swift Go 14 versions include a multi-control lighting touchpad for immediate media instructions and an OceanGlass touchpad for smooth, productive scrolling. For mouse-free scrolling and on-the-go lifestyles, both touchpads provide 44% more scrolling area. The Swift Go versions also include Intel Unison 2.0 for quick and effortless laptop-to-Android or iOS smartphone connections.
Like their predecessors, the new Swift Go devices include beautiful, color-rich OLED screens with 500-nit peak brightness, 100% DCI-P3 color gamut, and DisplayHDR True Black 500 certification. Graphics are attractive as well as detailed on the Swift Go 16’s 16-inch 3.2K OLED display with 3200 x 2000 resolution and 120 Hz refresh rate, and on the Swift Go 14’s 14-inch 2.8K display with 2880 x 1800 resolution and 90 Hz refresh rate.
Both versions carry TÜV Rheinland Eyesafe display certification to reduce eye strain. Optional WUXGA touchscreen panels allow touch and pen input, which is useful for note-taking and drawing.
Swift Go laptops can now be configured with up to 32 GB of LPDDR5X RAM and a 2 TB PCIe Gen 4 SSD across two slots. Ports include HDMI 2.1, dual USB Type-C connectors with Thunderbolt 4 for rapid charging, a USB Type-A connector, and a microSD card reader. The laptops also support Bluetooth LE Audio for better wireless audio.
Acer Swift X 14 AI PC with Calman-Verified Display for Students and Creators
The recently launched Acer Swift X 14 (SFX14-72G) pairs the latest Intel Core Ultra CPUs with the newest NVIDIA GeForce RTX 40 Series Laptop GPUs and a Calman-verified 2.8K OLED display to empower a broad spectrum of users.
The latest Intel Core Ultra H-Series CPUs and up to an NVIDIA GeForce RTX 4070 Laptop GPU enable faster AI-enhanced workflows for better live streaming, video editing, and 3D rendering. AI ray reconstruction in NVIDIA DLSS 3.5 improves lighting effects and resolution in graphics-intensive games and apps. Additionally, the Swift X 14 is NVIDIA Studio-validated, ships with pre-installed NVIDIA Studio Drivers, and supports Copilot in Windows (with a dedicated Copilot key), which uses AI to help users navigate their devices and apps.
The Calman-verified 14.5-inch 2.8K OLED panel delivers accurate color with Delta E<2 for animation, pictures, and video, enabling designers to bring their ideas to life. The 100% DCI-P3 color gamut, high contrast ratio, and 500-nit peak brightness with VESA DisplayHDR TrueBlack 500 certification give the OLED panel vivid colors and clarity. Acer Light Sensing technology dynamically adjusts the display’s color temperature and brightness to ambient lighting conditions. Finally, the 120 Hz refresh rate delivers smooth, high-quality viewing and entertainment.
Swift X 14 is suitable for work and school productivity as well as content production. A FHD webcam with AI-enabled Temporal Noise Reduction (TNR), Acer PurifiedView and PurifiedVoice 2.0 with AI Noise Reduction technology, and three microphones for improved audio capture and clarity make conferencing seamless and effective.
Dual USB Type-C ports, an HDMI 2.1 port, a MicroSD card reader, Wi-Fi 6E, and Bluetooth LE Audio keep devices connected, while a large fan, dual D6 copper heat pipes, and a dedicated air inlet keyboard keep the laptop cool. Up to 32 GB LPDDR5X RAM and 1 TB PCIe Gen 4 SSD are also available in the Swift X 14.
Read more on Govindhtech.com