componentplanet · 4 years
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
In April 2020, Intel announced new desktop processors as part of the 10th Gen Intel Core processor family, including its flagship Core i9-10900K, which the company bills as the world’s fastest gaming processor. (Credit: Intel Corporation)
Today, after no small amount of speculation as to its overall performance and power consumption, the Intel Core i9-10900K and the rest of the 10th Generation desktop family are up for review. It’s a significant moment for Intel, given the dominant position AMD has seized in the desktop market as a whole.
In the three years since AMD launched its first-generation Ryzen CPUs, AMD and Intel have established a response pattern with each other. When AMD took the lead with the Ryzen 1800X, Intel responded with the Core i7-8700K — a six-core CPU with performance strong enough to take the overall performance crown back from the eight-core Ryzen 7 1800X.
Then, in 2018, we saw the 2700X take back the overall performance crown from the 8700K. “Not a problem,” said Intel, unleashing the Core i9-9900K, an eight-core CPU at a substantially higher price, but with some significant performance chops of its own. Then, last summer, AMD launched the new Ryzen 3000 family of CPUs on 7nm… and Intel held its fire. While the two companies tangled in the HEDT segment last year, with Intel slashing prices and AMD launching new 32-core CPUs, things on the ordinary consumer desktop front have been relatively quiescent.
Well. They have until now. This is where the new CPUs come in, at least in theory.
For most of the last year, AMD has had a lead on Intel in terms of power consumption (though this varies somewhat based on chipset), total number of CPU cores, performance per dollar, and, in many workloads, absolute performance. Intel’s long pause on 14nm has made it progressively harder for the company to compete against AMD’s advancing microarchitecture and process node transitions.
Gaming is one of the last major category wins under Intel’s belt, though the company has also maintained a strong position in creative applications like Adobe Creative Cloud. AMD and Intel have been generally tied at 4K since 2017, provided that you use settings that actually tax a GPU, but at 1080p Intel has maintained a modest advantage. AMD’s 7nm Ryzen cut into Intel’s 1080p performance leadership, and the 10900K’s high clock speed (5.3GHz) is an effort to regain some of that lead.
A photo released on April 30, 2020, shows a die from a 10th Gen Intel Core processor. (Source: Intel Corporation)
The question for Intel, however, is whether the 10900K can still squeeze meaningful performance improvements out of its 14nm node. Back in 2018, Intel managed to defeat AMD’s 1800X with a CPU packing two fewer cores, but the situation has changed since then. The 3900X is going to be the major challenge for Intel’s Core i9-10900K, and while the 10900K will have the higher clock speed, it lacks the 3900X’s additional cores and threads.
Generally speaking, we’d expect the Ryzen CPUs to dominate in rendering and multi-threaded application tests, but Intel to continue to lead in terms of raw gaming performance. At the same time, it’s clear that physics will not allow Intel to continue to ramp clock speeds in this fashion. The company has taken to shaving 300 microns of material off the top of its CPUs in order to improve their thermal transfer characteristics. When Intel is lapping its own die to improve thermal transfer, the company is bumping up against the fundamental limits of its own manufacturing capabilities.
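To put the die-thinning trick in perspective, conduction resistance through silicon scales linearly with thickness (R = t / (k·A)). Here is a minimal sketch, assuming illustrative die dimensions rather than Intel’s actual, unpublished figures:

```python
# Rough sketch of how die thinning lowers conduction resistance.
# All dimensions here are illustrative assumptions, not Intel specs.

def conduction_resistance(thickness_m, k_w_per_mk, area_m2):
    """One-dimensional thermal resistance R = t / (k * A), in K/W."""
    return thickness_m / (k_w_per_mk * area_m2)

K_SILICON = 150.0   # W/(m*K), a typical value for silicon near room temp
DIE_AREA = 200e-6   # 200 mm^2 expressed in m^2 (assumed die size)

before = conduction_resistance(800e-6, K_SILICON, DIE_AREA)  # ~800 um die
after = conduction_resistance(500e-6, K_SILICON, DIE_AREA)   # 300 um removed

print(f"R through silicon: {before:.4f} -> {after:.4f} K/W")
print(f"Conduction resistance drops by {(1 - after / before):.1%}")
```

The die is only one term in the total CPU-to-cooler thermal stack, so the real-world gain is smaller than the raw percentage suggests, but the direction of the effect is the same.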
New Generation, New Platform
With the launch of 10th Gen on desktop comes the inevitable need to migrate to a different CPU socket. This time around, Intel and various OEMs are straight-up promising that Z490 boards will be upgradeable to future Intel chips with support for features like PCIe 4.0. If you believe the rumors, this is Rocket Lake — Intel’s next-generation CPU microarchitecture, with features originally intended for 10nm backported to 14nm after that process node got stuck.
Thus, you’ll see a lot of Z490 motherboards advertising features like PCIe 4.0 support this generation. That doesn’t mean that Intel is supporting PCIe 4.0 now — just that board vendors are already advertising capabilities you can’t even enable yet.
I can believe that Intel needed a new CPU socket for Comet Lake / Rocket Lake, if only because I genuinely don’t think the company ever remotely expected to pack 10 cores into its desktop socket on 14nm. At the same time, AMD has been offering the better overall upgrade path.
The majority of X370 motherboards and every X470 motherboard are capable of stepping from an eight-core Ryzen 1xxx or 2xxx CPU up to a Ryzen 3000. AMD has just announced that it will support Zen 3 on X470 and B450 motherboards, though unlocking that support will take some effort and an understanding of the process. AMD’s AM4 support has not been perfect — not every X370 or B350 motherboard got upgraded to support Ryzen 3000 — but it’s been stronger than what Intel has offered. This has been a historic strength of AMD’s platforms as a whole, though it faded during the Bulldozer era, when there wasn’t really anything to upgrade to. With Ryzen now in-market for several years, this advantage has re-emerged.
Due to Circumstances Beyond Our Control…
My own plans to present a full set of power consumption data between Ryzen and Intel have been foiled by the untimely death of a 1250W PSU I’d been using to standardize all of my power consumption tests.
Topping that off, all of our game benchmarks are unaccountably slow. Our testing consistently puts the Core i9-10900K behind the Core i9-9900KS, 9900K, and 9700K. According to Intel and other reviewers we reached out to, these results are atypical and unexpected. A brand-new UEFI from Gigabyte for our Z490 Master motherboard did not solve the problem.
As you read this, I’ll be busily engaged in one of two endeavors: retesting this motherboard with a fresh OS install, or testing a fresh OS install on an Asus motherboard. Either way, I’ll have a full, updated suite of game benchmarks available as soon as possible.
Power consumption tests… I admit, I have to figure out what I’m doing about those. I don’t have comparative data on any of my currently-alive test PSUs (I’m using my second backup PSU, a 750W Antec).
Test Setup
All of our benchmarks were run on a Gigabyte Aorus Z490 Master motherboard with 32GB of Crucial DDR4-3600 RAM. Windows 10 1909 was installed with all patches and updates, alongside Nvidia driver 445.87, on a 1TB Samsung 970 EVO.
Non-Gaming Benchmark Results
Our non-gaming benchmark results are presented below. Gaming tests TBA.
In the Blender 1.0Beta2 benchmark, the 10900K establishes what will quickly become a pattern. While it offers a solid performance improvement over the Core i9-9900K, 10 cores of Skylake-era 14nm aren’t enough to match 12 7nm Ryzen CPU cores. Officially, the 3900X is a $500 CPU, but Amazon currently has it for $409.
I’ve combined the Cinebench results because they point in more or less the same direction. The Core i9-10900K ekes out roughly 5 percent more single-thread performance and improves substantially on the Core i9-9900K’s multi-threaded scores. The gains here are coming from more aggressive clocking as well as the 1.25x improvement in core count between the two chips.
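As a back-of-the-envelope sanity check, multi-threaded throughput scales at best with cores × clock. Only the 8-to-10-core step comes from the text; the clock uplift below is an assumed placeholder:

```python
# Naive scaling estimate: multi-threaded throughput ~ cores * all-core clock.
# Only the 8 -> 10 core step (the 1.25x from the text) is from the article;
# the clock figure is an assumed placeholder.

cores_9900k, cores_10900k = 8, 10

core_ratio = cores_10900k / cores_9900k
print(f"Core-count scaling alone: {core_ratio:.2f}x")  # 1.25x

clock_ratio = 1.03  # hypothetical 3% higher sustained all-core clock
print(f"Naive combined uplift: {core_ratio * clock_ratio:.2f}x")
```

Real workloads rarely hit this ceiling, since memory bandwidth and thermal limits eat into perfect scaling, which is why the measured multi-threaded gains land below the naive estimate.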
But while the Core i9-10900K’s performance gains are solid, they don’t match the Ryzen 9 3900X’s overall performance. In both cases, AMD holds the lead.
Handbrake 1.22 is a mixed bag for the Core i9-10900K. On the plus side, its performance in H.264 when performing a 1080p30 Fast encode on the Tears of Steel 4K film is excellent, edging past the Ryzen 9 3900X. H.265 performance, however, is slower than anticipated.
This H.265 result was odd enough that I actually switched to Handbrake 1.32 and ran the same encode test again. In this case, the Core i9-10900K took 6.43 minutes to encode the H.264 sample — significantly slower than in 1.22 — but 6.3 minutes in H.265.
Overall, the H.264 improvement under 1.22 is larger than the H.265 gain under 1.32, but I’ll likely re-run this test along with gaming in the AM. It feels as though the Z490 motherboard platform could have used a little more time to bake.
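For readers who want to compare the timings directly, encode durations convert to a relative-speed figure in one line (the 6.43- and 6.3-minute values are the Handbrake 1.32 results quoted above; the helper itself is just illustrative):

```python
# Convert encode durations into a relative speed: lower time = faster.
def speedup(time_ref_min, time_new_min):
    """How many times faster the 'new' run is versus the reference."""
    return time_ref_min / time_new_min

# Handbrake 1.32 figures from the text: 6.43 min (H.264), 6.3 min (H.265)
ratio = speedup(6.43, 6.3)
print(f"The H.265 encode ran {ratio:.3f}x the speed of the H.264 encode")
```

In other words, the two encodes under 1.32 land within about 2 percent of each other, which is part of why the result looked odd enough to warrant a re-test.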
Corona Render is an Intel-friendly application, and the Core i9-10900K’s performance reflects this, with the 10-core CPU coming in just five seconds behind the 12-core 3900X. It’s one of the strongest showings for the 10900K, but it isn’t a win.
Our MSVC 2019 Qt compile test hands the Core i9-10900K our second win (if you’re feeling generous) of the day against the Ryzen 9 3900X:
While the Ryzen 9 3950X retains the overall performance lead, the Core i9-10900K’s 10 cores win the day over the Ryzen 9 3900X — by the barest whisker.
Gaming Benchmarks
To be added — but I’ve got no problem saying what I expect. I expect to see the Core i9-10900K beat its predecessors by a few frames per second at 1080p, but to match them at 4K, where game performance becomes GPU-bound. We test with an RTX 2080 instead of an RTX 2080 Ti, so our numbers are a bit more compressed than you might see with that card, but not to a degree that would make a difference (and an RTX 2080-equivalent GPU is not an unrealistic match for the Core i9-10900K).
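The GPU-bound behavior described above can be sketched with a toy model: the delivered frame rate is bounded by whichever of the CPU or GPU is slower. All of the fps values here are hypothetical placeholders, not benchmark results:

```python
# Toy model of CPU vs GPU limits: delivered fps = min(cpu_cap, gpu_cap).
# Every fps value below is a hypothetical placeholder for illustration.

def delivered_fps(cpu_cap, gpu_cap):
    """The slower of the two pipelines bounds the frame rate."""
    return min(cpu_cap, gpu_cap)

cpu_a, cpu_b = 180, 165   # two hypothetical CPUs' frame-rate ceilings

print(delivered_fps(cpu_a, gpu_cap=200))  # 1080p-ish: GPU has headroom -> 180
print(delivered_fps(cpu_b, gpu_cap=200))  # the CPU difference shows -> 165
print(delivered_fps(cpu_a, gpu_cap=60))   # 4K-ish: GPU-bound -> 60
print(delivered_fps(cpu_b, gpu_cap=60))   # also 60: the CPU gap disappears
```

This is also why testing with an RTX 2080 compresses the spread: a lower GPU ceiling masks more of the CPU-to-CPU difference than a 2080 Ti would.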
Gaming is the highest-profile consumer category where Intel continues to command a performance lead, and it’s where the company has focused its CPU efforts. At the same time, the gap between Intel and AMD, even at 1080p, is modest at best. Gamers searching for the absolute highest frame rates will likely still play slightly faster on Intel systems, but the difference between the two is unlikely to be noticeable, even in competitive play.
Preliminary Conclusion: Skylake’s Swan Song
The Core i9-10900K is a step forward for Intel. At $488, it’s a considerably better buy than the Core i9-9900K, which was itself an excellent CPU. Its single-threaded performance is excellent and it’s capable of punching above its weight class on occasion. Skylake was an excellent CPU architecture in 2015 and it remains an excellent architecture in 2020.
And yet, for all these points — and for the first time, arguably, since Ryzen launched — Intel cannot claim to have reclaimed the overall pole position the way it could with the Core i7-8700K or Core i9-9900K when those parts debuted. I expect the Core i9-10900K to retain leadership in areas where Intel has been leading and to compete more effectively with the 3900X than its predecessor, but as far as matching or leading AMD’s 12-core CPU? On the whole, it doesn’t. And while neither Intel nor AMD has made promises about future motherboard support beyond the parts they plan to launch next, if you had to bet on which company would offer support for a wider range of CPU cores over a longer period of time, you’d bet on AMD.
The bottom line is this: The Core i9-10900K is a powerful, fast CPU, and an illustration of how little gas Skylake and Intel’s 14nm collectively have left in the tank. Rocket Lake, when and if it arrives, will supposedly give us new architectural improvements that may breathe some new life into the node, but the 10900K illustrates that Skylake has taken Intel as far as it can.
Comet Lake may paint a target on AMD’s Matisse, but it doesn’t topple its rival — and while it certainly improves Intel’s overall position, it doesn’t do so to the degree the 9900K and 8700K did when they arrived.
Now Read:
AMD’s Rumored 4700G APU Packs 8 Cores, Boosted Clock Speeds
Intel May Have Reserved Its Top-End 28W Ice Lake CPUs Exclusively For Apple
Intel Unveils Comet Lake Desktop CPUs: Up to 5.3GHz, 10C/20T, and Some Unanswered Questions
from ExtremeTech https://www.extremetech.com/computing/310771-intel-core-i9-10900k-review-comet-lake-paints-a-target-on-amds-matisse from Blogger http://componentplanet.blogspot.com/2020/05/intel-core-i9-10900k-cpu-review-comet.html
MSI Launches a Water-Cooled Motherboard That Won’t Break the Bank
Back in early May, we discussed Asrock’s $1,100 motherboard and the fact that it didn’t seem to be designed for any actual group of users. Now, MSI has announced its own water-cooling product line, and it looks to be considerably more practical than what Asrock showed in May.
The MSI MPG Z490 Carbon EK X (MPG? Carbon?) features a custom water block built in partnership with the premium water block manufacturer EKWB. The block is designed to cool both the CPU and the VRMs, which means it’ll fit this motherboard specifically but won’t work with others unless MSI goes to the trouble of guaranteeing backwards-and-forwards water block compatibility.
Feature-wise, the board supports what you’d expect from a product of its caliber, with USB 3.2 Gen 2×2, Wi-Fi 6, and 2.5G Ethernet support. The motherboard includes a leak-testing kit and RGB support done in MSI’s colors with support for 16.8 million colors and 29 effects. With this kind of color and effect support, you can keep it looking like a unicorn horked a rainbow into your office all year round.
Price: Reasonable!
The biggest reason I didn’t like Asrock’s shot at the water-cooled market is that it wasn’t clear who was actually supposed to buy an $1,100 motherboard as a serious product. When fully custom kits from EKWB cost less than half that much, asking consumers to shell out that kind of cash for a motherboard takes some guts, whether the product in question is “limited edition” or not.
Thankfully, MSI doesn’t follow Asrock’s lead in the pricing department. The MSI MPG Z490 Carbon EK X is expected to retail for just $399, and while that’s still quite high, it’s much closer to something enthusiasts might consider reasonable than $1,100 was.
Will This Trend Hold?
The most interesting thing about seeing multiple companies push custom-loop solutions as a default motherboard feature is the implication that this could become more common in the years ahead. Neither AMD’s nor Intel’s power consumption is going down, and power demands are only increasing.
To put it another way: If Intel or AMD kicks a 500W CPU out the door and declares it’s for top-tier water-cooling enthusiasts, you’d see a spike in custom loop sales.
Nothing within realistic reach of a conventional PC enthusiast can trump custom loop water cooling with an internal or external reservoir. Single-stage freon cooling is much colder, of course, but not many people have the expertise to build one. EKWB, in contrast, sells prebuilt water cooling kits to take all the guesswork out of the equation.
I think the chances of this happening are still quite small; the PC industry has never collectively embraced water cooling, preferring instead to find ways to further extend air cooling performance. If absolute PC power consumption keeps increasing, though, it may not have a choice. With that said, water delivers excellent results on the whole. When I reviewed the VisionTek CryoVenom back in 2014, I was astonished to see how well the GCN GPU responded to high clocks — from 949MHz to 1225MHz, at a maximum temperature of just 46 degrees Celsius. Water definitely has cooling potential that air can’t match, but whether we’ll see it catch on in the mainstream market is a different story.
Now Read:
Asrock Announces $1,100 Water-Cooled Z490 Motherboard
Intel Doesn’t Want to Talk About Benchmarks Anymore
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
from ExtremeTech https://www.extremetech.com/computing/311333-msi-launches-a-water-cooled-motherboard-that-wont-break-the-bank from Blogger http://componentplanet.blogspot.com/2020/06/msi-launches-water-cooled-motherboard.html
Intel Doesn’t Want to Talk About Benchmarks Anymore
At Computex last week, Intel CEO Bob Swan spoke about the overall state of the computer and technology industry and the impact that the pandemic has had on computing. He also called on the industry to move away from benchmarking and towards different metrics.
We should see this moment [the COVID-19 pandemic] as an opportunity to shift our focus as an industry from benchmarks to the benefits and impacts of the technology we create. The pandemic has underscored the need for technology to be purpose-built so it can meet these evolving business and consumer needs.
And this requires a customer-obsessed mindset to stay close, anticipate those needs, and develop solutions. In this mindset, the goal is to ensure we are optimizing for a stronger impact that will support and accelerate positive business and societal benefits around the globe.
Let’s talk about that.
Evaluating Swan’s Idea on the Merits
Before we talk about why Intel is making this argument now, let’s examine the argument itself, which I’ll word as follows: Technology coverage should focus on positive (or even transformative) use cases rather than emphasizing benchmark results.
The fact is, Swan has a point. I can publish five hundred tests showing relative performance between two laptops, but I can’t benchmark how well you’ll like the keyboard response, whether the edges of the machine will cut into your wrists or arms when you type on it, or if you’ll like the trackpad. I can tell you how light a notebook is, but I can’t tell you if it’s light enough to avoid triggering your carpal tunnel if you carry it around much of the day. The best I can do is to describe my own experiences and hope that the description will help guide your own purchase.
Benchmarks have another weakness: They don’t capture the happiness of having a new system that solves previously annoying problems. I can tell you that Laptop A gets 6 hours of battery life while Laptop B gets 12 hours, but that doesn’t say anything about the surge of satisfaction you might feel if your flight was hit by a four-hour delay and you still didn’t have to go rooting around in your bag for a charger or fight for space at one of the public charging stations.
There’s also a distinct satisfaction that comes from being able to run a game or application that ran poorly before that reviews don’t always capture. Reviewers tend to compare previous top-end hardware against current top-end hardware, effectively missing the user experience of someone who leaps from, say, a GTX 680 or Radeon 7970 to an RTX 2080 or Radeon 5700 XT.
As a hardware reviewer, I agree with Swan and always have. Benchmarks don’t capture the entire experience of using a product.
Of course, that’s why reviews have text in them in the first place, instead of just benchmarks. A review comprised solely of benchmarks would, in fact, have more results in it — I’d have more time to run them. The reason we don’t do that is because text is how we convey contextual information of exactly the sort Swan wants the industry to focus on.
Evaluating Swan’s Claim Against Intel’s Current Competitive Position
As much of a point as Swan has regarding the limitations of benchmarks, there’s no way to disentangle his comments from Intel’s current competitive position vis-à-vis AMD. Currently, it’s not that great. This isn’t a repeat of 2004-2005 when Intel was saddled with Prescott and Smithfield, but AMD has taken the overall lead in the CPU market for any use-case beyond gaming — and the gap in gaming is pretty marginal.
Bob Swan knows this, of course, which is at least part of why there’s a remark about how COVID-19 is a reason to shift away from benchmarking. The idea of emphasizing the benefits and impacts of technology on a work-from-home situation makes good sense, given that WFH is very new to a large group of people — but it’s best deployed alongside test metrics, rather than in lieu of them.
I entered tech journalism just as the first websites were really getting started. At the time, there was a tremendous surge of excitement over leaving the tyranny of column inches behind. No more fighting with copy editors to get graphs made instead of charts. No more fighting for every scrap of space. You want 50 graphs in a story? Put 50 graphs in it.
And we did.
That didn’t happen by accident. There was a tremendous hunger in the market for this kind of coverage because techies and enthusiasts wanted to see more performance information about the products they were considering buying.
Benchmarking is how we catch cheaters. Benchmarking is how we find thermal problems. Benchmarking is how we discover that one laptop will burn your legs if you try to game on it, while another won’t. Benchmarking is how we discover if your laptop’s power-saving mode is working properly, or whether the battery life is actually an improvement over the system you’re using currently. Want to know if your smartphone got slower over time, or if a new Android or iOS version is slower? Benchmark it. Need to figure out which system component, peripheral, or device is causing high latency across your system during audio processing workloads? Benchmark it.
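In that spirit, a serviceable micro-benchmark takes only a few lines. A minimal Python timing harness (the workload here is an arbitrary stand-in) might look like this:

```python
import time

def benchmark(fn, repeats=5):
    """Time fn several times; report the best run in milliseconds.
    Taking the minimum is the usual choice: it filters out scheduler
    noise and cache warm-up, leaving the cleanest measurement."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times) * 1000.0

# Arbitrary stand-in workload: sum a million integers.
def workload():
    return sum(range(1_000_000))

print(f"best of 5: {benchmark(workload):.2f} ms")
```

Run the same harness before and after an OS update, or on two machines, and you have exactly the kind of comparison that marketing language can’t replace.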
Benchmarking has its limits. Within the scope of those limitations, it is nothing short of a titanic force for good.
The last thing to be aware of is that this kind of concern for performance rankings is really, really common. Every time Intel, AMD, or Nvidia drops behind in the performance rankings, we can expect to see a discussion of whether the benchmark practices and tests reviewers currently rely on actually capture what they need to capture. Sometimes these discussions lead to real and fruitful improvements to benchmarking as a whole, but they’re also a common tactic for companies that have fallen behind.
Intel undoubtedly would like to de-emphasize benchmarking at the moment. Fortunately, it’s not going anywhere.
Now Read:
Some AI Advances in the Last Decade May Have Been Illusory
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
Intel May Have Reserved Its Top-End 28W Ice Lake CPUs Exclusively For Apple
from ExtremeTech https://www.extremetech.com/computing/311275-intel-doesnt-want-to-talk-about-benchmarks-anymore from Blogger http://componentplanet.blogspot.com/2020/06/intel-doesnt-want-to-talk-about.html
How Intel Lost $10 Billion — and the Mobile Market
The Robert N. Boyce Building in Santa Clara, California, is the world headquarters for Intel Corporation. This photo is from Jan. 23, 2019. (Credit: Walden Kirsch/Intel Corporation)
Update (6/1/2020): We’ve republished the story below as a discussion of how Intel’s mobile efforts consumed so much of the company’s attention, yet ultimately came to naught. In the years since this story was published in 2016, Intel attempted to build a competitive 5G modem that would put it on the cutting edge of the new market. Unfortunately, that effort ultimately failed and Intel sold its 5G modem business to Apple. Atom has continued to develop as a low power mobile solution for entry-level Windows systems and Chromebooks.
The observations on foundry models are more salient now than they were in 2016. Back then, Intel’s 10nm was only a little late and the company was still considered to be on the cutting edge of semiconductor tech. TSMC currently occupies that position, though Intel wants to reclaim its crown by 5nm. In 2016, we occasionally heard analysts opine that Intel should sell its fabs and embrace the fabless model due to increased competition from TSMC. Those whispers have only grown louder. I’m not saying I agree with them, but it’s interesting to see how this article frames the TSMC-versus-Intel fight. Things looked very different then than they do today.
The Qualcomm antitrust lawsuit disclosures paint Intel’s failure to gain market share in a very different light, but I have more to say about that in Part 2 of this updated story.
Original Story Below:
Just over four years ago, we reviewed Intel’s first plausible smartphone, the Xolo X900. While it couldn’t match more powerful devices of the day, it was a solid initial effort. More importantly, the Medfield SoC that powered the X900 was meant to be a beachhead, a harbinger, a sign of things to come.
Four years and billions of dollars later, it’s now clear that the long-awaited ARM-versus-x86 war won’t be fought in the tablet or smartphone market. Intel has canceled all of its publicly announced 14nm smartphone SoCs and most of its 14nm tablet SoCs that would have shipped in Android devices. The company is diving into 5G research and hopes to lead the industry in developing 5G products. But its 3G and 4G modems are still built on the 28nm process node at TSMC, even as competitors like Qualcomm move to 14nm for their own products.
How did Intel manage to spend up to $10 billion and have so little to show for it? There’s no single, simple answer — but there are definitely some smoking guns. If you want to understand what went wrong for Intel, we need to talk about the semiconductor industry as a whole.
Of Fabs and Foundries
The semiconductor industry is dominated by four companies: GlobalFoundries, Intel, Samsung, and TSMC. Three of these companies — GlobalFoundries, Samsung, and TSMC — are known as pure-play foundries or merchant foundries. Their business model is based on manufacturing silicon for other companies, not designing and launching products that they sell themselves. (Samsung does build some custom silicon for itself, but the bulk of its foundry business comes from external customers).
Unlike the merchant foundries, Intel designs and builds both its fabrication plants and its microprocessors. Until the last few years, Intel never took foundry customers. Intel’s recent attempts to break into the merchant foundry business by attracting a handful of high-margin customers weren’t all that successful and have yet to generate significant revenue for the company.
The distinction between merchant foundries and IDMs (integrated device manufacturers) like Intel is a critical part of why Intel’s mobile efforts played out the way they did.
Intel’s manufacturing strategy historically relied on rapidly adopting new process technology. The bulk of the company’s revenue is derived from leading-edge nodes; older facilities were either upgraded or shut down as they became obsolete. The chart below shows Intel’s R&D expenses as a percentage of total sales as compared to Samsung and TSMC:
TSMC pursues a very different optimization strategy. While it also invests in leading-edge semiconductor technology, the bulk of TSMC’s revenue is earned on older technology nodes. The chart below is based on TSMC’s Q1 2015 results:
TSMC’s revenue by process node
As of Q1 2015, 39 percent of TSMC’s revenue was earned on technology nodes it deployed 10-20 years ago. 54 percent of its revenue came from nodes that were in volume production at least eight years ago. Intel made limited use of older facilities to build its chipsets, but its business model is fundamentally different.
For the better part of 30 years, that difference didn’t really matter. From the 1980s through 2010, Intel beat its low-volume RISC competitors and seized the data center by leveraging the economies of scale it created in the consumer PC market. It crushed AMD, its only serious competitor in the x86 market, and forced the company to sell its fabs in an effort to survive.
Intel drove volume in new markets by exploiting economies of scale it had created elsewhere
The pure-play foundries and Intel worked in parallel tracks, often contending with some of the same problems, but prioritizing and solving them in different ways. Intel built its foundries to rigorous standards using a philosophy it called “Copy Exactly.” It prioritized high yields, focused almost exclusively on microprocessors, and enforced strict design rules. Intel products were designed to be built at Intel foundries with Intel tools and Intel’s established best practices.
In contrast, TSMC and the other merchant foundries designed their process nodes to meet the needs of many different clients. Their fabs prioritize throughput and flexibility while minimizing cost. Both models worked beautifully for their respective businesses — until their businesses started to collide.
The Limits of Technology
Intel’s failure to gain traction in the mobile market highlights the flaws in treating technological progress as a roadmap for corporate success. Despite recent delays and its own decision to abandon its tick-tock model, Intel still owns the most technologically advanced foundries in the world. It’s the only company to have deployed a true 14nm die shrink and it’ll be the first company to deploy a true 10nm node as opposed to a 10/14 or 10/16 hybrid. All of this is true — as is the fact that Intel spent $10 billion (according to Re/code) in mobile with nothing to show for it.
Technology sites, including ExtremeTech, have often discussed x86 versus ARM or AMD versus Nvidia strictly in terms of process node and roadmap. This works extremely well when discussing advances in battery life or performance, but much less well when applied to corporations who suddenly find themselves in direct competition for the first time in decades.
In Part 2 of this two-part series, we discuss how Intel’s business model and development priorities slowed and sabotaged its ability to compete effectively in the mobile world. We also discuss why the myriad ARM-versus-Atom debates failed to capture the real problems plaguing Chipzilla, and why the company saw such limited adoption of its products.
Now Read:
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
Intel May Have Reserved Its Top-End 28W Ice Lake CPUs Exclusively For Apple
Intel, TSMC Reportedly in Talks to Build New US Foundries
from ExtremeTech https://www.extremetech.com/extreme/227720-how-intel-lost-10-billion-and-the-mobile-market from Blogger http://componentplanet.blogspot.com/2020/06/how-intel-lost-10-billion-and-mobile.html
AMD Unlikely to Jump to 5nm For Zen 3, Despite Rumors to the Contrary
There’s a rumor that’s popped up in the past several days concerning AMD’s long-term plans for 7nm and 5nm. According to this rumor, which began with a DigiTimes post now sealed behind a paywall, AMD is now considered a Tier 1 TSMC customer.
Supposedly, this newfound friendship between the two companies will result in AMD launching Zen 3 on 5nm to steal a march on Intel in a further extension of AMD’s overall market leadership. There are several reasons why this is unlikely.
First, there’s a significant lag time between when CPU designs are sent to the foundry for manufacturing (a process called taping out) and when they ship to customers. AMD sends the design to TSMC, then tests the hardware TSMC sends back and tweaks the design as necessary. All of this takes several months, best-case. I don’t know where AMD is in the Zen 3 design process, but 5nm is going to have entirely different design rules than 7nm. There’s no way to quickly port from one to the other. Leaping ahead in this fashion isn’t done, because the long lead times make it impossible.
Second, it’s not clear how much advantage 5nm offers to AMD in the first place. TSMC is predicting a 45 percent density advantage, which is great, but only up to 20 percent better power efficiency or 15 percent additional performance. Keep in mind, these are best-case scores, and to some extent, they are either/or.
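As a rough illustration of why those numbers are either/or, here’s a toy Python sketch. The figures are the best-case marketing claims cited above; treating them as simple multipliers on a baseline chip is an assumption made purely for illustration:

```python
# Toy comparison of the N5-vs-N7 claims cited above. All numbers are
# best-case figures, and the power and performance gains are largely
# either/or: a design spends the node gain on one or the other.

def n5_projection(mode):
    """Project a hypothetical chip's N5 traits relative to an N7 baseline of 1.0.

    mode='power' spends the node gain on efficiency (same clocks);
    mode='perf' spends it on frequency (same power).
    """
    density = 1.45                  # up to 45 percent more logic per area
    if mode == "power":
        return {"density": density, "power": 0.80, "perf": 1.0}
    elif mode == "perf":
        return {"density": density, "power": 1.0, "perf": 1.15}
    raise ValueError("mode must be 'power' or 'perf'")

print(n5_projection("power"))
print(n5_projection("perf"))
```

Note the structure of the tradeoff: in this toy model you can’t take the 20 percent power reduction and the 15 percent performance gain at the same time, which is why the node alone doesn’t guarantee a faster chip.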
I don’t want to imply in any way that AMD won’t have a 5nm chip — there’s already one on the roadmap — but the company will have to balance the design carefully to improve performance. At the Zen 2 briefing, AMD’s engineers told us candidly they were surprised they were able to offer any frequency improvements at all at 7nm. That doesn’t bode well for clock scaling at 5nm. The Zen 4 team will be working on that problem already, given AMD’s described design cadence.
Third, AMD also doesn’t typically lead the way on foundry node transitions. Apple and Qualcomm occupy that role these days, and we’d expect the next-generation iPhone and Snapdragon parts to account for much of the 5nm capacity when the node launches.
If you want another example of how hard it is to backport features to a different process node, consider Intel. Skylake launched in 2015. If you believe the rumors, Rocket Lake is a 14nm chip with backported 10nm features launching later this year. It’ll be the first new CPU architecture from Intel in five years.
It didn’t take Intel five years to backport 10nm capabilities into a 14nm core, but the company was already talking to journalists about its efforts to make that kind of flexibility possible in 2018. Even if you assume the work hadn’t started yet at that point (a poor assumption, in my opinion), it took two years to finish. Moving a CPU architecture between nodes is not a trivial undertaking.
Now Read:
AMD’s Upcoming Ryzen Refresh, Big Navi Debut Allegedly Leaked
AMD Will Support Zen 3, Ryzen 4000 CPUs on X470, B450 Motherboards
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
from ExtremeTech https://www.extremetech.com/computing/311185-amd-unlikely-to-jump-to-5nm-for-zen-3-despite-rumors-to-the-contrary from Blogger http://componentplanet.blogspot.com/2020/06/amd-unlikely-to-jump-to-5nm-for-zen-3.html
AMD’s Upcoming Ryzen Refresh, Big Navi Debut Allegedly Leaked
Thus far, AMD has been quiet about its plans for any Ryzen refresh cycle in 2020, though the ongoing coronavirus pandemic has undoubtedly scrogged up some of the company’s plans. We’ve seen a pair of leaks surface online claiming to share details on what the firm has planned through the fall, though as always, take these leaks with salt.
First, let’s talk about CPUs. The report here is that AMD will add three Ryzen refresh CPUs to its lineup: a Ryzen 9 3900XT, Ryzen 7 3700XT, and Ryzen 5 3600XT. These three chips would arrive with higher base and boost clocks and an estimated 1.05x – 1.10x performance increase over their predecessors. The branding here is divided — HotHardware reports that the chips might instead increment the model number by 50 points (3950X, 3750X, etc.). Either approach is plausible, but if I had to guess, I’d guess AMD will use either the new numbers alone or the new numbers combined with the XT suffix. Differentiating your parts based solely on one letter (X versus XT) isn’t smart if you want consumers to be able to tell them apart and not buy the wrong chip for their own hardware. AMD also hasn’t used the “XT” moniker for CPUs before, so deploying it here would be a first for the company.
Meanwhile, over in graphics, AMD is said to be planning a Big Navi with up to 5,120 stream processors, a die size of supposedly 505mm2, and 50 percent improved performance per watt. The 50 percent performance-per-watt uplift is something Lisa Su has spoken about before, so we know that part of the rumor matches AMD’s guidance. The 505mm2 figure falls under the category of “things that could be true.” The 5700 XT was 251mm2, and Big Navi looks like it’s roughly the size of two smaller Navis, so that all lines up.
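As a sanity check on how those numbers hang together, here’s a back-of-the-envelope Python estimate against the 5700 XT. Only the shader count, die sizes, and the 50 percent perf-per-watt claim come from the rumor and AMD’s guidance; the 300W board power for Big Navi is purely an assumed figure for illustration:

```python
# Back-of-the-envelope sanity check on the Big Navi rumor, with the
# 5700 XT (Navi 10) as the baseline (performance normalized to 1.0).
# The Big Navi board power is a guess, not part of the rumor.

NAVI10 = {"shaders": 2560, "die_mm2": 251, "board_w": 225, "perf": 1.0}

def big_navi_estimate(board_w=300):
    # AMD guidance: 50 percent better performance per watt than RDNA1
    perf_per_watt = (NAVI10["perf"] / NAVI10["board_w"]) * 1.5
    return {
        "shaders": 5120,    # 2x Navi 10, per the rumor
        "die_mm2": 505,     # ~2x Navi 10's 251mm^2, per the rumor
        "est_perf": perf_per_watt * board_w,  # relative to 5700 XT = 1.0
    }

est = big_navi_estimate()
print(f"~{est['est_perf']:.1f}x the 5700 XT at 300W")
```

Under those assumptions, a doubled die at a plausible high-end board power lands at roughly twice the 5700 XT’s performance, which is at least internally consistent with the “two smaller Navis” framing.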
The specifics of the rumor, however, don’t make a ton of sense unless we assume a few things about AMD’s future product mixture. Supposedly there are three Navi chips coming — Navi 21, Navi 23, and a Navi 10 Refresh. Navi 21 is Big Navi, with up to 5,120 cores.
Supposedly we’ll get Navi 21 in four flavors:
Data and image by Hardware Leaks
The descriptions for the specific GPUs make no sense unless “similar to” means “in the same relative position, with vastly higher performance at more expensive price points.” AMD’s 80-CU RDNA2 part isn’t going to be similar to the 5700 XT in price or performance unless something goes catastrophically wrong. We don’t know anything about Navi 23, except that the die is supposedly on par with the original Navi 10. This would imply that Navi 23 is either denser than Navi 10 or offers significantly higher performance per square millimeter.
Squeezing four SKUs out of Navi 21 would be unusual, so I’m not quite sure what to make of that. Typically, Nvidia and AMD use their highest-end GPU dies to power 1-2 cards, not four of them. Either way, Navi 21 has to be intended for battle against Nvidia’s uppermost echelons, with the smaller Navi 23 taking over in the spots where the 5700 XT and 5700 sit now. This would clear the way for refreshed Navi 10 cards to take price cuts, likely pushing Polaris down to the lowest market tiers or out of the space altogether.
What doesn’t quite make sense about all of this is that it leaves AMD with a rather large number of SKUs. Nvidia’s current leading-edge lineup is the RTX 2080 Ti, followed by the 2080 Super, 2070 Super, 2060 Super, 1660 Super, and 1650 Super. This leak contemplates four high-end GPUs, three Navi 23 GPUs, and three Navi 10 cards. That’s considerably more SKUs than AMD has previously fielded.
As far as the CPU rumors go, I find them entirely believable. A 5-10 percent uplift for a Ryzen refresh cycle isn’t overwhelming, but it moves the ball forward a bit on the way to Zen 3, and it’s easy to believe that there was some headroom to be found in TSMC’s 7nm process after further refinement. I don’t expect any core count increases this year or in the near-term future — having just pushed the boundary above the point where Windows can easily take advantage of its thread counts, AMD is under no particular pressure to boost core counts again.
The GPU rumors really only cover code names, but it makes sense that AMD would hit Nvidia from top to bottom. The big unknown here is Ampere, and how much performance it will offer out of the gate. AMD could find itself sitting comfortably or see the rug yanked out from under its new intended competitor, and we really don’t know which to expect. Between the two families, CPUs are expected in-market first, with GPUs not launching until September, but both of those statements are themselves rumors and should be treated accordingly.
Now Read:
AMD Will Support Zen 3, Ryzen 4000 CPUs on X470, B450 Motherboards
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
AMD’s Rumored 4700G APU Packs 8 Cores, Boosted Clock Speeds
from ExtremeTech https://www.extremetech.com/computing/310908-amds-upcoming-ryzen-refresh-big-navi-debut-allegedly-leaked from Blogger http://componentplanet.blogspot.com/2020/05/amds-upcoming-ryzen-refresh-big-navi.html
Why Modern Computers Struggle to Match the Input Latency of an Apple IIe
Update (5/25/2020): This article is several years old, but it’s one of my favorites and one of the most interesting topics we’ve talked about. There’s an old saying: “What hardware engineers create, software engineers take away.” That’s not the fairest way to look at the situation — modern computers can do far more than old ones — but the struggle to keep systems responding quickly while ramping up their complexity is not a series of unbroken triumphs. Even top-end PCs struggle to offer the latency of machines that offered a fraction of their performance.
Original story continues below:
Comparing the input latency of a modern PC with a system that’s 30-40 years old seems ridiculous on the face of it. Even if the computer on your desk or lap isn’t particularly new or very fast, it’s still clocked a thousand or more times faster than the cutting-edge technology of the 1980s, with multiple CPU cores, specialized decoder blocks, and support for video resolutions and detail levels on par with what science fiction of the era had dreamed up. In short, you’d think the comparison would be a one-sided blowout. In many cases, it is, but not with the winners you’d expect.
Engineer Dan Luu recently got curious about how various devices compare in terms of input latency. He carried a high-speed camera around to measure input lag on some of them because this is the sort of awesome thing engineers sometimes do. What he found is rather striking, as shown by the table below:
Data and chart by Dan Luu
The system with the lowest input latency — the amount of time between when you hit a key and when that keystroke appears on the screen — is the Apple IIe, at 30ms. A respectable third place goes to a Haswell-E system with a 165Hz monitor. #T refers to the number of transistors in each chip; the color-coding shows that chips with higher transistor counts tend to be in systems with more latency, and the lower-latency systems tend to be older than the higher-latency ones.
Improving monitor refresh rate clearly helps; the same Haswell-E rig has 90ms less input latency on a 165Hz display compared to a 24Hz display. If you’ve ever used a display with a 30Hz refresh rate, you’ve likely seen this; the difference between 30Hz and 60Hz is easily visible to the naked eye. But it clearly doesn’t make the entire difference in and of itself.
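A quick way to estimate how much of that 90ms gap refresh rate alone can explain: on average, a finished frame waits about half a refresh interval before the display begins showing it. A minimal Python sketch of that arithmetic (a simplification that ignores scan-out time and pipelining):

```python
# Average time a finished frame waits for the next refresh, by display Hz.
# This is a simplified model: real displays also spend time scanning the
# frame out, and the rest of the pipeline adds its own delays.

def avg_refresh_wait_ms(hz):
    """Half the refresh interval, in milliseconds."""
    return (1000.0 / hz) / 2

slow = avg_refresh_wait_ms(24)    # ~20.8 ms
fast = avg_refresh_wait_ms(165)   # ~3.0 ms
print(f"24Hz adds ~{slow:.1f} ms on average, 165Hz ~{fast:.1f} ms "
      f"(difference ~{slow - fast:.1f} ms)")
```

That accounts for roughly 18ms of the 90ms gap, a meaningful chunk but nowhere near all of it, which is exactly the point: the rest comes from how the software pipeline interacts with the display.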
Luu has been doing an in-depth discussion of latency from several angles, and we’d recommend his articles on keyboard and mouse latency if you want to follow up. In some cases, it’s literally impossible for a system to offer lower latency than an Apple IIe, because the keyboard’s latency alone may be higher than the Apple system’s entire input pipeline. Also, gaming keyboards aren’t necessarily faster than normal keyboards, and even when they are, median keyboard latency is high enough that 3.5ms doesn’t improve the total input latency very much.
Why Modern Systems Struggle to Match Old Ones
This boils down to a single word: complexity. For the purposes of this comparison, it doesn’t matter if you use macOS, Linux, or Windows. An Apple IIe with an open terminal window and nothing else is simply sitting there, waiting for input. Its keyboard is wired for an effective polling rate of 556Hz, with a custom chip handling keyboard input rather than a general-purpose microcontroller polling the keys. This video, from Microsoft’s Applied Sciences Group, discusses why low-latency input is important.
youtube
An Apple IIe isn’t handling sophisticated multi-tasking commands. It isn’t juggling background threads, or dealing with multiple applications that aren’t designed to be aware (or careful) of one another. It isn’t polling a huge array of devices that range from audio and network controllers to discrete GPUs and storage. The Apple IIe OS doesn’t use a compositing window manager, which adds latency. This article, by Pavel Fatin, is an in-depth breakdown of latency processing and discusses how much delay each step in a modern system adds, from keyboard scan to final output.
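To make the pipeline idea concrete, here’s a toy end-to-end latency budget in the spirit of Fatin’s breakdown. Every stage value below is an invented placeholder for illustration, not a measurement from his article:

```python
# A toy end-to-end input latency budget. Stage values are illustrative
# placeholders chosen to show how small delays accumulate, not real data.

PIPELINE_MS = {
    "keyboard scan + debounce": 8,
    "USB polling (125Hz default)": 4,   # avg wait: half the 8ms interval
    "OS input processing": 1,
    "application / editor": 5,
    "compositor": 8,                    # compositing adds its own frame delay
    "render + scan-out (60Hz)": 12,
}

total = sum(PIPELINE_MS.values())
print(f"illustrative total: {total} ms")
for stage, ms in PIPELINE_MS.items():
    print(f"  {stage:32s} {ms:3d} ms")
```

The takeaway is that no single fix (a faster keyboard, a higher refresh rate) collapses the total on its own; an Apple IIe simply has almost none of these stages in the first place.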
I ran this test in SublimeText 2, not PowerShell, so don’t compare it against the results above. One thing it illustrates? Refresh rates really matter. The first two results are at 60Hz; the third is at 24Hz.
It should also be noted that the speed of text input can vary from terminal to terminal. PowerShell is now the default shell of Windows 10, and text input speed in PowerShell is… bad. I write my stories by default in SublimeText, which has little-to-no observable lag. PowerShell, in contrast, is laggy enough that you can perceive a gap between when you type and when text appears (although not a particularly large one).
Either way, this article is an interesting example of how, despite myriad advances, low-latency input remains challenging. Complexity is often a very good thing, but we pay a performance penalty for it.
Now Read:
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
From 4.3GHz All-Core Overclocking to SMT Scaling: A Comprehensive Review of the AMD Threadripper 3990X
How Does Windows Use Multiple CPU Cores?
from ExtremeTech https://www.extremetech.com/computing/261148-modern-computers-struggle-match-input-latency-apple-iie from Blogger http://componentplanet.blogspot.com/2020/05/why-modern-computers-struggle-to-match.html
Killer Move: Intel Acquires Rivet Networks
Rivet Networks, the company behind Killer Networking products, has been acquired by Intel for an undisclosed sum. Rivet Networks began life as Bigfoot Networks, maker of the dramatic “Killer NIC” card that sold for $250, before pivoting to building software solutions that prioritize and classify traffic. The company was acquired by (and later spun back out of) Qualcomm, re-emerged as Rivet Networks, and as of yesterday is an Intel property.
Since it re-emerged from Qualcomm, Rivet has focused on building relationships with both motherboard and laptop OEMs. The company has shipped its own custom-branded solutions with underlying hardware built by Qualcomm, Realtek, and Intel at various points in time. It’s also offered features you don’t generally find elsewhere, like the option to use wired Ethernet and Wi-Fi simultaneously, or to route traffic through specific network interfaces.
Over time, Rivet has been picking up more network partners and shipping hardware on a wider range of motherboards, including a partnership with Dell on the XPS product family. Overall, the company’s profile has been rising since the spinoff, and the acquisition today is the logical outgrowth of that trend.
Intel and Rivet Networks partnered to build the Killer 1650X.
So what does Intel plan to do with this acquisition? That’s less clear. The blog post announcing the deal refers to the broad surge in networking traffic that’s happened over the past few months — a subtle nod to the ongoing impact of COVID-19, without actually naming the pandemic. There are no specific references to any projects between the two companies, however, beyond a statement that Intel will continue to license Rivet Networks software to customers. Rivet worked closely with Intel to develop its solutions around the AX201 and Killer AC-1535, so we should likely expect further developments around these products and, presumably, some additional goodies in the future.
As for what this means for the future of PC networking? That really depends on which aspects of the business Intel chooses to emphasize. The recent pandemic has at least temporarily turbocharged the work-from-home community, driving new hardware purchases and efforts to outfit home offices for long-term use. As such, Intel might want Killer for its traffic shaping and prioritizing tech in a business context. Alternately, it could plan to continue to develop the software across the entire spectrum of business and consumer uses.
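For readers unfamiliar with what traffic shaping and prioritizing means in practice, here’s a deliberately simplified Python sketch of the general idea. The tiers and port mappings are invented for illustration and don’t reflect Rivet’s actual classification engine:

```python
# A toy model of application-aware traffic prioritization: classify each
# flow into a tier, then transmit lower-numbered tiers first. The tiers
# and port-to-tier mapping below are invented for illustration only.

PRIORITY_TIERS = {
    1: "real-time (games, VoIP)",
    2: "interactive (browsing, video calls)",
    3: "streaming media",
    4: "bulk (downloads, updates)",
}

# Hypothetical destination-port-to-tier mapping.
PORT_TIERS = {3074: 1, 5060: 1, 443: 2, 80: 2, 1935: 3}

def classify(dst_port):
    """Return the priority tier for a flow, defaulting to bulk traffic."""
    return PORT_TIERS.get(dst_port, 4)

def schedule(flows):
    """Order queued flows so lower tier numbers transmit first."""
    return sorted(flows, key=lambda f: classify(f["dst_port"]))

queue = [{"name": "download", "dst_port": 8080},
         {"name": "game", "dst_port": 3074},
         {"name": "web", "dst_port": 443}]
print([f["name"] for f in schedule(queue)])  # -> ['game', 'web', 'download']
```

In a real product this happens per-application and per-flow down in the driver stack, with the OS and NIC doing the actual queue scheduling; the sorted list above is only a conceptual stand-in for that machinery.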
Ultimately, we read this as a move to boost Intel’s networking credentials at a time when home Wi-Fi performance is likely to be more top-of-mind for the average consumer than it might be otherwise. Intel has often marketed its own Wi-Fi solutions as a specific reason to buy an Intel laptop, going back at least to the Centrino platform. From that perspective, the company’s decision to buy an enthusiast-oriented network developer makes perfect sense.
Now Read:
Intel Core i9-10900K CPU Review: Comet Lake Paints a Target on AMD’s Matisse
Intel May Have Reserved Its Top-End 28W Ice Lake CPUs Exclusively For Apple
Intel, TSMC Reportedly in Talks to Build New US Foundries
from ExtremeTech https://www.extremetech.com/computing/310843-killer-move-intel-acquires-rivet-networks from Blogger http://componentplanet.blogspot.com/2020/05/killer-move-intel-acquires-rivet.html