#CUDA_cores
Link
Jensen Huang reaffirms commitment to the Chinese market. Nvidia CEO Jensen Huang believes that the United States is still far from independence in semiconductor manufacturing. According to Huang, it will take another 10 to 20 years before the United States stops depending on Taiwanese and Chinese chip makers, and even that assumes the country stays on course to shed this dependence all those years. The US has been working for quite some time to open chip fabs within the country, but few of them have actually been built so far. Huang also noted that, despite US sanctions, Nvidia will continue to work with the Chinese market, as it is simply the largest one. As a reminder, it became known yesterday that instead of the banned RTX 4090, the company will create the GeForce RTX 4090 D (Dragon) especially for China.
#Artificial_Intelligence #computer_hardware #CUDA_cores #data_centers #deep_learning #Gaming #Gaming_Graphics #GPU #GPU_architecture #graphics_cards #graphics_processing_unit #graphics_technology #nvidia #nvidia_geforce #Nvidia_Quadro #Nvidia_Titan #technology_company
Photo
Nvidia Tesla V100 32GB Clocks and Power Scaling | 450W Tesla V100 offers 16% more performance for 80% higher TDP vs 250W Tesla V100, 10% more performance for 30% higher TDP vs 350W Tesla V100
Credit to /u/juangra for the info, from a DM with him. That is one hell of a V/F curve: he pegs the 450W V100 at 1421 MHz. Assuming TFLOPS scale linearly with clock, we are looking at a <10% perf increase for ~30% more TDP per card versus the 350W variant. The 250W V100 has a base clock of 1230 MHz, so against that we get a 16% perf increase for an 80% increase in TDP.
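For reference, a minimal sketch of the arithmetic behind those percentages, assuming TFLOPS scale linearly with clock; 1421 MHz is the calculated 450W clock and 1230 MHz is Nvidia's listed base clock for the 250W card:

```python
# Perf vs TDP scaling, assuming TFLOPS scale linearly with clock speed.
# 1421 MHz is the calculated clock of the 450W DGX-2H V100 (derived below);
# 1230 MHz is Nvidia's listed base clock for the 250W SXM2 V100.

def gains(clock_hi_mhz, clock_lo_mhz, tdp_hi_w, tdp_lo_w):
    perf_gain = clock_hi_mhz / clock_lo_mhz - 1
    tdp_gain = tdp_hi_w / tdp_lo_w - 1
    return perf_gain, tdp_gain

perf, tdp = gains(1421, 1230, 450, 250)
print(f"450W vs 250W: +{perf:.0%} perf for +{tdp:.0%} TDP")  # -> +16% perf, +80% TDP
```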
The way we arrive at these numbers is a bit convoluted.
https://www.anandtech.com/show/13623/nvidia-unveils-dgx2h-server
Nvidia has published TFLOPS figures for the DGX-2 vs the DGX-2H.
The clock is deduced from Nvidia's quoted TFLOPS: tensor FLOPS per clock = Tensor_cores * 64 * 2, CUDA FLOPS per clock = CUDA_cores * 2. Adding the CUDA and tensor contributions across the DGX-2H's 16 V100s gives 1,474,560 FLOPS per clock; solving 1,474,560 * F = 2,100,000 GFLOPS (the quoted 2.1 PFLOPS) gives F ≈ 1424 MHz, or 1421 MHz if one accounts for the performance of the Xeons.
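A rough sketch of that back-calculation, assuming the published V100 core counts (5120 CUDA cores, 640 tensor cores) and the 16 GPUs in a DGX-2H; the Xeons' small contribution is ignored here, which is why this gives ~1424 MHz rather than the refined 1421 MHz:

```python
# Back out the clock implied by Nvidia's quoted 2.1 PFLOPS for the DGX-2H.
N_GPUS = 16           # V100s in a DGX-2H
TENSOR_CORES = 640    # per V100
CUDA_CORES = 5120     # per V100

# FLOPS per clock cycle, per GPU
tensor_flops = TENSOR_CORES * 64 * 2   # 64 FMAs per tensor core per clock, 2 FLOPS each
cuda_flops = CUDA_CORES * 2            # 1 FMA per CUDA core per clock, 2 FLOPS each

total_flops_per_clock = N_GPUS * (tensor_flops + cuda_flops)
print(total_flops_per_clock)           # 1474560

quoted_flops = 2.1e15                  # quoted 2.1 PFLOPS
clock_mhz = quoted_flops / total_flops_per_clock / 1e6
print(f"{clock_mhz:.0f} MHz")          # ~1424 MHz
```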
The 350W vs 450W comparison is sound, but the comparison against the 250W card may be flawed because it is made in a different way: there we are comparing the calculated clocks against Nvidia's listed base clock for the 250W variant, 1230 MHz.
This was very interesting to me because it shows Nvidia pushing hard up the V/F curve: they have to squeeze out more performance now, while they wait for 7nm to mature enough to beat the V100.
Link
Nvidia is today's largest semiconductor company by revenue and profit. We already talked about this the other day, and now another chart has appeared illustrating the company's rapid rise. It shows revenue for data center chip companies, and Nvidia leads here too, by a huge margin over its competitors, with the turning point coming only at the beginning of this year. Before that, for several years (actually much longer, the earlier data simply isn't on the chart), the leader was Intel, which accounted for 45% to 60% of revenue or more. Intel's share gradually declined while the shares of Nvidia and AMD grew, but in 2023, thanks to the AI boom, Nvidia's share rose sharply and the shares of AMD and Intel fell just as sharply. This does not mean those companies began to earn less - AMD is actively increasing sales of server CPUs - it is just that Nvidia's growth is so large that AMD's and Intel's shares shrank in relative terms. As we reported earlier today, in the last quarter alone Nvidia sold about half a million H100 and A100 accelerators, and sales will only grow in the near future.
#Artificial_Intelligence #computer_hardware #CUDA_cores #data_centers #deep_learning #Gaming #Gaming_Graphics #GPU #GPU_architecture #graphics_cards #graphics_processing_unit #graphics_technology #nvidia #nvidia_geforce #Nvidia_Quadro #Nvidia_Titan #technology_company
Link
The largest customers bought 150,000 H100 accelerators each. In recent months we've seen plenty of news about Nvidia selling its AI accelerators left and right, especially the most powerful one, the H100, but only now has it become known how many of them were actually sold. According to Omdia, Nvidia managed to sell almost half a million H100 and A100 accelerators in the third quarter alone. There is no breakdown of the sales ratio between the two models, but the emphasis is probably on the H100 now. In the current quarter, analysts expect Nvidia to sell more than half a million of these accelerators. Looking at the chart for the H100 alone, Nvidia is expected to supply about 650,000 of these cards to its largest customers alone by the end of the current year. Moreover, demand is so high that delivery lead times now reach 52 weeks, that is, a whole year. The biggest buyers of the H100 are Microsoft and Meta*, each of which purchased roughly three times as many accelerators as its closest rivals such as Google, Amazon, Oracle and Tencent. * The Meta company is recognized as extremist in Russia and is banned.
#Artificial_Intelligence #computer_hardware #CUDA_cores #data_centers #deep_learning #Gaming #Gaming_Graphics #GPU #GPU_architecture #graphics_cards #graphics_processing_unit #graphics_technology #nvidia #nvidia_geforce #Nvidia_Quadro #Nvidia_Titan #technology_company