Nvidia just beat its own record for fastest AI GPU ever, with the new A100

Nvidia currently holds the largest market share in graphics. The company, known for its powerful GPUs (Graphics Processing Units), became the biggest seller of PC GPUs, beating out AMD, its chief competitor, over the years (though that may end soon). In high-end gaming, Nvidia is indisputably the leader in raw performance. However, one overlooked aspect of the company is its huge role in science and technology.

That’s right, the company that made ray tracing the benchmark for modern AAA games also powers computational science. Entire supercomputers, running at extremely high processing speeds, owe their performance to Nvidia’s custom GPUs. The consumer GPU business is just one face of the company’s expertise. Even after setting the record for the fastest GPU with the A100 and its whopping 40GB of VRAM, Nvidia couldn’t rest. Its new release beats even that.

Nvidia just unveiled its A100 80GB version, making it the fastest GPU on the market

With 40GB of VRAM, the original A100 GPU powered the world’s fastest supercomputer. It certainly deserved recognition for a memory bandwidth of nearly 2TB of data per second. That jaw-dropping data rate was virtually unprecedented, and a big step forward for the technology. Keep in mind that we are dealing with what Nvidia advertises as a plug-in data center, rather than the smaller scale of gaming. This announcement isn’t as modest a step as the RTX 2080 Super over the RTX 2080, for example. The jump resembles the transition from the RTX 20-series to the 30-series.

The new A100 GPU comes with double the VRAM, at 80GB. What’s more, the systems built around it come in either four- or eight-GPU configurations, which means total GPU memory ranges from 320GB all the way up to 640GB. Nvidia also took the opportunity to lay out its availability plans: the four-GPU configuration ships as the DGX Station A100, while the eight-GPU configuration ships as the DGX A100.
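The memory totals above follow directly from the per-card figure; a minimal sketch checking the arithmetic (configuration names per Nvidia’s announcement, everything else purely illustrative):

```python
# Total GPU memory for the two systems built on the A100 80GB card.
A100_80GB_VRAM_GB = 80

configs = {
    "DGX Station A100": 4,  # four-GPU workstation
    "DGX A100": 8,          # eight-GPU data-center system
}

# Multiply card count by per-card VRAM to get the aggregate memory pool.
totals = {name: count * A100_80GB_VRAM_GB for name, count in configs.items()}
for name, total in totals.items():
    print(f"{name}: {total} GB of total GPU memory")
```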

Several major brands revealed their upcoming implementations of the HGX A100 platform

At the conference, Nvidia was not the only major name on stage. Companies like Lenovo, HP, Dell, Atos, Gigabyte, Fujitsu and others confirmed their partnerships. For these tech companies, that means they should soon retail their own custom supercomputer options built on these GPUs. That should translate to a more convenient purchase for buyers who simply need an ultra-powerful setup for their huge data and memory requirements.

Naturally, Nvidia chose not to make its pre-existing A100 hardware obsolete. After all, the 40GB VRAM limit may still prove adequate for less resource-hungry fields of work. In fact, the company offers a discount on upgrades from the 40GB version to the 80GB version.

The applications of this technology go beyond raw speed; extremely complex and massive workloads now have a solution

To the average person, who isn’t advancing STEM through cutting-edge lab experiments, this GPU will be unnecessary. For emergent fields like AI engineering, however, it offers endless possibilities. As the company boasts, the A100 80GB delivers up to three times the training throughput on deep learning recommendation models (DLRM). It also means that training large natural-language models such as GPT-2 has never been easier.

Furthermore, the GPU supports Multi-Instance GPU (MIG) operation for better efficiency. In essence, the A100 80GB can be partitioned into as many as seven isolated instances, each with its own 10GB slice of memory, so several less demanding tasks can run concurrently without monopolizing the full GPU. Sounds impressive, to say the least!
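As a rough sketch of how such a seven-way split is set up in practice, using Nvidia’s `nvidia-smi` MIG tooling (the GPU index and the `1g.10gb` profile name are assumptions for an A100 80GB; this is a configuration outline, not a definitive procedure):

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset on some systems).
sudo nvidia-smi -i 0 -mig 1

# Carve the GPU into seven 1g.10gb instances: one compute slice and a
# 10GB memory slice each. The -C flag also creates the compute instances.
sudo nvidia-smi mig -i 0 -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# List the GPU instances that were created.
sudo nvidia-smi mig -i 0 -lgi
```

Each instance then appears to workloads as its own smaller GPU, which is what lets seven lighter jobs share one card without contending for memory.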

For more on Nvidia, GPUs and supercomputers, stay tuned!
