NVIDIA DGX Cloud: The online supercomputer for AI training

NVIDIA, a company long associated with graphics cards, has emerged as a key player in artificial intelligence (AI). At this year’s GPU Technology Conference (GTC), CEO Jensen Huang made the case that NVIDIA’s AI push is paying off, pointing to the surge of interest around OpenAI’s ChatGPT, Microsoft’s revamped Bing, and other generative AI products.

One of NVIDIA’s key offerings is its line of DGX supercomputers, purpose-built systems for training AI models. These machines have always been out of reach for many companies, with the DGX A100 selling for around $200,000 in 2020. To lower that barrier, NVIDIA has launched DGX Cloud, an online platform that lets users rent the power of its supercomputers starting at $36,999 a month for a single node.

Each DGX Cloud node is powered by eight of NVIDIA’s H100 or A100 GPUs with 80GB of memory apiece, for a total of 640GB of GPU memory per node. The platform also pairs high-performance storage with a low-latency fabric that ties the GPUs together. Existing DGX customers may find the cloud option appealing as a more flexible way to scale up their AI workloads without having to purchase another physical box.
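To give a sense of what using one of these eight-GPU nodes looks like from a developer’s point of view, here is a minimal sketch of a data-parallel training loop that fans out across all eight GPUs. PyTorch, torchrun, and the toy model below are illustrative assumptions; NVIDIA’s announcement does not prescribe a particular framework.

```python
# Minimal sketch: data-parallel training across the eight GPUs of a single node.
# PyTorch/torchrun and the stand-in model are assumptions for illustration only.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")        # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)   # stand-in model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(100):                           # stand-in training loop
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()                            # gradients sync over NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

# Launch with one process per GPU: torchrun --nproc_per_node=8 train.py
```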

In addition to DGX Cloud, NVIDIA has also launched AI Foundations, a set of cloud services that makes it easier for companies to build their own large language models (LLMs) and generative AI. Adobe, Getty Images, and Shutterstock are already using it to train custom generative models on their own content. The services run on DGX Cloud and include NeMo, which focuses on language, and NVIDIA Picasso, which covers image, video, and 3D generation.

NVIDIA also unveiled four new inference platforms at GTC, including NVIDIA L4, which offers “120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency,” according to the company. L4 can be used for video streaming, encoding and decoding, as well as generating AI video. There’s also NVIDIA L40, which is devoted to 2D and 3D image generation, as well as NVIDIA H100 NVL, an LLM solution with 94GB of memory and an accelerated Transformer Engine. Finally, there’s NVIDIA Grace Hopper for Recommendation Models, an inference platform built for recommendations, graph neural networks, and vector databases.
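To put the H100 NVL’s 94GB of memory in context, a rough back-of-envelope calculation shows how quickly LLM weights alone fill up an accelerator. The model sizes and precisions below are illustrative assumptions, not figures NVIDIA quoted.

```python
# Back-of-envelope: memory needed just to hold LLM weights at different precisions.
# Model sizes and the 94GB comparison are illustrative assumptions; KV cache and
# activations would add to these totals in practice.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "fp8": 1}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate GB required to store only the model weights."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for params in (7e9, 40e9, 70e9):
    for precision in ("fp16", "fp8"):
        gb = weight_memory_gb(params, precision)
        verdict = "fits" if gb <= 94 else "does not fit"
        print(f"{params / 1e9:.0f}B params @ {precision}: ~{gb:.0f} GB ({verdict} in 94 GB)")
```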

For those interested in seeing NVIDIA L4 in action, it will be available to preview on Google Cloud’s G2 virtual machines. The generative AI video tool Descript and the art app WOMBO are already using L4 through Google Cloud.

NVIDIA’s AI offerings represent a significant leap forward for the company, positioning it to take advantage of the AI wave. With the DGX Cloud, AI Foundations, and the new inference platforms, NVIDIA is making AI more accessible to a broader range of companies, including those that may not have the resources to invest in their own supercomputers.