AI gives every company an opportunity to turn its processes into a data flywheel, NVIDIA founder and CEO Jensen Huang told thousands of attendees Monday at the Snowflake Data Cloud Summit.
Companies need to “take all the most important processes they do, capture them in a data flywheel and turn that into the company’s AI to drive that flywheel even further,” said Huang, joining virtually from Taipei for a fireside chat with Snowflake CEO Sridhar Ramaswamy in San Francisco.
The two executives described how the combination of the Snowflake AI Data Cloud and NVIDIA AI will simplify and accelerate enterprise AI.
“You want to jump on this train as fast as you can; don’t let it fly by, because you can use it to transform your business or go into new businesses,” said Huang, speaking a day after delivering the keynote that opened COMPUTEX in Taiwan.
Snowflake Users Can Tap Into NVIDIA AI Enterprise
For example, businesses will be able to deploy Snowflake Arctic, an enterprise-focused large language model (LLM), in seconds using NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform.
Arctic was trained on NVIDIA H100 Tensor Core GPUs and is available on the NVIDIA API catalog, fully supported by NVIDIA TensorRT-LLM, software that accelerates generative AI inference.
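As an illustration of what that could look like in practice, the sketch below queries a hosted Arctic model through an OpenAI-compatible endpoint of the kind NIM microservices expose. The base URL, the "snowflake/arctic" model ID and the NVIDIA_API_KEY environment variable are assumptions for the example, not details confirmed in the announcement.

```python
# Minimal sketch: querying a NIM-hosted Snowflake Arctic endpoint through an
# OpenAI-compatible API. The base URL, model ID and environment variable are
# assumptions, not confirmed details from the announcement.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NVIDIA API catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # hypothetical key variable
)

response = client.chat.completions.create(
    model="snowflake/arctic",                        # assumed model ID on the catalog
    messages=[{"role": "user", "content": "Summarize our Q1 sales pipeline."}],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)
```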
The two companies also will integrate Snowflake Cortex AI and NVIDIA NeMo Retriever, so businesses can link their AI-powered applications to information sources, ensuring highly accurate results with retrieval-augmented generation (RAG).
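The sketch below illustrates the general RAG pattern such an integration targets: embed a question, retrieve matching enterprise documents, then ground the model’s answer in them. The embedding model ID and the vector_search helper are hypothetical placeholders for illustration, not the actual Cortex AI or NeMo Retriever APIs.

```python
# Minimal RAG sketch: embed the question, retrieve matching documents, then
# ground the LLM's answer in them. Model IDs and the vector_search helper are
# hypothetical placeholders, not the actual Cortex AI or NeMo Retriever APIs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",   # assumed API catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],              # hypothetical key variable
)

def vector_search(query_embedding, top_k=3):
    """Hypothetical stand-in for a search over an enterprise vector index."""
    return ["(retrieved enterprise document text would appear here)"]

question = "What is our refund policy for enterprise contracts?"

# 1. Embed the question with a retrieval embedding model (ID is an assumption).
emb = client.embeddings.create(model="nvidia/nv-embedqa-e5-v5", input=[question])
docs = vector_search(emb.data[0].embedding)

# 2. Ask the LLM, grounding it in the retrieved passages.
context = "\n\n".join(docs)
answer = client.chat.completions.create(
    model="snowflake/arctic",                           # assumed model ID
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```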
Ramaswamy gave examples of generative AI applications, developed with the NVIDIA NeMo framework and Snowpark Container Services, that will be available on Snowflake Marketplace to thousands of Snowflake customers.
“NVIDIA’s industry-leading accelerated computing is game-changing for our customers and for our own research team, which used it to create the state-of-the-art Arctic model for our customers,” said Ramaswamy.
To learn more, watch NVIDIA GTC on-demand sessions presented by Snowflake on how to build chatbots with a RAG architecture and how to leverage LLMs for life sciences.