Monday, March 16, 11:00 a.m. PT

Live Updates From the GTC Keynote 

Welcome to GTC 2026

A capacity crowd at the SAP Center, all waiting for the same thing.

The keynote opened with a video framing the token as the basic unit of modern AI — the building block behind systems used for scientific discovery, virtual worlds and machines operating in the physical world.

NVIDIA founder and CEO Jensen Huang then took the stage to raucous applause from the crowd.

He opened by thanking the pregame show hosts and highlighting the partners participating in the show, along with the 450+ sponsors, 1,000 sessions and 2,000 speakers.

“This conference is going to cover every single layer of the five-layer cake of artificial intelligence,” Huang said.

He marked the 20th anniversary of CUDA — describing it as the “flywheel” driving accelerated computing and the platform that supports “every single phase of the AI lifecycle.”

Huang turned to GeForce, describing NVIDIA as “the house that GeForce made,” the platform that brought CUDA to the world. He walked through GeForce’s history, tying it all back to AI, and introduced DLSS 5, launching a video showing how 3D-guided neural rendering enables real-time, photoreal 4K performance on local hardware. Learn more in the press release.

Next, Huang walked through the field of data processing and how it's being accelerated for the era of AI. He detailed work with IBM, Dell, Google Cloud, AWS, Microsoft Azure, Oracle and CoreWeave to serve their customers.

Huang then surveyed the accelerated computing ecosystem — automotive, financial services, healthcare, industrial, media, quantum, retail, robotics and telecom.

“All of these different vectors of AI have platforms that NVIDIA provides,” Huang said, highlighting NVIDIA’s broad range of CUDA-X libraries, which he described as the “crown jewels” of the company. 

Huang highlighted the rise of “AI natives” — brand-new companies, some well-known, such as OpenAI and Anthropic, and some still emerging. “This last year, it just skyrocketed,” Huang said, citing $150 billion of venture investment in startups and walking through the history of the technologies that sparked the latest technology boom.

As a result of this boom, the computing demand for NVIDIA GPUs is “off the charts,” he said. “I believe computing demand has increased by 1 million times over the last few years.”

Given that boom, Huang said he now sees at least $1 trillion in revenue from 2025 through 2027.

Vera Rubin and Beyond — A Generational Leap in Computing 

Huang noted that NVIDIA delivers the lowest cost per token in the world, thanks to extreme codesign, reveling in one analyst’s description of NVIDIA as “the inference king.” “This is the incredible power of extreme codesign,” Huang said, referencing a process where software and silicon are designed in tandem.

The next step: NVIDIA Vera Rubin, a new full-stack computing platform comprising seven chips, five rack-scale systems and one supercomputer for agentic AI. The platform includes the new NVIDIA Vera CPU and BlueField-4 STX storage architecture.

“When we think Vera Rubin, we think the entire system, vertically integrated, complete with software, extended end to end, optimized as one giant system,” Huang said, walking the audience through the insides of new systems built on these technologies.

Looking beyond Vera Rubin, NVIDIA’s next major architecture is Feynman. 

It will include a new CPU, NVIDIA Rosa, named for Rosalind Franklin, whose X‑ray crystallography helped reveal the structure of DNA and reshaped modern biology, Huang said. As Franklin exposed the hidden architecture of life, Rosa is built to move data, tools and tokens efficiently across the full stack of agentic AI infrastructure.

Rosa anchors a new platform that pairs LP40, NVIDIA’s next‑generation LPU, with NVIDIA BlueField‑5 and CX10, connected through NVIDIA Kyber for both copper and co‑packaged optics scale‑up, and NVIDIA Spectrum‑class optical scale‑out, Huang said. Together, the Feynman generation advances every pillar of the AI factory: compute, memory, storage, networking and security.

And to help accelerate the scale-out of new AI capacity, Huang announced the NVIDIA Vera Rubin DSX AI Factory reference design and the NVIDIA Omniverse DSX Blueprint. DSX Air, part of the broader DSX platform, lets companies simulate AI factories in software before building them in the physical world.

Finally, Huang announced NVIDIA is going to space. Its new Vera Rubin architecture honors the astronomer whose work revealed dark matter, and future systems like NVIDIA Space-1 Vera Rubin are being designed to bring AI data centers into orbit, extending accelerated computing from Earth to space.

NVIDIA NemoClaw for OpenClaw, Nemotron Coalition

Huang spotlighted OpenClaw, an open source project from developer Peter Steinberger that he called “the most popular open source project in the history of humanity.” 

“OpenClaw has open sourced the operating system of agentic computers … Now, OpenClaw has made it possible for us to create personal agents,” Huang said. 

With a single command, developers can pull down OpenClaw, stand up an AI agent and begin extending it with tools and context. NVIDIA is announcing support for OpenClaw across its platform, making it easier for developers to safely build, deploy and accelerate AI agents on NVIDIA‑powered infrastructure. 

Every single company in the world today has to have an OpenClaw strategy, Huang said. 

To ensure this technology can be deployed securely inside enterprises, Huang introduced the NVIDIA OpenShell runtime and the NVIDIA NemoClaw stack — combining policy enforcement, network guardrails and privacy routing. These technologies can serve as “the policy engine of all the SaaS companies in the world,” Huang said.

In addition, NVIDIA is expanding its open model ecosystem with a new Nemotron Coalition, rallying partners around six frontier model families: NVIDIA Nemotron (language and reasoning), NVIDIA Cosmos (world and vision), NVIDIA Isaac GR00T (general‑purpose robotics), NVIDIA Alpamayo (autonomous driving), NVIDIA BioNeMo (biology and chemistry) and NVIDIA Earth‑2 (weather and climate).

Physical AI 

NVIDIA is extending AI from digital agents into physical AI that can navigate the real world. 

Huang said NVIDIA’s robotaxi‑ready platform is drawing new automaker partners, including BYD, Hyundai, Nissan and Geely.

He also highlighted a partnership with Uber to deploy these vehicles into its ride‑hailing network. 

Beyond automakers, NVIDIA is working with industrial software giants and robotics leaders such as ABB, Universal Robots and KUKA to integrate its physical AI models and simulation tools, bringing smarter robots to manufacturing lines. It’s also working with telecom providers like T‑Mobile as base stations evolve into edge AI platforms.

That’s a Wrap

Huang capped off his keynote with a surprise visit from Olaf, the snowman from Disney’s Frozen, who appeared to walk straight off a digital screen and onto the stage. 

“Ladies and gentlemen, Olaf,” Huang said, as the character waddled out, driven by NVIDIA’s physical AI stack, the Newton physics engine and NVIDIA Omniverse-powered simulation. 

“Olaf, how are you? I know because I gave you your computer — Jetson,” Huang joked. 

When Olaf asked what that was, Huang replied, “Well, it’s in your tummy … and you learned how to walk inside Omniverse.” 

The demo underscored Huang’s broader point: everything on display — from humanoid robots to animated characters — had been simulated, not pre-rendered. 

Huang closed by recapping the themes — inference, the AI factory, OpenClaw, physical AI and robotics — then handed the stage to a musical ensemble: singing robots, a digital Jensen avatar and an animated lobster, performing a campfire song.

“All right, have a great GTC,” Huang said, exiting stage left as Olaf lingered, hamming it up for the crowd as he sank back beneath the stage through a trap door. 

Read all NVIDIA news from GTC in the online press kit.

Build an agent of your own at NVIDIA’s build-a-claw event in the GTC Park, March 16-19 — 1-5 p.m. on Monday, and 8 a.m.-5 p.m. on Tuesday through Thursday — to customize and deploy a proactive, always-on AI assistant.
