Why this GTC is different
Every March, Jensen Huang walks onto a stage in San Jose wearing his trademark leather jacket and tells the world what's coming next in AI. It has become something of a tech industry ritual — part product launch, part state-of-the-union address for the entire artificial intelligence economy. But GTC 2026 feels different. Not because the technology is incrementally better, but because the stakes are fundamentally higher.

NVIDIA is sitting at an uncomfortable crossroads. On one side, the company remains the undisputed king of AI hardware. On the other, its stock is down 11% from late-2025 peaks, and Wall Street has moved past the "AI is going to change everything" phase into the considerably less romantic "prove it" phase. [2]

Investors aren't asking whether AI matters anymore. They're asking whether the hundreds of billions being poured into data centers, GPU clusters, and training runs will generate returns that justify the price tags. And NVIDIA, as the company selling the shovels in this particular gold rush, has to answer that question convincingly. Monday's keynote is where that answer starts.
The Vera Rubin platform: Ten times cheaper AI
The biggest hardware announcement is the formal launch of the Vera Rubin platform, NVIDIA's next-generation architecture after Blackwell. Huang revealed at CES in January that Vera Rubin had already entered full-scale mass production, which means this isn't a paper launch — it's real silicon shipping to real customers. [1][3]

The numbers are significant. The flagship VR200 NVL72 delivers 3.3 times the overall inference performance of the Blackwell Ultra GB300 NVL72 and packs 336 billion transistors. Its HBM4 memory delivers bandwidth exceeding 3 terabytes per second, with per-pin data rates above 11 gigabits per second — roughly 30% higher than comparable AMD offerings. [1]
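For readers wondering how a per-pin rate in gigabits per second turns into terabytes per second of memory bandwidth, the arithmetic is straightforward. A minimal sketch, assuming the JEDEC HBM4 2048-bit-per-stack interface (the 11 Gb/s per-pin figure comes from the reporting above; the stack count is purely illustrative):

```python
def stack_bandwidth_tbps(pin_rate_gbps: float, bus_width_bits: int = 2048) -> float:
    """Aggregate bandwidth of one HBM stack, in terabytes per second.

    bandwidth = per-pin rate (Gb/s) x bus width (bits) / 8 bits-per-byte,
    then Gb -> TB by dividing by 1000.
    """
    return pin_rate_gbps * bus_width_bits / 8 / 1000


# One HBM4 stack at 11 Gb/s per pin:
per_stack = stack_bandwidth_tbps(11.0)
print(f"{per_stack:.2f} TB/s per stack")  # ~2.82 TB/s

# Total bandwidth scales linearly with the number of stacks per GPU
# (hypothetical count; NVIDIA has not confirmed the configuration here):
print(f"{8 * per_stack:.1f} TB/s across 8 stacks")
```

At 11 Gb/s per pin, a single 2048-bit stack lands around 2.8 TB/s, which is why per-pin rates in this range put per-stack bandwidth in the "exceeding 3 TB/s" neighborhood and multi-stack GPU totals an order of magnitude higher.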





