NVIDIA Just Spent $4 Billion on Light. Here's Why That's the Most Important AI Investment of 2026.
NVIDIA invested $4 billion in Coherent Corp. and Lumentum Holdings to secure the optical interconnect supply chain — because the next AI bottleneck isn't chips, it's the light that connects them.
Abstract light and technology visualization representing optical interconnects and photonics
Key Points
•NVIDIA announced parallel $2 billion investments in Coherent Corp. and Lumentum Holdings on March 2, roughly $4 billion in equity in total, with multibillion-dollar purchase commitments layered on top, all aimed at scaling silicon photonics and optical interconnect manufacturing for AI data centers. Jensen Huang called it enabling "gigawatt-scale AI factories." This isn't a chip deal — it's an infrastructure bottleneck play. [1]
•The core problem is physical: copper interconnects max out at around 800 Gbps, and modern AI clusters need to move data faster than copper can carry it. Every server-to-switch link in a large AI training cluster now requires an optical connection. The transition from copper to photonics isn't a nice-to-have — it's mandatory for the next generation of AI infrastructure. [1][2]
•The real bottleneck is indium phosphide (InP), a compound semiconductor needed to fabricate the high-speed lasers inside optical transceivers. Unlike silicon, InP production is limited to a handful of specialty manufacturers with low throughput. Current demand already exceeds supply by roughly 2x. NVIDIA is applying the same playbook it used to break CoWoS packaging bottlenecks in 2023–2024: invest directly in suppliers, guarantee purchase volume, and pull capacity forward. [1]
The AI industry's next crisis isn't about chips. It's about wires.
On March 2, NVIDIA announced something that got roughly one-tenth the attention of its earnings reports but may matter ten times more: parallel $2 billion investments in Coherent Corp. and Lumentum Holdings, two optical component companies most people outside the data center industry have never heard of. [1]
The deals come with multibillion-dollar purchase commitments on top of the equity stakes. Jensen Huang framed the partnerships in characteristically grand terms — "pioneering next-generation silicon photonics" and building "gigawatt-scale AI factories" — but the underlying message was more practical and more urgent. [1]
NVIDIA has identified the next bottleneck in AI scaling, and it isn't GPUs. It isn't memory. It isn't even power. It's the physical connections between all that hardware — the links that move data from chip to chip, server to server, rack to rack. Those connections are hitting a wall, and NVIDIA just spent $4 billion to break through it.
•NVIDIA chose two partners deliberately. Coherent specializes in co-packaged optics (CPO) integration and fiber-to-chip connectors; Lumentum focuses on high-power laser chips used in external laser sources. Together they cover distinct layers of the optical stack. The dual-source strategy mirrors NVIDIA's approach to HBM memory — cultivating SK Hynix, Samsung, and Micron simultaneously to avoid single-supplier risk. [1][3]
Here's the basic physics. Data inside an AI training cluster moves through interconnects — the wiring that connects GPUs to each other, to memory, and to the network. For decades, those interconnects have been copper. Copper is cheap, well-understood, and it works fine at moderate speeds.
But "moderate" stopped being relevant about two years ago.
Fiber optic cables in a data center — the backbone of modern AI interconnects.
Modern AI training clusters — the kind running GPT-scale models and their successors — require interconnect speeds of 800 Gbps and above. At those data rates, copper hits fundamental physical limits: signal degradation, heat generation, and power consumption all spike, and usable reach shrinks to a meter or two. The practical result: you can't build the next generation of AI clusters with copper interconnects. Not can't-afford-to. Physically cannot. [1]
This isn't a gradual transition. NVIDIA's own networking hardware — Quantum-X and Spectrum-X switches — already requires optical connections for nearly every server-to-switch link in large configurations. As NVIDIA's NVLink-connected GPU domain expands from 72 GPUs in the current Blackwell generation to larger configurations in the upcoming Rubin architecture, the demand for optical bandwidth grows with every additional GPU in the domain, on top of per-link speeds that rise with each generation. [1]
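To make that scaling pressure concrete, here is a rough back-of-envelope sketch in Python. The per-GPU NVLink bandwidth figure and the larger domain sizes are illustrative assumptions rather than numbers from the announcement; the point is simply that the bandwidth the fabric has to carry grows in lockstep with the number of GPUs in the domain, and more of that traffic lands on optics as the domain spills beyond a single rack.

```python
# Back-of-envelope: aggregate bandwidth inside an NVLink domain.
# Assumptions (illustrative, not from NVIDIA's announcement):
#   - each GPU exposes ~1.8 TB/s of NVLink bandwidth (a widely cited Blackwell figure)
#   - the 144- and 576-GPU domain sizes are hypothetical larger configurations

PER_GPU_NVLINK_TBPS = 1.8  # TB/s per GPU, Blackwell-generation figure

def domain_bandwidth_tbps(num_gpus: int, per_gpu_tbps: float = PER_GPU_NVLINK_TBPS) -> float:
    """Total bandwidth the NVLink fabric must carry for the whole domain, in TB/s."""
    return num_gpus * per_gpu_tbps

for gpus in (72, 144, 576):  # 72 = current Blackwell NVL72; larger sizes are hypothetical
    print(f"{gpus:>4} GPUs -> ~{domain_bandwidth_tbps(gpus):,.0f} TB/s of fabric bandwidth")

# 72 GPUs -> ~130 TB/s; 576 GPUs -> ~1,037 TB/s.
# Copper can handle the shortest hops, but as the domain spans more trays and racks,
# a growing share of this traffic has to ride on optical links.
```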
The shift from copper to photonics — using light instead of electrical signals to move data — is the mandatory next step. And it's already behind schedule.
The indium phosphide problem
If photonics is the solution, why isn't everyone already using it?
Because the supply chain doesn't exist yet. Not at the scale AI demands.
The critical component in an optical transceiver is a tiny laser that converts electrical signals to light. These lasers are fabricated from indium phosphide (InP), a compound semiconductor with properties that make it ideal for high-speed optical communication. The problem is that InP fabrication is nothing like silicon chip manufacturing.
Silicon wafers benefit from sixty years of scaling, a massive global foundry ecosystem, and relentless cost reduction. InP epitaxial growth — the process of building up crystalline layers of indium phosphide — is performed by a small number of specialty manufacturers with inherently lower throughput and lower yields. There is no TSMC equivalent for InP. [1]
The result: current transceiver demand already exceeds InP supply by roughly a factor of two. Every major AI infrastructure buildout — from hyperscaler data centers to sovereign AI clusters — is competing for the same constrained pool of optical components.
If that sounds familiar, it should. In 2023 and 2024, NVIDIA's H100 GPU production was constrained not by chip design or demand, but by CoWoS — TSMC's advanced packaging technology that assembles the GPU die alongside high-bandwidth memory. NVIDIA solved that bottleneck by pre-paying for capacity, investing in packaging R&D, and cultivating multiple suppliers. [1]
The $4 billion photonics investment is the same playbook, applied to the next bottleneck in the chain.
Why two companies, not one
NVIDIA's decision to split its investment equally between Coherent and Lumentum wasn't indecisive — it was architecturally deliberate. [1]
The two companies occupy complementary positions in the optical stack. Coherent brings deep expertise in co-packaged optics (CPO) integration — the technology that embeds optical engines directly into switch and GPU packages rather than relying on separate pluggable transceivers. The company also specializes in fiber-to-chip connectors and broader optical packaging. [1]
Lumentum, meanwhile, focuses on the laser side: high-power continuous-wave laser chips used as external laser sources for CPO systems. If Coherent builds the optical engine, Lumentum builds the light source that powers it. [1]
Together, they cover distinct layers of the photonics supply chain — from the InP laser source through to the packaged optical engine that sits inside the data center hardware. By investing in both, NVIDIA ensures it has depth across the entire stack rather than depending on a single vendor for everything.
This mirrors NVIDIA's approach to high-bandwidth memory (HBM), where the company simultaneously cultivated relationships with SK Hynix, Samsung, and Micron. Single-source dependency at NVIDIA's scale is an existential supply chain risk. The company has learned that lesson repeatedly and isn't interested in learning it again. [1][3]
Scale-up versus scale-out: two different problems
One detail that's easy to miss in the announcement language is that NVIDIA is actually addressing two distinct optical challenges. [1]
Scale-up optics refers to chip-to-chip interconnects within a single NVLink domain — the ultra-fast links that let dozens of GPUs communicate as if they were one massive processor. As NVIDIA pushes from 72-GPU configurations in Blackwell to larger clusters in Rubin, these internal links need to move data at speeds where copper simply fails. Co-packaged optics can reduce power consumption by up to 3.5x and improve resiliency by 10x compared to traditional electrical links — gains that become essential when a single AI rack draws up to 600 kilowatts and networking alone can consume 10% of that power envelope. [1]
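Those percentages translate into real megawatts. The sketch below simply works through the rack-level arithmetic using the figures quoted above (a 600 kW rack, roughly 10% of it spent on networking, and an up-to-3.5x reduction from CPO); the 100-rack fleet size is a hypothetical example, not a number from the announcement.

```python
# Rack-level power arithmetic using the figures cited above.
RACK_POWER_KW = 600          # high-end AI rack power envelope
NETWORKING_SHARE = 0.10      # ~10% of rack power goes to networking
CPO_POWER_REDUCTION = 3.5    # up-to-3.5x reduction claimed for co-packaged optics

networking_kw = RACK_POWER_KW * NETWORKING_SHARE   # 60 kW on traditional optics
cpo_kw = networking_kw / CPO_POWER_REDUCTION       # ~17 kW with CPO
saved_kw = networking_kw - cpo_kw                  # ~43 kW freed up per rack

print(f"Networking power today:  {networking_kw:.0f} kW per rack")
print(f"Networking power w/ CPO: {cpo_kw:.1f} kW per rack")
print(f"Headroom reclaimed:      {saved_kw:.1f} kW per rack "
      f"({saved_kw / RACK_POWER_KW:.0%} of the rack budget)")

# Across a 100-rack AI factory (hypothetical size), that is roughly
# 4.3 MW of power that can go to GPUs instead of moving bits around.
print(f"Across 100 racks: ~{saved_kw * 100 / 1000:.1f} MW reclaimed")
```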
Scale-out optics refers to the rack-to-rack fabric — the network that connects thousands of GPU servers into a unified training cluster. This is a volume problem more than a speed problem. A cluster with tens of thousands of GPUs requires hundreds of thousands of individual optical links. Each link depends on the same constrained InP laser supply. At this scale, even a small shortfall in optical component production can delay an entire data center buildout.
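To see how the volume adds up, here is a hedged back-of-envelope count. The cluster size, the one-port-per-GPU assumption, and the three-tier fat-tree topology are illustrative simplifications, not details from the announcement, but they are enough to show why a cluster in the tens of thousands of GPUs needs optical components in the hundreds of thousands.

```python
# Back-of-envelope: optical link and transceiver counts for a large training cluster.
# Assumptions (illustrative, not from the announcement):
#   - 65,536 GPUs, one scale-out network port per GPU
#   - non-blocking three-tier fat-tree: roughly one link per endpoint at each tier
#   - each link terminates in a transceiver (and therefore an InP laser) at both ends

GPUS = 65_536
PORTS_PER_GPU = 1
FABRIC_TIERS = 3  # leaf, spine, core

endpoints = GPUS * PORTS_PER_GPU
optical_links = endpoints * FABRIC_TIERS   # ~1 link per endpoint per tier
transceivers = optical_links * 2           # one module on each end of a link

print(f"Endpoints:            {endpoints:,}")      # 65,536
print(f"Optical links:        {optical_links:,}")  # ~197,000
print(f"Transceivers/lasers:  {transceivers:,}")   # ~393,000

# Even under simple assumptions, one cluster consumes hundreds of thousands of
# links and laser-based modules, all drawing on the same constrained InP supply
# described above.
```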
NVIDIA needs to solve both, and the Coherent/Lumentum investments are structured to address both layers.
What this means for the AI infrastructure race
The photonics investment tells us something important about where the AI industry actually is in its buildout cycle.
The popular narrative focuses on chips — who has the fastest GPU, whose custom silicon can beat NVIDIA at inference, how many H100 equivalents are shipping per quarter. But the companies actually building AI infrastructure at scale have moved past the chip conversation. They're worried about everything around the chip: power delivery, cooling, packaging, and increasingly, interconnects. [2][3]
Morgan Stanley recently reinstated NVIDIA as its top semiconductor pick with a $260 price target, citing the company's roughly 85% share of global AI processor revenue and strong enterprise AI infrastructure demand through 2026. [3] But the photonics investment suggests NVIDIA itself sees the competitive landscape differently than Wall Street does.
NVIDIA isn't spending $4 billion on optics because it's worried about AMD or custom chips from hyperscalers. It's spending $4 billion because without solving the optical bottleneck, its own GPU roadmap stalls. You can design the fastest chip in the world, but if you can't connect ten thousand of them at the speeds AI training requires, the chip's performance is irrelevant.
Meanwhile, NVIDIA is also preparing a new inference-focused chip — reportedly based on technology acquired from Groq — that could reshape the competitive dynamics with Broadcom and Google in the inference market. [2] If that chip materializes, it'll need the same photonic interconnects to operate at scale. Every product in NVIDIA's pipeline depends on the optical supply chain these investments are meant to secure.
The bigger picture
There's a pattern to how AI infrastructure evolves: each generation's breakthrough becomes the next generation's bottleneck.
GPUs were the breakthrough that enabled modern AI training. Then GPU supply became the bottleneck (2022–2023). Then advanced packaging became the bottleneck (2023–2024). Then high-bandwidth memory became the bottleneck (2024–2025). Now it's optical interconnects.
Each time, NVIDIA has responded the same way: identify the constraint early, invest directly in the supply chain, lock up capacity with multibillion-dollar commitments, and create a dual- or multi-source strategy to ensure continuity. The $4 billion photonics bet follows this playbook precisely.
The companies that will define the next era of AI aren't just the ones designing the best chips. They're the ones that control the full stack — from silicon to packaging to memory to the light that carries data between it all. NVIDIA is making sure that when the rest of the industry realizes photonics is the bottleneck, it has already bought the solution.