
Nvidia Bets $4 Billion on Photonics to Keep AI Fed
Nvidia’s $4B push into photonics with Lumentum and Coherent shows how lasers and optical links are becoming the real choke point for scaling AI — not just GPUs.
Nvidia just made one of its clearest bets yet on where the next AI bottleneck lies — and it isn’t more GPUs. The company is committing $4 billion to photonics suppliers Lumentum and Coherent, doubling down on lasers and optical interconnects as the limiting factor for how far, and how cheaply, AI factories can scale.
On March 2, 2026, Nvidia announced a multiyear strategic partnership with Lumentum that includes a $2 billion investment and multibillion-dollar purchase commitments for advanced laser components, along with priority access to future manufacturing capacity in a new U.S. fab. Nvidia framed the deal explicitly as an optics play for "gigawatt-scale AI factories," where traditional copper links and switch architectures are buckling under bandwidth and energy demands.
That move effectively pairs a $2 billion Lumentum stake with Nvidia’s existing collaboration with Coherent, another photonics heavyweight that has been working with the chipmaker on silicon photonics networking switches and co-packaged optics (CPO) to connect what Nvidia calls "AI factories" — clusters of tens or hundreds of thousands of GPUs. Coherent highlighted that ecosystem role at Nvidia’s GTC in 2025, positioning its 800G and 1.6T optical modules and circuit switches as core ingredients in next-generation AI data centers.
Why photonics, and why now?
Massive AI models have pushed data center networks into a new regime. The cost of training frontier systems has been rising at roughly 2.4x per year since 2016, with cluster interconnect now accounting for 10–15% of total training outlay for state-of-the-art runs, according to a 2024 cost analysis of GPT-4, Gemini and other models. As model sizes and context windows grow, the energy spent just moving bits between GPUs begins to rival the energy used for computation itself.
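To make the compounding concrete, here is a back-of-envelope sketch in Python. The 2023 baseline of roughly $100 million for a frontier run and the 12% interconnect share are illustrative assumptions for the sketch, not figures from the analysis cited above.

```python
# Back-of-envelope projection of frontier training-run costs,
# assuming the ~2.4x-per-year growth rate cited above holds.
BASELINE_YEAR = 2023
BASELINE_COST_USD = 100e6   # assumed ~$100M frontier run (illustrative)
GROWTH_PER_YEAR = 2.4       # growth rate from the cited cost analysis
INTERCONNECT_SHARE = 0.12   # assumed midpoint of the 10-15% range

for year in range(BASELINE_YEAR, 2028):
    cost = BASELINE_COST_USD * GROWTH_PER_YEAR ** (year - BASELINE_YEAR)
    print(f"{year}: total ~${cost / 1e9:.2f}B, "
          f"interconnect ~${cost * INTERCONNECT_SHARE / 1e9:.2f}B")
```

Under these assumptions, the total crosses $1 billion around 2026, in line with the projection discussed later in this piece.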
Optical links are the escape valve. High-speed lasers and electro-absorption modulated laser (EML) transmitters already sit at the heart of 400G and 800G data center transceivers, and industry roadmaps point to 1.6T modules deployed at scale later this decade. Where copper interconnects hit length and power limits, photonics offers longer reach, higher bandwidth and significantly lower energy per bit — precisely the profile hyperscalers need as racks push toward hundreds of kilowatts of power draw.
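The energy-per-bit framing translates directly into megawatts at fabric scale. The pJ/bit figures and the fabric size in this sketch are round-number assumptions, not vendor specifications:

```python
# Rough interconnect power for a large GPU fabric at different
# energy-per-bit figures (all pJ/bit values are assumptions).
PJ_PER_J = 1e-12

links_pj_per_bit = {
    "long-reach copper + retimers": 20.0,  # assumed
    "pluggable optics":             15.0,  # assumed
    "co-packaged optics":            5.0,  # assumed
}

# Hypothetical fabric: 100,000 GPUs moving 800 Gb/s each.
fabric_bps = 100_000 * 800e9

for name, pj in links_pj_per_bit.items():
    watts = pj * PJ_PER_J * fabric_bps
    print(f"{name}: ~{watts / 1e6:.1f} MW for the interconnect alone")
```

At those scales, shaving even a few picojoules per bit frees megawatts that can go to compute instead of networking.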
Nvidia’s latest deal gives it supply-chain leverage over that optical layer. The Lumentum partnership includes not only equity but future capacity rights tied to a new U.S.-based fab, explicitly framed as a response to soaring AI data center power and bandwidth needs. Coherent, for its part, is deepening work on co-packaged optics — moving optical engines from the front panel of switches directly next to the switch ASIC, cutting electrical trace losses and enabling denser fabrics.
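The co-packaging argument is largely about signal integrity: every inch of PCB trace between the switch ASIC and the optical engine attenuates a 100G-class electrical signal, and past a certain loss budget the link needs power-hungry retimers. A toy calculation, with assumed loss and length figures:

```python
# Toy signal-integrity comparison: electrical loss between the switch
# ASIC and its optical engine. Loss and lengths are assumed figures.
LOSS_DB_PER_INCH = 1.5  # assumed PCB loss near 28 GHz (112G PAM4 Nyquist)

trace_inches = {
    "front-panel pluggable": 10.0,  # assumed ASIC-to-faceplate run
    "co-packaged optics":     0.5,  # optical engine beside the ASIC
}

for name, inches in trace_inches.items():
    loss_db = LOSS_DB_PER_INCH * inches
    verdict = "retimers likely needed" if loss_db > 10 else "direct drive plausible"
    print(f"{name}: ~{loss_db:.1f} dB trace loss ({verdict})")
```

Eliminating those retimers is where much of CPO's claimed power saving comes from.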
The new bottleneck: memory, optics and power
These photonics bets land amid mounting signs that the AI race is shifting from chip design to infrastructure economics. A global memory crunch tied to AI buildouts has already pushed server DRAM contract prices up by as much as 50% and left hyperscalers receiving barely 70% of their orders, as manufacturers divert capacity into high-bandwidth memory (HBM) for accelerators. Researchers are now explicitly treating HBM reliability and cost as tunable system parameters, exploring architectures that trade off error protection to lower bit cost for inference clusters.
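That trade-off is easy to caricature in a few lines of code: error-correction bits are capacity you pay for but cannot store data in, so relaxing protection lowers the cost of a usable gigabyte while raising the rate of unmasked errors. All prices and error rates below are invented for illustration:

```python
# Toy model of the HBM cost/reliability knob: weaker error protection
# means fewer check bits per data bit, hence cheaper usable capacity,
# at the price of a higher uncorrected bit error rate. All numbers
# below are invented for illustration.
PRICE_PER_GB = 10.0  # assumed $/GB of raw HBM capacity

configs = {
    "full ECC":    {"check_per_data": 8 / 64, "uncorrected_ber": 1e-18},
    "relaxed ECC": {"check_per_data": 4 / 64, "uncorrected_ber": 1e-14},
    "no ECC":      {"check_per_data": 0.0,    "uncorrected_ber": 1e-11},
}

for name, cfg in configs.items():
    cost = PRICE_PER_GB * (1 + cfg["check_per_data"])
    print(f"{name}: ${cost:.2f} per usable GB, "
          f"uncorrected BER ~{cfg['uncorrected_ber']:.0e}")
```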
In that context, Nvidia’s move to lock in multibillion-dollar optical supply looks less like a side bet and more like vertical consolidation. Nvidia’s results already move the stocks of its suppliers across lasers, networking gear and cooling: a strong data center quarter in late 2025 was followed by outsized stock jumps for Coherent, Lumentum and other infrastructure vendors, as investors priced in years of elevated AI capex.
The strategic question is who can still afford to play. Academic analyses suggest the largest training runs could exceed $1 billion by 2027 if current trends continue, limiting frontier-scale development to a handful of players with the balance sheets — and now, crucially, the reserved optics and memory capacity — to support them. That concentration has knock-on effects for competition policy, open research and national AI strategies.
A hardware race beneath the model hype
For all the attention on model architectures and agents, Nvidia’s $4 billion photonics push underscores a quieter reality: the next wave of AI progress may be gated less by clever software and more by how fast light can be routed through glass and silicon.
If optical transceivers, CPO-based switches and integrated lasers can deliver the promised reductions in energy per bit, they could flatten — or at least bend — the curve of AI training costs, making trillion-parameter-scale systems and persistent, real-time inference clusters more economically viable. If they cannot, the AI boom runs headlong into physical and financial limits on power, cooling and component supply.
Either way, Nvidia’s latest move signals where it believes the leverage now lies. The company that owns not just the GPUs, but also the lasers and links that keep them talking, could end up defining who gets to build the largest brains — and at what price.