We’re making sure computers keep getting better. With light.
Founded by MIT researchers in 2017. $850M raised. $4.4B valuation. Building the photonic technologies that will power the next era of AI infrastructure.
Mission
Every major AI lab is building toward the same thing: a single coherent machine made of hundreds of thousands of processors, training models too large for any one chip to hold. The compute exists. The problem is connecting it.
Electrical interconnects hit a wall. Bandwidth is constrained by the physical perimeter of the chip—the shoreline. As clusters grow, the number of connections scales quadratically, but the edges of the package don’t. GPUs spend more time waiting for data than computing. Trillions of dollars in silicon, underutilized.
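The scaling mismatch above is simple arithmetic. As an illustrative sketch (not Lightmatter data, and assuming a square die with full all-to-all connectivity), the link count grows quadratically with cluster size while the electrical "shoreline" per chip stays fixed:

```python
# Illustrative arithmetic: pairwise links in an all-to-all cluster grow as
# O(n^2), while the die perimeter available for electrical I/O grows only
# linearly with edge length. Numbers here are hypothetical.

def pairwise_links(n_chips: int) -> int:
    # Full all-to-all connectivity: n * (n - 1) / 2 links.
    return n_chips * (n_chips - 1) // 2

def shoreline_mm(die_edge_mm: float) -> float:
    # Electrical I/O is confined to the package perimeter.
    return 4 * die_edge_mm

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} chips -> {pairwise_links(n):>14,} links; "
          f"shoreline per chip fixed at {shoreline_mm(30.0):.0f} mm")
```

Going from 1,000 to 100,000 chips multiplies the required links by roughly 10,000x, while the perimeter of each package does not grow at all.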
Light doesn’t have this problem. Photons travel without resistive loss, cross paths without interference, and carry multiple signals on a single fiber. The physics are fundamentally better.
Lightmatter builds the photonic infrastructure for AI at scale—interconnects, lasers, and eventually compute itself. We’re not optimizing the old architecture. We’re replacing the constraint.
Locations
Mountain View (HQ)
Boston
Hsinchu
Toronto
Leadership
Lightmatter’s team combines expertise in silicon photonics, high-speed SerDes design, advanced packaging, AI systems architecture, and hyperscale data center infrastructure. We’re hiring across all functions.
Why Lightmatter
Architectural Leadership
Edgeless I/O is not incremental—it’s a fundamental shift. While others optimize perimeter-constrained designs, we’ve eliminated the constraint entirely.
First-mover advantage in category-defining technology. Decade-scale roadmap for continued bandwidth scaling.
114 Tbps today. 1+ Pbps tomorrow.
Production Ready
Not research. Not vaporware. Multiple generations of production-ready technology.
Partnerships with TSMC, GlobalFoundries, and Tower ensure high-volume capability. Reference platforms are in customers' hands today.
Solving Real Problems
GPU utilization runs extremely low due to bandwidth bottlenecks. Hundreds of millions of dollars in stranded compute capacity. Models limited by communication overhead, not compute.
Passage eliminates the constraint. 4x faster training for the same power. 100,000+ GPU clusters. Frontier AI becomes economically viable. Accelerated AI model development.
Team & Expertise
Deep expertise spanning silicon photonics, high-speed analog design, advanced packaging, AI systems, and hyperscale infrastructure.
Decades of combined experience from leading semiconductor, optics, and data center companies.
We understand both the physics and the product.
Join us
We’re hiring engineers, researchers, and leaders across all functions to build the future of AI infrastructure.