OFC 2025 industry reflections - Part 3
Monday, May 12, 2025 at 12:30PM
Roy Rubenstein

Gazettabyte is asking industry figures for their thoughts after attending the OFC show in San Francisco. In the penultimate part, the contributions are from Cisco's Bill Gartner, Lumentum's Matt Sysak, Ramya Barna of Mixx Technologies, and Ericsson's Antonio Tartaglia.

San Francisco skyline. Source: Shutterstock

Bill Gartner, Senior Vice President and General Manager, Optical Systems and Optics, Cisco  

There was certainly much buzz around co-packaged optics at Nvidia’s GTC event, and that carried over into OFC.

The prevailing thinking seems to be that large-scale co-packaged optics deployment is years away. While co-packaged optics has many benefits, there are challenges that need to be overcome before that happens.

Existing approaches, such as linear pluggable optics (LPO), continue to be discussed as interim solutions that could achieve close to the power savings of co-packaged optics while preserving a multi-vendor pluggable market. They are likely to serve the industry until co-packaged optics becomes necessary.

By all accounts, IP-over-DWDM, or Routed Optical Networking as Cisco calls it, is now mainstream, enabling network operators to take advantage of the cost, space, and power savings in almost every part of the network.

Through the OpenZR+ and OpenROADM models, coherent pluggable usage has expanded beyond data centre interconnect (DCI) and metro applications. The subject was covered in many presentations and announcements, including several trials by Arelion and Internet2 of the new 800-gigabit ZR+ and 400-gigabit ultra-long-haul coherent pluggables. ZR and ZR+ pluggable optics now account for more than half of the coherent ports industry-wide.
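As a rough illustration of the arithmetic behind these modules, the sketch below estimates the raw line rate of a coherent pluggable from its symbol rate, modulation order, and dual-polarisation transmission. The operating points (roughly 120 Gbaud DP-16QAM for an 800G-class module, 60 Gbaud DP-16QAM for a 400G-class one) are assumptions for illustration, not vendor specifications.

```python
import math

def coherent_line_rate_gbps(baud_gbd: float, qam_order: int, polarisations: int = 2) -> float:
    """Raw line rate of a coherent interface, before FEC and framing overhead."""
    bits_per_symbol = math.log2(qam_order)           # e.g. 16-QAM -> 4 bits per symbol
    return baud_gbd * bits_per_symbol * polarisations

# Illustrative (assumed) operating points, not vendor specifications.
print(coherent_line_rate_gbps(120, 16))   # ~960 Gb/s raw, enough to carry 800GbE plus FEC
print(coherent_line_rate_gbps(60, 16))    # ~480 Gb/s raw, a 400G-class operating point
```

The gap between the raw line rate and the client rate (960 versus 800 Gb/s in this assumed example) is consumed by forward error correction and framing overhead.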

I also saw some coherent-lite demonstrations, and while the ecosystem is expanding, it appears this will be a corner case for the near future.

Lastly, power reduction was another strong theme, which is where co-packaged optics, LPO, and linear retimed optics (LRO) originated. As optics, switches, routers, and GPU (graphics processor unit) servers become faster and denser, data centres cannot support the insatiable need for more power. Network operators and equipment manufacturers are seeking alternative ways to lower power, such as liquid cooling and liquid immersion.

What did I learn at OFC? Pradeep Sindhu, Technical Fellow and Corporate Vice President of Silicon with Microsoft, gave one of the plenary talks. He argued that we should stop racing to higher lane speeds because doing so will compromise scale, and that 200 gigabits per second (Gbps) is a technology sweet spot.

As for show surprises, the investor presence was markedly larger than usual, a positive for the industry. With almost 17,000 people attending OFC this year and AI driving incremental bandwidth that optics will serve, you could feel the excitement on the show floor.

We’re looking forward to seeing what technologies will prevail in 2026.

 

Matt Sysak, CTO, Cloud and Networking Platform at Lumentum.  

The industry spotlight at OFC was on next-generation data centre interconnects and growing AI-driven bandwidth demands.

Several suppliers demonstrated 400 gigabit-per-lane optics, with Lumentum showcasing both 450 gigabit-per-second (Gbps) indium phosphide Mach-Zehnder and 448 gigabit-per-lane externally modulated laser (EML) technologies.
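For direct-detect optics, the per-lane figures follow from the symbol rate and the PAM order. A minimal sketch of that relationship is below; mapping a 448 Gb/s lane to roughly 224 Gbaud PAM4 is one plausible combination chosen for illustration, not a statement of how these demonstrations were configured.

```python
import math

def lane_rate_gbps(baud_gbd: float, pam_levels: int) -> float:
    """Raw per-lane bit rate of an intensity-modulated link, before FEC overhead."""
    return baud_gbd * math.log2(pam_levels)   # PAM4 carries 2 bits per symbol

# Two plausible ways to reach 400 Gb/s-class lanes (assumed values, for illustration):
print(lane_rate_gbps(224, 4))   # 224 Gbaud PAM4 -> 448 Gb/s per lane
print(lane_rate_gbps(150, 8))   # 150 Gbaud PAM8 -> 450 Gb/s per lane
```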

In long-haul networking, the continued expansion of data centre traffic across longer fibre spans drives demand for high-capacity solutions such as 800G ZR C+L band transceivers. I learned at the show that the focus has shifted from incremental upgrades to building fundamentally new network layers capable of supporting AI workloads at scale. Conversations around innovations such as 400-gigabit DFB Mach-Zehnder lasers and advancements in optical circuit switches made it clear that the industry is driving innovation across every network layer.  

One of the biggest surprises was the surge in optical circuit switch players. The core technology has expanded beyond traditional micro-electro-mechanical systems (MEMS) to include liquid crystal and silicon photonics approaches. There is clearly growing demand for high-radix, low-power optical interconnects to address rising data centre power consumption.

With our proven expertise in MEMS and the ability to scale port counts with low insertion loss, we believe Lumentum’s optical circuit switch offers clear advantages over competing technologies.

 

Ramya Barna, Head of Marketing and Key Partnerships, Mixx Technologies.  

It was evident at OFC 2025 that the industry is entering a new phase, not just of optical adoption but also of architectural introspection.

Co-packaged optics was the dominant theme on the show floor, with vendors aligning around tighter electrical-optical integration at the switch level. However, discussions with hyperscalers were more layered and revealing.

Meta spoke about the need for full-stack co-optimisation: treating photonics not just as a peripheral, but as part of the compute fabric.

AWS emphasised co-designing power and photonics—optics and electricity as first-class citizens in infrastructure planning.

Microsoft, meanwhile, challenged the community on reliability and manufacturability at DRAM scale, demanding optics that can be trusted as much as memory is.

These inputs reinforce a core truth: the AI bottleneck is not compute capacity, but bandwidth, latency, and power at scale.

The current wave of co-packaged optics implementations is a step forward, but it remains constrained by legacy system boundaries where retimers, linear interfaces, and electrical SerDes bottlenecks still dominate.

At Mixx, we’ve long viewed this not as an integration problem but as an architectural one. AI infrastructure requires a redesign in which photonics is not bolted on but directly integrated into compute: native optical paths between ASICs. That is our thesis with optical input-output (I/O).

OFC 2025 reinforced that the industry is converging on the same realisation: optical interfaces must move deeper into the package, closer to the logic. We're aligned on timelines, and most importantly, on the problem definition.

Looking forward to OFC 2026, where system-level transformation takes over.

 

Antonio Tartaglia, System Manager and Expert in Photonics at Radio and Transport Engineering, Transport Systems at Ericsson.

The effort invested in traditional telecom connectivity is decreasing, and more attention is being paid to solutions that have the potential to unlock new revenue streams for communications service providers (CSP).

A good example is distributed fibre sensing, which involves reusing deployed telecom-grade fibre plants. Optical connectivity for satellite communications was also a trending topic, with much excitement about low-Earth orbit (LEO) satellites as a complement to radio access networks (RAN).

OFC 2025 highlighted that the telecom industry must continue to reuse wisely and adapt optical technologies developed for datacom, which is acting as the innovation powerhouse for the whole industry.

The only way to reuse the solutions developed for data centres is, well … to build a data centre. Still, the same basic technologies can often be reused and adapted to telecom use cases with reasonable development effort.

I believe industry-wide initiatives (MSAs, alliances, consortia) pursuing this objective will become even more critical for telecom. 

Speaking of the segment close to my heart, optical connectivity for RAN, the adaptation of datacom technologies works fine for short-reach (under 2 km) optical interconnects, where we reuse one optical lane of data centres’ multi-lane optical interfaces.
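The reuse argument is essentially lane arithmetic: a multi-lane datacom module is a bundle of independent single-lane links, and one such lane is a good match for a short-reach fronthaul hop. A minimal sketch, using an assumed 8 x 100 Gb/s module purely as an example:

```python
def breakout(module_rate_gbps: int, lanes: int) -> list[int]:
    """Split a multi-lane datacom module into independent single-lane links."""
    per_lane = module_rate_gbps // lanes
    return [per_lane] * lanes

# An assumed 800 Gb/s, 8-lane module breaks out into eight 100 Gb/s links,
# each of which could serve one short-reach (<2 km) radio transport connection.
print(breakout(800, 8))   # [100, 100, 100, 100, 100, 100, 100, 100]
```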

After OFC 2025, I believe the relentless optimisation of coherent technology towards shorter and shorter reaches, and the concurrent rise of packet fronthaul in RAN, could pave the way for a new breed of ‘coherent-lite’ optical solutions for radio transport networks.

It was awe-inspiring to hear talks on scaling AI compute clusters, which are now aiming at the ‘psychological’ threshold of AI models with 100 trillion parameters, roughly the estimated number of synapses in a human brain.

This journey will require clusters of millions of interconnected GPUs, resulting in gigawatt-scale data centres, with electric power availability limiting the choice of locations. An emerging research area for reducing power is integrated-optics “optical co-processors” for GPUs, performing energy-efficient matrix-vector multiplications in the optical domain. Although technology readiness is low, start-ups are already working on this challenge.
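A rough back-of-the-envelope calculation shows why such clusters end up in gigawatt territory. The per-accelerator power and facility overhead factor below are assumptions for illustration only, not figures from the talks.

```python
def cluster_power_gw(num_gpus: float, watts_per_gpu: float = 1_000,
                     facility_overhead: float = 1.3) -> float:
    """Estimate facility power in gigawatts for a GPU cluster.

    facility_overhead is a PUE-style multiplier for cooling, networking and
    power conversion (an assumed value, for illustration).
    """
    return num_gpus * watts_per_gpu * facility_overhead / 1e9

print(cluster_power_gw(1_000_000))   # ~1.3 GW for a million accelerators
print(cluster_power_gw(2_000_000))   # ~2.6 GW, well beyond a single typical site
```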

The most obvious solution to the power conundrum seems to be dividing these GPU mega-clusters across smaller sites. This approach will increase the demand on data centre interconnects (DCI), requiring them to function as long-haul RDMA (remote direct memory access) interconnects.

These interconnects will need ultra-low latency and precise time synchronisation, which could be very attractive for future RAN transport needs.
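The latency budget follows directly from fibre propagation: light in standard single-mode fibre travels at roughly c divided by a group index of about 1.47, or close to 5 microseconds per kilometre each way. A minimal sketch of that arithmetic, with illustrative distances:

```python
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
FIBRE_GROUP_INDEX = 1.47            # typical value for standard single-mode fibre

def one_way_delay_us(distance_km: float) -> float:
    """One-way propagation delay over fibre, in microseconds."""
    return distance_km * FIBRE_GROUP_INDEX / SPEED_OF_LIGHT_KM_PER_S * 1e6

for km in (10, 80, 400):            # illustrative DCI distances
    print(f"{km:>4} km: {one_way_delay_us(km):7.1f} us one way")
# Roughly 5 us per km: ~49 us at 10 km, ~0.4 ms at 80 km, ~2 ms at 400 km.
```

At hundreds of kilometres, round-trip RDMA transactions therefore carry milliseconds of pure propagation delay, which is why latency engineering and tight time synchronisation become first-order design constraints for these interconnects.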
