AI, Hairball, and the Future of Optical Networking on Tap at OFC 2024

As in prior years, my team and I decided to craft a 2024 OFC summary. While 2023 saw significant new product announcements, I would argue that 2024 is a year of proof points – demonstrable execution on prior product announcements from across the vendor ecosystem. As an example, we demonstrated a 27% increase in fiber capacity with Super C enhancements to the GX OLS and the 1.2 Tb/s per wavelength ICE7 optical engine – both of which were announced in 2023.
The Optica Executive Forum kicked off on Monday. With increased participation from web-scale companies and a refreshed agenda that included evolutions in data center architectures and AI, the energy level and attendance at this year’s Executive Forum were way up. Generative AI, and its impact both inside and outside the data center, was a leading topic throughout the week.
I also moderated this year’s State of the Industry panel. After brief presentations and a spirited Q&A session, I asked each of our analysts to provide one word or phrase that best described OFC 2024. Here are their responses: Ruben Roy, Stifel, “rejuvenated;” Jimmy Yu, Dell’Oro, “back to normal;” Andrew Schmitt, Cignal AI, “200G per lane” (in reference to the move to 200 Gb/s SERDES lanes that underpin 800G and 1.6T pluggable optics); and Mike Genovese, Rosenblatt, “buzzy,” indicating that this year’s event had a level of buzz and excitement not seen in our industry in many years. Finally, Meta Marshall, Morgan Stanley, went with “fascinating,” referring to the many lively debates about future directions: 800G vs. 1.6T optics, pluggables vs. embedded solutions, and data center optical connectivity options that include linear-drive pluggable optics (LPO), linear receive optics (LRO), and co-packaged optics (CPO).

On Wednesday evening, we led the sponsorship for food, drinks, and a concert featuring the 1980s rock tribute band Hairball on the USS Midway aircraft carrier and museum. Over 3,600 OFC attendees joined us – smiling and singing along with so many familiar songs. Now, let’s check out the other perspectives.
Operationalizing IPoDWDM
By Jon Baldry
I spent a lot of the show supporting our 400G ICE-X intelligent coherent pluggables demo. ICE-X pluggables aren’t your run-of-the-mill optics. They are effectively a virtual transponder, or even a complete optical networking system, virtualized in a pluggable form factor. The demonstration highlighted not only advanced optical networking functionality, such as automated optical power control to support network turn-up and recovery from fiber faults, but also the host-independent management approach that can really help operationalize the deployment of IPoDWDM architectures in real-world networks. This advanced functionality and management architecture resonated with the network operators attending the demo, especially the communications service providers (CSPs) that have separate IP and optical teams. Many recognized the operational challenges within their own organizations and saw how these capabilities could help them bring the economic advantages of IPoDWDM into their networks while maintaining their existing operational structures and responsibilities.

In Control
By Teresa Monteiro
In the last couple of years, we saw web-scalers deploy IPoDWDM extensively, equipping 400G ZR pluggables in routers for data center interconnect (DCI). With the emergence of higher-performance modules, including OpenZR+ and Infinera’s ICE-X, CSPs also want to benefit from the power, space, and cost savings of IPoDWDM. They want to extend IPoDWDM to metro and long-haul networks and operate any coherent pluggable, equipped in any router, over any OLS. But many wonder what the best control architecture for IPoDWDM networks is – in fact, this was a hot topic in several OFC 2024 panels and presentations, and at our booth. Our understanding, after conversations with many CSPs, is that there is no single approach that suits all: the best pluggable management architecture depends on the network scenario, established operational practices, and the overall network automation environment. And that is OK: our Open Wave Manager demonstration at OFC showed that a vendor-agnostic optical domain controller can simultaneously support alternative pluggable management approaches, alone or cooperating with an IP controller, with support from the router software or in a host-independent manner. This flexibility enables CSPs to start adopting IPoDWDM today, maintaining end-to-end optical network control and visibility, using the pluggable management architecture that suits them now and evolving it as their experience and use cases mature.
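To make the idea concrete, here is a minimal conceptual sketch – not Infinera’s implementation, and with hypothetical class and method names – of how a single optical domain controller could support host-assisted and host-independent pluggable management side by side:

```python
# Conceptual sketch only - not Infinera's implementation; all class and
# method names are hypothetical. It shows one controller supporting two
# pluggable management styles at the same time.
from abc import ABC, abstractmethod

class PluggableManagement(ABC):
    @abstractmethod
    def apply_config(self, pluggable_id: str, config: dict) -> None: ...

class HostAssisted(PluggableManagement):
    """Configuration is relayed via the host router's software / an IP controller."""
    def apply_config(self, pluggable_id, config):
        print(f"[via router / IP controller] {pluggable_id} <- {config}")

class HostIndependent(PluggableManagement):
    """The optical controller talks to the coherent pluggable directly."""
    def apply_config(self, pluggable_id, config):
        print(f"[direct to pluggable]        {pluggable_id} <- {config}")

class OpticalDomainController:
    """One vendor-agnostic controller; the management style is chosen per device."""
    def __init__(self):
        self._strategies = {}

    def register(self, pluggable_id, strategy: PluggableManagement):
        self._strategies[pluggable_id] = strategy

    def provision(self, pluggable_id, config):
        self._strategies[pluggable_id].apply_config(pluggable_id, config)

controller = OpticalDomainController()
controller.register("router-A/port-3", HostAssisted())
controller.register("router-B/port-1", HostIndependent())
controller.provision("router-A/port-3", {"frequency_thz": 193.1, "tx_power_dbm": -9})
controller.provision("router-B/port-1", {"frequency_thz": 193.5, "tx_power_dbm": -9})
```

The point of the sketch is simply that the choice of management style can be made per router or per network domain, and changed later, without replacing the controller.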
800G and Beyond
By Fady Masoud
For me, OFC 2024 was about what’s next in coherent pluggables as their application scope expands in optical networking, from DCI to metro aggregation to long-haul. My particular interest was 800G coherent pluggables, as they deliver greater capacity than 400G pluggables while also enabling lower bit rates such as 200G, 400G, and 600G to be transported over unprecedented distances in a pluggable form factor. Infinera had a sample of our ICE-X 800G technology on display, both physically and virtually. We used Meta’s latest mixed reality headset, the Meta Quest 3, to help our customers and partners explore the key building blocks of our next-generation coherent pluggables – internals that are not easily viewable on the physical device. The demonstration highlighted the multi-haul capabilities of our 800G coherent pluggable and interoperable probabilistic constellation shaping (PCS). Such capabilities will redefine optical networking economics (cost, space, power) in numerous applications.
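As a rough illustration of how one coherent engine can serve line rates from 800G down to 400G, the net data rate of a dual-polarization interface scales with the symbol rate, the bits carried per symbol (set by the modulation format or by PCS), and the FEC code rate. The symbol rate and code rate below are illustrative assumptions, not the specifications of any particular module:

```python
# Rough relationship between symbol rate, bits per symbol, and net data rate
# for a dual-polarization coherent interface. The symbol rate and FEC code
# rate are illustrative, not the specs of any module.

def net_rate_gbps(baud_gbd, bits_per_symbol, code_rate=0.83, polarizations=2):
    return baud_gbd * bits_per_symbol * polarizations * code_rate

BAUD = 120  # GBd, illustrative
for bits, label in [(4, "16QAM-like"), (3, "8QAM-like"), (2, "QPSK-like")]:
    print(f"{label:10s} -> ~{net_rate_gbps(BAUD, bits):.0f} Gb/s net")
```

Carrying fewer bits per symbol, or shaping the constellation more aggressively with PCS, lowers the net rate but improves noise tolerance, which is what lets the same pluggable reach much farther at 400G or 600G than at 800G.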

Bring On the Terabit Era
By Christian Uremovic
Service providers I talk with are constantly looking to cost-effectively get more capacity out of their fiber. So, for OFC 2024, we created a live 400-km network using Vascade® EX2500, Corning’s latest fiber technology. Our sincere thanks to Corning for their collaboration. Among other Infinera GX features, we demonstrated live operation of the Super C-band with the 32D ROADM-on-a-blade, the world’s smallest Raman Super C amplifier, and our newest ICE7 optical engine. The ICE7 optical engine delivers 2 x 1.2 Tb/s (1.2 Tb/s per wavelength) and is tunable across the 6.1 THz Super C-band. It’s all packed into a single CHM7 sled, offering 100 GbE, 400 GbE, and 800 GbE services, while providing about 30% lower power consumption and a threefold improvement in wavelength capacity-reach compared to the previous market-leading ICE6 engine. These technologies enable a meaningful fiber capacity increase of over 25% and, together with Super L-band, 100 Tb/s per fiber pair. Now that’s capacity for good. Alongside the Super C operation, we showed a 1.2T OTN switch-on-a-blade with 4 x QSFP-DD coherent ICE-X 400G ZR+ interfaces and FIPS-certified 600G technology in operation.
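For a back-of-the-envelope sense of where those figures come from: the capacity gain tracks the extra spectrum that Super C provides over the conventional C-band, and adding a Super L-band roughly doubles the usable spectrum again. In the sketch below, the 6.1 THz Super C width comes from the demo description, while the conventional C-band width and the spectral efficiency are illustrative assumptions:

```python
# Back-of-the-envelope check of the Super C / Super L capacity figures.
# The 6.1 THz Super C width is from the demo description; the conventional
# C-band width and the spectral efficiency are illustrative assumptions.

C_BAND_THZ = 4.8          # conventional C-band (assumption)
SUPER_C_THZ = 6.1         # Super C-band, per the demo description
SUPER_L_THZ = 6.1         # assume a Super L-band of similar width

extra_spectrum = SUPER_C_THZ / C_BAND_THZ - 1
print(f"Super C vs. C-band spectrum gain: {extra_spectrum:.0%}")   # ~27%

spectral_efficiency = 8.2  # bit/s/Hz, illustrative for a modern coherent engine
capacity_tbps = (SUPER_C_THZ + SUPER_L_THZ) * spectral_efficiency
print(f"Super C + Super L capacity: ~{capacity_tbps:.0f} Tb/s per fiber pair")
```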
We also showcased a live private line emulation (PLE) demo. In collaboration with Intel, we transformed an ODU4 signal into a transparent packet stream and transported the service over a converged packet core built on Juniper routers. This might be the first ODU4 PLE demonstration in the industry.
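The core idea behind PLE is simple even if the real encapsulation is not: slice a constant-bit-rate client signal into sequenced packets, carry them over the packet network, and reassemble them in order, bit for bit, at the far end. The toy sketch below illustrates only that slicing-and-reassembly idea; it omits the actual PLE headers, clock recovery, and OAM:

```python
# Toy illustration of the slicing-and-reassembly idea behind private line
# emulation. A constant-bit-rate client (standing in for a slice of ODU4
# traffic) is cut into sequenced packets and rebuilt bit-for-bit at egress.
# Real PLE adds encapsulation headers, timing recovery, and OAM.
import os
import random

PAYLOAD_BYTES = 1024

def ingress(client_bitstream: bytes):
    """Slice the client stream into (sequence number, payload) packets."""
    return [
        (seq, client_bitstream[i:i + PAYLOAD_BYTES])
        for seq, i in enumerate(range(0, len(client_bitstream), PAYLOAD_BYTES))
    ]

def egress(packets):
    """Restore packet order by sequence number and rebuild the client stream."""
    return b"".join(payload for _, payload in sorted(packets))

client = os.urandom(10_000)        # stand-in for client traffic
packets = ingress(client)
random.shuffle(packets)            # the packet network may reorder packets
assert egress(packets) == client   # the service is delivered bit-transparently
```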

Streaming into the Future with Predictive Analytics
By Kurt Raaflaub
One thing I did not anticipate going into OFC 2024 was the excitement and active engagement we received from network operators with our streaming telemetry and predictive analytics demo. During the live multi-vendor demonstration, we launched a rich network health dashboard showing real-time and historical networking statistics with up to two-second granularity. Prior data collection methods like SNMP are slow, incomplete, and challenging to operationalize – especially across multi-vendor networks. This was different. With streaming telemetry, we can capture and store performance metrics for every network element and its subcomponents – fans, CPUs, memory, coherent wavelength performance, and more. Utilizing open-source tools like Telegraf and Grafana, network operators can easily visualize the performance of their networks in a more unified way – resulting in enhanced network reliability and increased customer satisfaction. By combining machine learning with a rich and continuously updated network data set, the positive network outcomes are nearly limitless. And that is even more exciting.
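As a minimal sketch of the kind of analytics this enables, the snippet below watches a stream of two-second samples for one metric and flags values that deviate sharply from a rolling baseline. The metric, thresholds, and data are hypothetical; in the demo itself, collection and visualization were handled by tools such as Telegraf and Grafana:

```python
# Minimal sketch: flag anomalies in a stream of two-second telemetry samples
# using a rolling mean / standard-deviation baseline. Metric, thresholds, and
# data are hypothetical; real collection and dashboards would be handled by
# tools such as Telegraf and Grafana.
import random
from collections import deque
from statistics import mean, stdev

class RollingAnomalyDetector:
    def __init__(self, window=150, threshold=4.0):
        self.samples = deque(maxlen=window)  # ~5 minutes at 2 s granularity
        self.threshold = threshold           # allowed deviations from baseline

    def update(self, value):
        """Return True if the new sample deviates sharply from the baseline."""
        anomalous = False
        if len(self.samples) >= 30:
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Synthetic pre-FEC Q-factor samples for one wavelength, with a sudden drop
detector = RollingAnomalyDetector()
samples = [random.gauss(9.0, 0.05) for _ in range(300)] + [7.5]
for t, q in enumerate(samples):
    if detector.update(q):
        print(f"t={2 * t}s: possible degradation (Q={q:.2f} dB) - raise an alert")
```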
Subsea Going for More
By Geoff Bennett
While OFC is not specifically focused on subsea technologies, there was a lot of discussion about how we can continue to meet the demand for capacity across submarine network links – especially as we approach the practical limits of individual fiber pair capacity. Demand for subsea capacity is set to grow at a CAGR of about 35%, and the industry-wide approach to submarine cable evolution to meet this demand is based on space-division multiplexing (SDM), a technique that emphasizes maximum total cable capacity rather than maximum individual fiber pair capacity. To enhance SDM economics, we will see increasing use of high-baud-rate transponders such as Infinera’s ICE7 to close subsea links at higher per-wavelength data rates, which means fewer transponders using less rack space and power for a given service load.
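A quick comparison shows the SDM trade-off and why higher-rate wavelengths help the economics. Every figure below is hypothetical, chosen only to illustrate the arithmetic, and does not describe any real cable or product:

```python
# Hypothetical comparison of a conventional vs. an SDM subsea design: SDM
# trades per-pair capacity for more fiber pairs to maximize total cable
# capacity. All numbers are made up for illustration only.

def cable_capacity(fiber_pairs, tbps_per_pair):
    return fiber_pairs * tbps_per_pair

conventional = cable_capacity(fiber_pairs=8,  tbps_per_pair=20)  # fewer, high-power pairs
sdm          = cable_capacity(fiber_pairs=16, tbps_per_pair=15)  # more pairs, each run at lower power

print(f"Conventional design: {conventional} Tb/s per cable")
print(f"SDM design:          {sdm} Tb/s per cable")

# Higher-rate wavelengths also mean fewer transponders for the same load:
service_load_tbps = 24
print(f"Transponders at 0.8 Tb/s per wave: {service_load_tbps / 0.8:.0f}")
print(f"Transponders at 1.2 Tb/s per wave: {service_load_tbps / 1.2:.0f}")
```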
But in the next year or so we may see the first practical use of multi-core fibers, and the question we all need answered is whether this type of fiber can be manufactured at economic yields for submarine cable use. It’s a topic that I hope to explore in a future Infinera blog, so stay tuned!

Reducing Power Consumption for AI and the Planet
By Paul Momtahan
While attending the technical conference and show floor presentations, I noticed one common theme: how to minimize power consumption. In addition to the usual concerns about reducing carbon emissions, I saw extra focus this year on power consumption related to artificial intelligence. With generative AI models growing by 100x in two years, while GPU compute power grew by 3.3x and interconnect bandwidth by 1.4x over the same period, AI clusters are growing and consuming more and more power. The need for power efficiency in optical interconnects for AI clusters was therefore a recurring topic. At the same time, as the spectral efficiency of coherent transmission approaches the Shannon limit, watts per bit is becoming a key performance metric alongside cost per bit.
Power consumption is also a key challenge for next-generation 1.6T coherent pluggables, driving optical engine design choices including baud rates, modulation formats, DSP CMOS process node sizes, and photonic materials, with indium phosphide and thin-film lithium niobate as the leading candidates. The ITU-T presented its work on standards for energy efficiency. Coming full circle, while there was plenty of concern about AI power consumption, there was also optimism that applying AI to our networks could in turn reduce optical network power consumption. As an example, we could selectively put ports, line modules, or whole portions of the network into hibernation mode when demand is predicted to be lower.
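To make the metric concrete, watts per bit is simply a module’s power draw divided by its data rate, usually quoted in picojoules per bit. The module power figures in the sketch below are hypothetical, purely to show the arithmetic:

```python
# Watts per bit: a module's power draw divided by its data rate gives energy
# per bit, usually quoted in picojoules per bit. The power figures below are
# hypothetical, purely to show the arithmetic.

def pj_per_bit(power_watts, rate_gbps):
    return power_watts / (rate_gbps * 1e9) * 1e12   # W / (bit/s) -> pJ/bit

print(f"Hypothetical 800G pluggable at 25 W: {pj_per_bit(25, 800):.1f} pJ/bit")
print(f"Hypothetical 1.6T pluggable at 40 W: {pj_per_bit(40, 1600):.1f} pJ/bit")
```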
See You in San Francisco!
That’s a wrap on OFC 2024. The quality and quantity of papers, panels, presentations, tutorials, and demonstrations make this event so special to our industry. As I made my way back home to suburban Chicago, I felt a combination of exhaustion and pride at being a part of such a talented and impactful industry – one that reliably connects people and applications to the cloud and each other every single day and in some of the harshest conditions in the world.
Rest up. Planning for OFC 2025 in San Francisco is already underway.
Safe travels everyone.