Originally posted on Data Center POST.

U.S. data centers are moving quickly from 100G/200G to 400G and 800G, while preparing for 1.6T. The main driver is AI: training and inference fabrics generate huge east-west (server-to-server) traffic, and any network bottleneck leaves expensive GPUs/accelerators underutilized. Cisco notes that modern AI workloads are “data-intensive” and generate “massive east-west traffic within data centers”.
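To make the bottleneck concrete, here is a rough back-of-the-envelope sketch (all numbers are illustrative assumptions, not figures from the article) of how long a per-step gradient all-reduce takes at different link speeds, and how much of each training step a GPU would spend waiting if the communication is not overlapped with compute:

```python
# Illustrative model only: hypothetical gradient size, worker count, and
# compute time. The point is the trend, not the absolute numbers.

def allreduce_seconds(payload_gb: float, nodes: int, link_gbps: float) -> float:
    """Ring all-reduce moves roughly 2*(N-1)/N of the payload per node."""
    bits_on_wire = payload_gb * 8 * 2 * (nodes - 1) / nodes  # gigabits
    return bits_on_wire / link_gbps

payload_gb = 10.0   # assumed gradient volume exchanged per step (GB)
nodes = 64          # assumed number of workers
compute_s = 0.25    # assumed GPU compute time per step (seconds)

for link_gbps in (100, 200, 400, 800):
    comm_s = allreduce_seconds(payload_gb, nodes, link_gbps)
    # Without compute/communication overlap, the step stretches to
    # compute + comm, and the GPU idles for the comm portion.
    idle_frac = comm_s / (compute_s + comm_s)
    print(f"{link_gbps:>4}G link: all-reduce {comm_s * 1000:6.1f} ms, "
          f"GPU idle ~{idle_frac:5.1%} of each step")
```

Even under these rough assumptions, stepping from 100G to 400G or 800G links cuts the communication stall from most of the step to a fraction of it, which is exactly the underutilization problem faster fabrics are meant to remove.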

This step change is now viable because switching and NIC silicon can deliver much higher bandwidth density. Broadcom’s Tomahawk 5-class devices, for example, support up to 128×400GbE or 64×800GbE on a single chip, enabling higher-radix leaf/spine designs with fewer boxes and links. Optics are also improving in cost and power efficiency; a Cisco Live optics session highlights a representative comparison of one 400G module at ~12W versus four 100G modules at ~17W for the same aggregate bandwidth.
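For readers who want the arithmetic spelled out, the short sketch below reproduces the figures above; reading the ~17W value as the combined draw of the four 100G modules is an assumption, and only the ratios matter for the comparison:

```python
# Quick arithmetic behind the cited figures.

# Switch-silicon bandwidth density (Tomahawk 5-class: 51.2 Tbps per chip)
configs = {"128 x 400GbE": 128 * 400, "64 x 800GbE": 64 * 800}
for name, gbps in configs.items():
    print(f"{name}: {gbps / 1000:.1f} Tbps per chip")

# Optics power for 400G of aggregate bandwidth
single_400g_w = 12.0   # one 400G module (~12W, per the cited session)
four_100g_w = 17.0     # four 100G modules (~17W, assumed to be combined draw)
print(f"400G module : {single_400g_w / 4:.2f} W per 100G of capacity")
print(f"4 x 100G    : {four_100g_w / 4:.2f} W per 100G of capacity")
print(f"Power saving: {1 - single_400g_w / four_100g_w:.0%}")
```

On those numbers, a single 400G module delivers the same capacity at roughly 3W per 100G versus about 4.25W per 100G for the four-module option, a saving on the order of 30 percent before counting the reduction in ports, cabling, and switch count.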

In parallel, multi-site “metro cloud” growth is increasing demand for faster data center interconnect (DCI). Coherent pluggables and emerging standards such as OIF 800ZR are making routed IP-over-DWDM architectures more practical for metro DCI.

To continue reading, please click here.