Fig. 4: Network performance of the proposed optical switching and control system.

From: Nanosecond optical switching and control system for data center networks

a Data recovery time as a function of the inter-packet gap length. Case-1: without the clock distribution mechanism and without pulse-transition insertion; Case-2: with the clock distribution mechanism and without pulse-transition insertion; Case-3: with the clock distribution mechanism and with pulse-transition insertion. b 7-day packet loss rate measured on the optical link. The number of data packets sent to each specific ToR is recorded every day, and the packets received at that ToR are counted as well. For example, the total number of data packets delivered from ToR2, ToR3 and ToR4 to ToR1 is CT, and the count of correct packets after CRC checking at the destination ToR1 is CR; the packet loss rate on the optical link to ToR1 is therefore calculated as (CT - CR)/CT. c The detailed packet loss rate on the 4th day. d Throughput and server-to-server latency for a large-scale network. The label control mechanism, OFC protocol, and clock distribution are implemented in the OMNeT++ model, fully following the technical design. In this model, 40 servers are grouped in each rack and 6 WDM transceivers are deployed at each ToR, each equipped with a 25 KB electrical buffer. e ON/OFF power ratio under different driving currents. f BER curve and eye diagram of the deployed SOA switch. The Xilinx IBERT IP Core is deployed in each FPGA-based ToR to measure the BER performance of the SOA switch. SOA semiconductor optical amplifier, B2B back-to-back.
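
The per-link packet loss rate reported in panel b follows directly from the sent and received counters described above. The short Python sketch below illustrates that calculation; the function name and the counter values are illustrative assumptions, not the authors' measurement code.

    def packet_loss_rate(c_t: int, c_r: int) -> float:
        """Packet loss rate on an optical link, computed as (CT - CR) / CT,
        where CT is the total number of packets sent to the destination ToR
        and CR is the number of packets passing the CRC check there."""
        if c_t == 0:
            raise ValueError("no packets were sent on this link")
        return (c_t - c_r) / c_t

    # Hypothetical daily counters for the link terminating at ToR1
    # (packets delivered from ToR2, ToR3 and ToR4 to ToR1).
    c_t = 1_250_000_000   # CT: packets sent, illustrative value only
    c_r = 1_249_999_987   # CR: packets passing CRC at ToR1, illustrative
    print(f"Packet loss rate to ToR1: {packet_loss_rate(c_t, c_r):.2e}")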
