(% class="row" %)
(((
(% class="col-xs-12 col-sm-8 test-report-content" %)
(((
----

=== Editor's Note ===
{{id name="editors_note"/}}The 2026 edition of our multi-vendor interoperability test has been a massive undertaking by the **12 leading vendors participating this year:** Arista Networks, Calnex Solutions, Ciena, Cisco Systems, Ericsson, HPE, Keysight, Microchip, Nokia, Raisecom, Ribbon, and ZTE, joined by the EANTC engineering team. Together, we have spent more than 1,600 person-days (equivalent to seven person-years) designing and implementing test cases with 56 device types, creating more than 1,300 result datasets. More than three tons of equipment were moved across the globe for this undertaking; the setup is shown live in Paris.

**Why invest all this effort?** Because the transport network innovations that we evaluate – Segment Routing, SRv6, EVPN, Orchestration and Network Automation, and Time Synchronization – are the foundation of today’s and tomorrow’s Internet backbones, including mobile networks and data centers. Fortunately, thanks to the outstanding continued work by the IETF and other SDOs, these technologies are fully standardized and open. To ensure that multi-vendor networks remain a viable option for network operators, we validate the latest innovations with manufacturers before they hit service provider proof-of-concept labs and enterprise acceptance procedures. Our joint testing **increases trust, accelerates innovation, and enables diverse supplier sourcing**. Digital sovereignty is built on these pillars: enabling global alternatives and minimizing dependencies on single vendors or regions. In our tests, vendors from the Americas, Europe, and Asia collaborate to implement open standards.

**What are the specific innovations this year?** Outside our lab, most operators are still deploying Segment Routing or planning to do so in the near future. In the test lab, we declared interoperability success for many basic test areas in Segment Routing and EVPN years ago; all previous reports are available from our website. Nowadays, our protocol tests focus on closing the remaining functional, reliability, scalability, and network automation gaps: advanced multicast EVPNs; multi-homing in SRv6; telemetry collection and digital twins; large-scale partial/full timing architectures; and more. These topics are crucial proof points to enable Segment Routing migration for the many custom network configurations out there.

By far the most complex integration effort this year was spent on **use case testing**: we crafted two large-scale, realistic use case scenarios:

* 5G xHaul with Segment Routing Interworking
* EVPN service automation and assurance

These use case scenarios involved almost all participating vendors, creating realistic architectures that can serve as **blueprints**, guiding operators towards vendor-independent network design. Though very demanding and time-consuming, the use case tests rewarded us with strong results. They are the foundation for the live demos shown at the Upperside Congress in Paris this year, are well documented, and will be expanded next year.

Of course, we must not forget about AI these days. Our tests included both relevant sub-topics this year:

**AI-enabled networking** is governed by standardized provisioning (via PCE, YANG models, and BGP-SR: check), extensive telemetry data (via BGP-LS and TWAMP: check), and automated optimization (via digital twins: check). The vendors participating in this test area are on a steady path; that said, it is still a long way towards multi-vendor Autonomous Networks. Today, partial automation of specific service aspects in SR and EVPN is possible; it is important that operators explicitly require standardized methods in their RFPs.

**Networking for AI workloads** is a topic we wanted to cover more intensively, but it was too early. The next generation of data center transport has been defined by the Ultra Ethernet Consortium (UEC). The implementations naturally take time to mature because they require major hardware innovations. We covered only a small aspect this time and plan to expand the UEC integration next year.

This 16-page report is only a very short summary of all results; please follow the QR codes for many more test results. We hope our joint effort is beneficial for WAN, mobile xHaul, and data center network architects!

If you have detailed questions or suggestions for next year’s test coverage, or would like to tap our brains for an individual network design, please contact us.

=== EANTC's Mission ===
EANTC is a leading independent test lab dedicated to validating the interoperability, performance, robustness, and security of network solutions across platforms and applications. With 35 years of expertise, EANTC supports innovation by strengthening the reliability and operational readiness of vendor solutions. Through transparent, reproducible assessments, we help the industry ensure compliance with standards, reduce operational risk, and enable stable, trustworthy network deployments.

=== Working Process ===
Preparations for the EANTC Transport & Cloud Networks Interop Test 2026 began in September 2025 with a technical call involving all vendors interested in participating. This initial discussion covered the overall event structure and was followed by dedicated technical calls for each test area, led by the EANTC team alongside vendor experts.

During these sessions, potential test cases were identified and refined, with vendors contributing new ideas and draft cases from their teams. The focus remained on exploring innovative testing approaches and ensuring alignment with the latest industry standards.

The hot-staging took place in Berlin over three weeks. During the first week, engineers arrived to install devices, with the latest hardware shipped in from around the world. From January 26 to February 6, more than 85 engineers participated in intensive on-site testing. Detailed discussions and rapid problem-solving during this period resulted in over 1,839 validated outcomes and the preparation of the live demos for the Upperside World Congress in Paris.

=== Interoperability Test Results ===
EANTC engineers closely supported and validated every test combination, following strict procedures and predefined steps. The resulting report presents only results that were consistently logged, submitted, and verified by EANTC specialists, ensuring accuracy and preventing misinterpretations or false positives.

Because our focus is on multi-vendor testing, single-vendor cases are generally excluded. An exception is made if a previously validated multi-vendor test fails during hot-staging, leaving only one vendor with a working, standards-compliant implementation. In such situations, EANTC acknowledges that vendor's effort and includes the result in the report.

This test report highlights successful test combinations, clearly identifying the participating vendors and devices. “Tested” in this context refers specifically to multi-vendor interoperability. Combinations that did not pass are not shown in the diagrams but are mentioned anonymously to provide insight into the industry's current state. Maintaining confidentiality is essential to encourage vendors to present their latest solutions, which are often still in beta, creating a safe environment for testing, learning, and advancing network interoperability.

The test results will be presented live at the Upperside World Congress (previously the “MPLS World Congress”) in Paris, March 24–26. For 22 years, EANTC has showcased its interoperability testing at Upperside conferences, highlighting the latest advances in network technologies.

(% id="prev-next-links" %)
| |[[Next ~>>>doc:Main.EANTC Transport & Cloud Networks Interop Test Report 2026.Overall Physical Test Topology.WebHome]] |
)))

(% class="col-xs-12 col-sm-4 test-report-sidebar" %)
(((
{{box}}
{{include reference="Sidebar Nav"/}}
{{/box}}
)))
)))