Cisco ACI or VMware NSX?
Cisco ACI or VMware NSX: competitive or complementary?
It really depends on your requirements: neither solution will accomplish every objective, but you can potentially implement both to best achieve your desired outcome.
Let’s begin with a quick analysis of both:
Quick Analysis of Cisco ACI
ACI provides both the hardware and software required to run the SDN solution. It is positioned as an SDN fabric to which you can attach all of your physical and virtual L4-L7 services, compute, and management solutions. The main focus of the solution is to enable centralised routing, switching, and stateless firewalling across the fabric. It uses VXLAN as the overlay and IS-IS and MP-BGP for the underlay routing.
Quick Analysis of VMware NSX-T
NSX-T is a purely software-based SDN solution that runs on a traditional server hypervisor platform such as ESXi or KVM. It offers many L4-L7 services, such as distributed firewalling, routing, switching, IDS, load balancing, VPN, NAT, DHCP and DNS relay. VMware NSX-T uses GENEVE as the overlay and can use physical hardware from any vendor for underlay connectivity.
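To make that software-only model concrete, here is a minimal Python sketch of creating a GENEVE overlay segment through the NSX-T Policy REST API. The manager address, credentials, segment name and transport-zone path are hypothetical placeholders; treat this as a sketch, not a production implementation.

```python
import requests

NSX = "https://nsx-manager.example.com"  # hypothetical NSX Manager address
AUTH = ("admin", "password")             # placeholder credentials

# Declaratively create (or update) a GENEVE overlay segment.
segment = {
    "display_name": "web-tier",
    # Path to an existing overlay transport zone -- assumed for illustration.
    "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                           "/transport-zones/overlay-tz",
    "subnets": [{"gateway_address": "10.10.10.1/24"}],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/segments/web-tier",
    json=segment,
    auth=AUTH,
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
```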
Both architectures have been through many upgrades, but what hasn't changed is that Cisco ACI depends on Nexus 9000 hardware, whereas NSX-T is independent of the switching architecture it runs on top of.
Software-defined networking is being considered for flexible, agile, next-generation Data Centres, and with this comes the need to provide functionality such as automation, multiple Data Centres, virtualisation, segmentation and much more.
So, with neither SDN solution ticking every box, what is the value of running two SDN architectures to meet your needs?
The strength of ACI: providing the physical fabric
A new Data Centre project or refresh will require a new switching fabric that is simple to manage and automated. Automation is vital, and Cisco ACI makes it straightforward.
ACI's open API makes deployment and day-to-day operations considerably easier, but there is still a lot going on under the covers.
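As a flavour of that open API, the following minimal Python sketch authenticates to a hypothetical APIC and lists the switches registered in the fabric. The hostname and credentials are placeholders.

```python
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
session = requests.Session()

# Authenticate; the APIC returns a session token as a cookie.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Query every switch and controller registered in the fabric.
nodes = session.get(f"{APIC}/api/node/class/fabricNode.json", verify=False).json()
for item in nodes["imdata"]:
    attrs = item["fabricNode"]["attributes"]
    print(attrs["name"], attrs["role"], attrs["serial"])
```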
A Clos fabric requires the same number of spine and leaf switches whether it runs ACI or traditional VXLAN. The delta in cost is typically the APICs (management controllers), which is relatively minor once sales promotions are taken into consideration.
Another benefit of utilising ACI for the underlay, beyond the basic fabric, is the ability to connect non-virtualised workloads. NSX-T only supports ESXi and KVM hypervisors, specific Linux and Windows bare-metal workloads, and specific container platforms. That leaves plenty of workloads where ACI can provide not only Data Centre connectivity but application policies as well.
This is one of the scenarios where we can utilise both environments. You may want to implement a centralised application policy end-to-end, but some bare-metal workloads are not supported on NSX-T. ACI would help fill that gap while still providing a simple Data Centre fabric.
How and why is NSX utilised with ACI?
With a single overlay, ACI handles the networking while NSX provides the distributed firewall functionality. Until recently, the single-overlay design was the most frequently implemented NSX-on-ACI architecture.
With a double overlay, ACI provides an overlay and security for workloads not managed by NSX-T. NSX-T runs on top of the ACI fabric but has its own overlay for networking, using ACI's overlay as the transport. NSX-T then peers with ACI over BGP or static routes by connecting the NSX Edge to an ACI border leaf. This peering allows the enterprise network to reach NSX-T resources through the ACI fabric.
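To illustrate the peering step, here is a hedged Python sketch that adds a BGP neighbour, pointing at an ACI border leaf, to an NSX-T Tier-0 gateway via the Policy API. The gateway name, locale-services id, neighbour id, addresses and AS numbers are all illustrative assumptions.

```python
import requests

NSX = "https://nsx-manager.example.com"  # hypothetical NSX Manager address
AUTH = ("admin", "password")             # placeholder credentials

neighbor = {
    "neighbor_address": "192.0.2.1",  # ACI border-leaf L3Out address (example)
    "remote_as_num": "65001",         # ACI fabric's BGP AS (example)
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/t0-gateway/"
    "locale-services/default/bgp/neighbors/aci-border-leaf",
    json=neighbor,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```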
The double overlay architecture is becoming more popular as environments such as Pivotal Container Service (PKS), VMware Cloud Foundation and VMware Tanzu utilise NSX-T to provide their networking for cloud services. WhiteSpider can discuss these design elements in detail during an NSX on ACI design workshop.
Operational concerns
Even though running both technologies together is possible, it doesn't come without challenges. One of the biggest is ownership: for example, who is going to own NSX? Each environment is different due to technical capabilities, budget, and team structure.
Another challenge is cost. The hardware cost of a Cisco Data Centre fabric is similar with or without ACI when doing a refresh or greenfield implementation; the additional cost is the NSX component. If the requirements demand NSX functionality, such as micro-segmentation or IDS, is the incremental cost worth that functionality? If so, additional training and tooling for the teams must be taken into consideration, as each environment has its own set of tools for troubleshooting, visibility and analytics.
Use cases of both technologies at a glance:
Use cases for running Cisco ACI
- Centralised management of bare-metal and hypervisor server connectivity
- Centralised connectivity management of physical firewalls, load-balancers, proxies, etc.
- Traffic switching between the physical and virtual infrastructure
- Centralised traffic analysis and security rule creation
- Programmable physical fabric
- Logical segregation of physical and virtual connectivity (multi-tenancy; illustrated in the sketch after this list)
- Zero-touch provisioning and maintenance of physical switches
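As an example of the programmable-fabric and multi-tenancy items above, this short Python sketch creates a tenant through the APIC REST API. The APIC address, credentials and the "customer-a" name are hypothetical.

```python
import requests

APIC = "https://apic.example.com"  # hypothetical APIC address
session = requests.Session()

login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()

# Each fvTenant is an isolated policy container: the basis of ACI
# multi-tenancy. Posting it to the policy universe (uni) creates it.
tenant = {"fvTenant": {"attributes": {"name": "customer-a"}}}
session.post(f"{APIC}/api/mo/uni.json", json=tenant, verify=False).raise_for_status()
```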
Use cases for running VMware NSX
- Reduce the passive infrastructure footprint
- Private cloud automation across the DMZ and DC
- Granular micro-segmentation of workloads across the DMZ and DC (see the sketch after this list)
- A single management plane to automate L4-L7 infrastructure provisioning across the DMZ and DC
- Seamless and faster infrastructure provisioning
- Visibility and troubleshooting
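As an example of the micro-segmentation item above, this hedged Python sketch creates a distributed firewall policy through the NSX-T Policy API, allowing only HTTPS from a web tier to an app tier. The group paths, rule and policy ids are illustrative, not taken from a real environment.

```python
import requests

NSX = "https://nsx-manager.example.com"  # hypothetical NSX Manager address
AUTH = ("admin", "password")             # placeholder credentials

policy = {
    "category": "Application",
    "rules": [{
        "id": "web-to-app-https",
        "action": "ALLOW",
        "sequence_number": 10,
        "source_groups": ["/infra/domains/default/groups/web-tier"],
        "destination_groups": ["/infra/domains/default/groups/app-tier"],
        "services": ["/infra/services/HTTPS"],  # predefined NSX service
    }],
}

resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/security-policies/web-app",
    json=policy,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()
```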
How can WhiteSpider help?
If you are considering either or both of these technologies, we will work with you and guide you through which architecture is best for you. By understanding your objectives, we will help determine which solution scenario best fits your environment.