NSX-T Manager Datacenter

The NSX-T Manager is the new solution for network virtualization, and this version brings many changes from the NSX-V Manager. The manager role and the controller role are now merged into a single appliance, so the NSX Manager sits in both the management plane and the control plane: most management- and control-plane functions happen in the same appliance.

The NSX Manager appliance includes three roles:

  • Policy
  • Manager
  • Central Control Plane (CCP)

NSX Policy

Policy is the centralized location for configuring networking and security across the environment. It allows users to enter their desired configuration through the NSX portal. It is embedded in the NSX Manager appliance and mapped one-to-one with the Manager role.
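
For illustration, here is a minimal Python sketch of reading the declarative policy tree through the Policy API. The /policy/api/v1/infra path is the documented root of the policy configuration; the manager address and credentials below are placeholders, not real values.

```python
# Minimal sketch: reading the declarative policy tree from the Policy role.
# The manager address and credentials are placeholders for a lab setup.
import requests

NSX_MANAGER = "https://nsx-mgr.lab.local"  # hypothetical NSX Manager FQDN/VIP
AUTH = ("admin", "VMware1!VMware1!")       # placeholder credentials

# /policy/api/v1/infra is the root object of the policy configuration tree.
resp = requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra",
    auth=AUTH,
    verify=False,  # lab only: skip TLS verification for self-signed certs
)
resp.raise_for_status()
infra = resp.json()
print(infra.get("resource_type"), infra.get("path"))
```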

NSX Manager

  • Receives and validates the NSX Policy configurations.
  • Publishes the configuration to the Central Control Plane (CCP); see the sketch after this list.
  • Installs and prepares the data plane components on the transport nodes.
  • Retrieves statistical data from the data plane components.
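
To make that flow concrete, here is a purely conceptual Python sketch of the receive-validate-publish pipeline. The class and function names are invented for illustration and are not real NSX internals.

```python
# Conceptual sketch of the Manager role's pipeline (names are invented,
# not real NSX internals): receive policy config, validate it, publish to CCP.
from dataclasses import dataclass

@dataclass
class PolicyConfig:
    resource_type: str
    payload: dict

def validate(config: PolicyConfig) -> bool:
    # Stand-in for the Manager role's schema/semantic validation.
    return bool(config.resource_type) and isinstance(config.payload, dict)

def publish_to_ccp(config: PolicyConfig) -> None:
    # In the real product this hand-off goes over NSX-RPC to the CCP;
    # here we just print to show the step.
    print(f"publishing {config.resource_type} to CCP: {config.payload}")

cfg = PolicyConfig("SegmentConfig", {"name": "web-segment", "vlan": 10})
if validate(cfg):
    publish_to_ccp(cfg)
```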

The components of the NSX Manager node interact with each other as follows:

  • The reverse proxy is the entry point to the NSX Manager. The Policy role handles all the networking and security policies and enforces them through the Manager role.
  • Proton is the core component of the NSX Manager node. It implements various functionalities such as logical switching, routing, distributed firewall, and others.

Both NSX Policy and Proton persist their data in CorfuDB.

Central Control Plane (CCP)

This role provides the control plane functionality for switching, routing, and the firewall. It computes all runtime state based on the configuration received from the management plane, pushes the stateless configuration down to the forwarding engines, and disseminates topology information reported by the data plane elements, as sketched below.
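
As an example of that runtime-state computation, here is a conceptual Python sketch (all names and data are invented): the CCP works out which transport nodes need the state for which segment, based on the desired configuration and where the data plane reports the VMs are running, then pushes it down.

```python
# Conceptual sketch (invented names/data): the CCP computing which transport
# nodes need the runtime state for which segment, from management-plane
# config plus data-plane placement reports.
desired_segments = {"ls-web": ["vm1", "vm2"], "ls-db": ["vm3"]}
vm_location = {"vm1": "tn-esxi-01", "vm2": "tn-esxi-02", "vm3": "tn-esxi-01"}

span: dict[str, set[str]] = {}
for segment, vms in desired_segments.items():
    span[segment] = {vm_location[vm] for vm in vms}

for segment, nodes in span.items():
    # The real CCP would push this over NSX-RPC to each node's LCP.
    print(f"{segment} -> push runtime state to {sorted(nodes)}")
```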

The control plane is divided into two components:

  1. The Central Control Plane (CCP), which runs in the NSX Manager. The management plane pushes configuration down to the CCP over NSX-RPC (remote procedure call), and the CCP uses the same protocol to push the information down to each LCP.
  2. The Local Control Plane (LCP), which runs on each transport node (a compute node such as ESXi or KVM).

Each transport node sends three tables to the CCP: the MAC table, the ARP table, and the TEP table. The CCP combines them into one global table per type and sends the result back down to the local control plane on every transport node, as sketched below. All of this data is saved in CorfuDB.
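
Here is a conceptual Python sketch of that aggregation (node names and table entries are invented): each node reports its local tables, the CCP merges them per table type, and the merged view goes back to every LCP.

```python
# Conceptual sketch (invented data): each transport node reports its local
# MAC/ARP/TEP tables; the CCP merges them and pushes the global view back.
reports = {
    "tn-esxi-01": {"mac": {"aa:bb:cc:00:00:01": "tep-1"},
                   "arp": {"10.0.0.1": "aa:bb:cc:00:00:01"},
                   "tep": {"tep-1": "192.168.10.11"}},
    "tn-kvm-01":  {"mac": {"aa:bb:cc:00:00:02": "tep-2"},
                   "arp": {"10.0.0.2": "aa:bb:cc:00:00:02"},
                   "tep": {"tep-2": "192.168.10.12"}},
}

# CCP side: merge each table type across all reporting nodes.
merged = {table: {} for table in ("mac", "arp", "tep")}
for node, tables in reports.items():
    for table, entries in tables.items():
        merged[table].update(entries)

# Push the combined tables back down to every node's LCP.
for node in reports:
    print(f"CCP -> {node}: {merged}")
```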

NSX-T Manager High Availability

After merging the major part of the control plane into it, the NSX Manager becomes very critical, so we need high availability in case of hardware failure or any other issue. The recommendation is to run it as a cluster of three nodes, either behind an external load balancer or with a cluster VIP configured to automate failover between them. One of the managers is elected as the leader. All data is saved in CorfuDB and replicated to the other nodes; the maximum supported latency for this replication between the manager nodes is 10 ms. Note that the VIP does not load-balance traffic; it only automates the traffic redirection to the node that currently holds it.
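
To round this off, here is a minimal sketch of checking the health of the three-node cluster through the management API, assuming the standard /api/v1/cluster/status endpoint; the VIP address, credentials, and the exact response fields printed are assumptions to verify against the API reference.

```python
# Minimal sketch: checking the health of the three-node manager cluster
# through the management API (address and credentials are placeholders).
import requests

VIP = "https://nsx-vip.lab.local"     # hypothetical cluster VIP
AUTH = ("admin", "VMware1!VMware1!")  # placeholder credentials

resp = requests.get(f"{VIP}/api/v1/cluster/status", auth=AUTH, verify=False)
resp.raise_for_status()
status = resp.json()
# The response reports overall management/control cluster health.
print(status.get("mgmt_cluster_status"))
print(status.get("control_cluster_status"))
```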