This post circles back to an article I wrote in January 2013 about Dividing Bandwidth of a 10 GB CNA Adapter for ESXi Networking and Storage. In that article, I gave you an overview of how you can divide the available bandwidth on a 10 GB CNA card at the hardware level to create multiple vmnics and vmhbas for network and storage traffic respectively.
I got a lot of comments and feedback on that article, in which some of the experts spoke about doing the same with VMware vSphere Network I/O Control (NIOC). In a recent engagement, we faced a constraint under which the 10 GB adapter could not be segregated at the hardware level.
This was my opportunity to segregate the 10 GB network using vSphere Network I/O Control instead, and I wanted to share the learnings and the experience with my readers as well.
Quick Recap
A CNA, a.k.a. "Converged Network Adapter", is an I/O card on an x86 server that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it "converges" access to a storage area network and a general-purpose computer network. As simple as it sounds, it makes things simple in the datacenter as well. Instead of running separate cables from each NIC card, FC HBA or iSCSI card, you can use a single cable to do all of these tasks for you, because the CNA card is converged and can carry all the traffic on a single physical interface.
Since we do not want to segregate the bandwidth on the physical card, we will just do a simple split between Network and Storage. This is the approach when the chosen storage medium is Fibre Channel and not IP-based storage.
If it is IP storage such as NAS or iSCSI, we would divide the entire card into 1 vmnic per physical port in the CNA and then create port groups for VM Traffic, Management Traffic and IP Storage. However, in my case I had FC storage in place, hence the bandwidth on the physical card was divided as shown in the figure below:-
Here, the CNA card has 2 physical ports, each with 10 GB of bandwidth. I have further divided this card into 1 network card and 1 FC HBA per physical port. Hence, I will have a total of 2 network cards and 2 FC HBAs per CNA card. If you like the concept of No Single Point of Failure (SPOF) and can afford another card, then you would end up with 4 NIC ports and 4 FC HBA ports per blade server.
Now let's look at how I would use these NICs to configure the networking for the ESXi server. The diagram below shows how I would configure the networking on my ESXi server to get the best possible configuration out of the available hardware resources. Since we only have 2 network cards now, we will hook up all the port groups to them and use Network I/O Control to divide the bandwidth at the vSphere layer. Here is how things would be connected logically:-
**NOTE - NIOC requires a vSphere Distributed Switch (dvSwitch), hence you need to ensure that you are on Enterprise Plus licensing for this to work.
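If you prefer to drive this from a script rather than the vSphere Client, below is a minimal sketch of creating the distributed port groups on an existing dvSwitch using pyVmomi (the vSphere Python SDK). The vCenter address, credentials, dvSwitch name and port group names are placeholder assumptions, so adjust them to match your own environment:-

```python
# Minimal pyVmomi sketch: create one distributed port group per traffic type
# on an existing dvSwitch. All names and credentials below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password")
content = si.RetrieveContent()

# Locate the dvSwitch by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch")

# One early-binding port group per traffic type
specs = []
for pg_name in ["VM Traffic", "Management Network", "vMotion", "Fault Tolerance"]:
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = pg_name
    spec.type = "earlyBinding"
    spec.numPorts = 16
    specs.append(spec)

dvs.AddDVPortgroup_Task(spec=specs)
Disconnect(si)
```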
The fun is not over yet. Once you have set up everything on the dvSwitch, things would look like my lab dvSwitch. Look at the screenshot below:-
Now comes the part where you enable Network I/O Control (NIOC) and divide the network resources among the default or user-defined traffic types. Here are the steps to do this (a scripted sketch of the same configuration follows the steps).
1- On your vCenter Server, click Home -> Networking -> dvSwitch.
2- Click on the Resources tab as shown below and enable NIOC.
3- Once it is enabled, click on each resource pool listed for the different traffic types and assign shares.
4- Limit the bandwidth, if you need to, for any of the port groups.
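For those who like to automate, here is a rough pyVmomi sketch of steps 2 to 4, based on the network resource pool API available with NIOC on vSphere 5.x. The resource pool keys, share values, limits and the vCenter/dvSwitch names below are placeholder assumptions; adjust them to your own split:-

```python
# pyVmomi sketch: enable NIOC on a dvSwitch and set custom shares/limits on the
# built-in network resource pools. All values below are illustrative placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password")
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch")

# Step 2 - enable Network I/O Control on the dvSwitch
dvs.EnableNetworkResourceManagement(enable=True)

# Steps 3 and 4 - desired shares plus an optional limit (in Mbps, -1 = unlimited),
# keyed by the system resource pool keys used by NIOC.
desired = {
    "virtualMachine": (100, -1),
    "vmotion":        (50, -1),
    "management":     (25, -1),
    "faultTolerance": (25, -1),
}

specs = []
for pool in dvs.networkResourcePool:
    if pool.key not in desired:
        continue
    shares, limit = desired[pool.key]
    alloc = vim.DVSNetworkResourcePoolAllocationInfo(
        shares=vim.SharesInfo(level="custom", shares=shares),
        limit=limit)
    specs.append(vim.DVSNetworkResourcePoolConfigSpec(
        key=pool.key,
        configVersion=pool.configVersion,
        allocationInfo=alloc))

dvs.UpdateNetworkResourcePool(configSpec=specs)
Disconnect(si)
```

The same resource pools can be edited again at any time, which is exactly the on-the-fly flexibility I talk about in the benefits section below.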
Here is a screenshot of what I did with my network switch:-
Remember, you are free to adjust the bandwidth for the resource pools based on how much you want for your port groups. The bandwidths I have mentioned above are a guideline and should fit most requirements.
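To make the shares concrete, here is a quick back-of-the-envelope calculation (plain Python, with made-up share values) of how shares translate into bandwidth on a single 10 GB uplink when every traffic type is contending at the same time; when there is no contention, a traffic type can use the full uplink unless you set a limit:-

```python
# Rough illustration: under full contention, each traffic type gets a slice of a
# 10 Gb uplink proportional to its shares. The share values here are made up.
uplink_gbps = 10
shares = {"Virtual Machine": 100, "vMotion": 50, "Management": 25, "Fault Tolerance": 25}
total = sum(shares.values())  # 200 shares in total
for traffic, s in shares.items():
    print(f"{traffic}: {s * uplink_gbps / total:.1f} Gbps")
# Virtual Machine gets 5.0 Gbps, vMotion 2.5 Gbps, Management and FT 1.2 Gbps each
```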
Benefits over CNA level segregation
There are a few benefits of using this method, and I will quickly list them here:-
- You can change the bandwidth on the fly as per the requirement.
- No downtime is needed to make changes.
- The single point to configure or change things is the dvSwitch so dependency on each host is ruled out completely.
- Very easy to manage and control.
- The vSphere admin has no dependencies and can bump up the vMotion bandwidth if VMs need to be moved across quickly for some reason.
And I could keep on writing... So if you have Enterprise Plus licensing, I would definitely recommend this way of doing things.
Hope this helps you design the network and storage with the 10 GB adapter using enterprise-class features such as VMware Network I/O Control.
Comments
How did you get 160 Gbps with 2 x 6GB links in your last screenshot?
@KiOnf - That's 160 GB in total from 8 cards with 2 ports of 10 GB each. The Emulex card, after separation of fabric and network, still shows up as 10 GB.