In most of my recent projects, customers have been moving towards 10G converged adapters to gain the benefits of consolidating network and storage traffic, especially on blade server architectures.
I am writing this post to provide guidelines on how you can divide a 10Gb CNA card on your ESXi server to meet all your network and storage requirements. Before that, let’s have a look at what a 10Gig CNA is and which brands are available in the market for this technology.
A CNA card, a.k.a. “Converged Network Adapter”, is an I/O card on an x86 server that combines the functionality of a host bus adapter (HBA) with a network interface controller (NIC). In other words, it “converges” access to a storage area network and a general-purpose computer network. As simple as it sounds, it makes things simple in the datacenter as well. Instead of running cables down from each NIC, FC HBA, or iSCSI card, you can use a single cable to do all of these tasks for you. This is because the CNA card is converged and can carry all the traffic on a single physical interface.
There are a number of manufacturers of such cards, who either build these cards themselves or re-brand them with their own logo and custom firmware. Here are a few examples:-
– IBM, etc.
So as a customer you have a number of choices, and it is important that you choose what fits your existing infrastructure, or the new hardware if it is a greenfield site.
Let’s say you bought a CNA which gives you 4 virtual ports per physical port. Let’s see how we can divide the bandwidth of each physical port amongst the virtual ports for both storage and network communication.
On the physical card, the bandwidth can be divided as shown in the figure below:-
Here, the CNA card has 2 physical ports, each with 10Gb of bandwidth. I have further divided each physical port into 3 network cards and 1 FC HBA. Hence, I will have a total of 6 network cards and 2 FC HBAs per CNA card. If you like the concept of No Single Point of Failure (SPOF) and can afford another card, then you would end up with 12 NICs and 4 FC HBA ports per blade server.
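The port math above can be sketched in a few lines of Python. The constants simply mirror the example split (2 physical ports per card, each carved into 3 NICs and 1 FC HBA); they are not tied to any vendor's tooling, so adjust them to match your own card.

```python
# Virtual-port counts for the example split described above:
# each CNA has 2 physical ports, each carved into 3 NICs + 1 FC HBA.
PORTS_PER_CNA = 2
NICS_PER_PORT = 3
HBAS_PER_PORT = 1

def ports_per_server(num_cards: int) -> dict:
    """Total virtual NICs and FC HBAs a blade sees with num_cards CNA cards."""
    return {
        "nics": num_cards * PORTS_PER_CNA * NICS_PER_PORT,
        "fc_hbas": num_cards * PORTS_PER_CNA * HBAS_PER_PORT,
    }

print(ports_per_server(1))  # single card: 6 NICs, 2 FC HBAs
print(ports_per_server(2))  # redundant pair: 12 NICs, 4 FC HBAs
```

With one card you get the 6 NICs and 2 FC HBAs from the figure; adding a second card for redundancy doubles both counts.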
Isn’t that cool? A blade server with so many NICs! Well, this can be used on rack servers as well, as it will also reduce the back-end cabling.
Now, a last look at how I would use these NICs and FC ports to configure the networking for the ESXi server. The diagram below shows how I would configure the networking on my ESXi server to get the best possible configuration out of the available hardware resources.
The diagram above clearly shows how we have divided this bandwidth amongst all the required port groups. If you have 2 such cards, you will have high resiliency in your design, and the number of ports will double, providing better performance as well.
Remember, you are free to adjust the bandwidth for the virtual NICs and virtual FC HBAs based on how much you want for your port groups. The bandwidths I have mentioned above are a guideline, and can be used as-is since they fit most bills.