Friday, June 2, 2017
Cisco Nexus 1000V
Deploying Cisco's Nexus 1000v has gotten easier and easier as software updates have been released for it. I remember when the 1000v first came out and the difficulties people had (including myself!). One thing is for sure: Cisco listens to their customers and partners, and works hard to correct pain points as quickly as they can.
I recently was working with a customer who wanted a detailed step-by-step guide on how to deploy the 1000v without the use of all the various Cisco docs. The idea was a one-stop shop with screenshots outlining the install process.
I had not seen anything like this out and about, so I figured I'd share it with the public. Before anyone asks: the material below is not taken from that customer's environment; all screenshots are from within WWT's own lab environment.
There are a couple of ways to perform the install based on your comfort level. For this post, I'll be utilizing the GUI method, as it's the simplest and quickest way to get up and running. This wizard-based install will guide you through performing base configurations and getting the 1000v talking to vCenter. Once complete, you'll be able to immediately start configuring port-profiles for VM use!
Let's get started…
First, you need to understand that the 1000v is installed by importing an OVF template. To begin, head on over to Cisco and download the 1000v (note: you'll need a CCO login to download it). If you have not purchased licenses for the 1000v, good news! There is a fully functioning 60-day trial, so you can get it up and running to play with in your environment and see how it works. For this post, I'll be using version 4.0(4)SV1(3b).
Once you have the file downloaded, unzip it and place it somewhere we can access it. Now, in vCenter, we'll begin the process.
Note, the below is going to get very step-by-step as I mentioned earlier, so if you're comfortable deploying OVF template VMs, you can go ahead and skip ahead to the 1000v GUI install.
We'll begin by deploying an OVF Template. This is done within vCenter.
Browse to the folder where you uncompressed the downloaded file. You're looking for the following file: ~\Nexus1000v.4.0.4.SV1.3b\Nexus1000v.4.0.4.SV1.3b\VSM\Install\nexus-1000v.4.0.4.SV1.3b.ova.
Once found, let's click Next to continue.
The next screen should confirm that you'll be installing the Nexus 1000v, along with some version information. Click Next to continue.
Standard EULA. If you agree, click Accept, then Next to continue.
Remember, the 1000v is a virtual machine, so we'll have to give it a name. As you can see, I've appended an A. The 1000v can be configured in two ways (HA and Standalone). We'll be installing a second 1000v later to create an HA pair, so appending the A makes it easy to denote in vCenter which is which. This is also where you can place the virtual machine into a specific folder if you so choose. Once the name has been entered, click Next to continue.
For the rest of this post, I will refer to the first 1000v as "Node-A" and the second as "Node-B".
Since we're utilizing the GUI for this 1000v deployment, you'll want to ensure Nexus 1000v Installer is chosen on the Configuration screen. Click Next to continue.
Choose the datastore you'd like the 1000v to reside on, then click Next to continue.
On the next screen, we'll supply some basic configuration items (there will be more later, don't worry). Each field here is required, so ensure all items are filled out before clicking Next.
Confirm all information is correct and click Finish to begin the install.
Pretty easy, huh? Remember, since we'll be configuring the 1000v in HA mode, we'll need a second VM. Simply walk through the above process again, with the exception of a couple of steps.
I've appended a B on the second 1000v.
Because this will be the second half of the HA pair, we won't need to do much configuration on it (IP address, subnet, gateway, etc.), so just choose Manually Configure Nexus 1000v and click Next to continue.
The rest of the process is similar to the steps followed when deploying Node-A. Finish going through the wizard, and once complete, we'll continue.
Alright, so now what you should see in vCenter is two new virtual machines for the 1000v. Let's take a quick look at the properties of Node-A.
As you can see, the OVF template deployment has preset and built out the VM as required; nothing more should be needed here. Of course, we'll want to make sure the attached NICs are connected to a vSwitch port-group that allows connectivity to the IP address supplied when building Node-A.
Let's start Node-A and jump into the virtual machine console.
Once the virtual machine has completely booted, you should see a login prompt. What I would do at this point is simply ping that IP address from your local machine to ensure you have connectivity. If you do not, go back and make sure your vNIC-to-port-group mappings are correct. Once confirmed, let's open a browser and continue the configuration process.
Open your browser and point it to the IP address of Node-A. Click the link to launch the installer application; a Java-based configuration GUI should appear. In this wizard, we'll supply further configuration information and link the 1000v to vCenter.
Supply the password specified during the initial deployment and click Next to continue.
Here is where vCenter-specific information must be supplied. Enter the IP address and Administrator account information to connect to vCenter. Note that this account is only used for the initial link and is not required after the wizard is complete.
The 1000v has the capability to manage multiple clusters within a data center. If you want to have the capability to manage all ESX hosts in a given data center, you should choose the highest level. Click Next to continue.
Choose Node-A, since that is what we're configuring, and now there are some items to discuss.
If you remember when we looked at the 1000v virtual machine properties, we saw three NICs. Each NIC is utilized to carry traffic for a very specific function.
- Control: This NIC is used to communicate with the VEMs, which reside on each ESX host. VEMs should be thought of as line cards in a Catalyst switch: they are dumb and hold no configuration of their own.
- Management: Should be obvious, but this is where we connect to configure and manage the 1000v.
- Packet: This is used for specific protocols such as CDP, LACP, and IGMP.
For this example, I have chosen the Default install method. If this were production, my general recommendation would be to separate these functions, with one VLAN for Management and another dedicated to Control and Packet. Some will say Control and Packet should reside on separate VLANs; however, I have not seen any issues or problems keeping them on the same VLAN.
Here we're configuring some additional information. I've called out a few items that we'll want to pay attention to.
- Switch Name: Since we're configuring these two switches as an HA pair, I've taken the names we used when deploying the VMs and dropped the appended A and B.
- Domain ID: You can use basically anything you want here. Domain IDs denote a logical grouping of devices that the HA pair will manage. If you later deploy a new pair of 1000vs, a different domain ID would be used. Make note of the domain ID you use here, as we'll need it again later on.
- SVS Datacenter Name: Make sure the correct vCenter Datacenter name is displayed here.
Review all the settings and click Next to continue.
The configuration process should now commence, and if you're still consoled into Node-A, you'll see it reboot a couple of times as configurations are applied.
When the process completes, you should get this screen. I'm not going to let this process join any hosts to the 1000v, but you could do that here if you wanted. I've chosen No and clicked Finish to close out the wizard.
At this point, we now have one node of the HA pair deployed. If we were not installing an HA pair, configuration of uplinks and port-groups could begin so ESX hosts can be joined to the 1000v.
Let's take a look at vCenter and see what changed. Going to the Networking screen, we can see the new dVS created.
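You can also verify the link from the VSM side with show svs connections. The snippet below is a rough sketch of a healthy connection (output is trimmed, exact fields vary by version, and the IP and datacenter values are placeholders from this lab):
>WWTLab_1000v#show svs connections
connection vcenter:
    ip address: 10.0.0.50
    protocol: vmware-vim https
    datacenter name: WWTLab
    config status: Enabled
    operational status: Connected
The key lines are config status: Enabled and operational status: Connected; anything else means the vCenter link needs revisiting.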
We're almost done with the initial configuration; the next couple of things will go quickly. We now need to tell Node-A that it will be part of an HA pair, since the default setup creates a standalone switch. Let's console into Node-A and set it up.
Note: It's important that Node-B remain powered off during this process. If you find Node-A rebooting without warning, check to ensure Node-B is powered off.
First let�s look at the redundancy status. This will show you what mode (Standalone or HA) the switch is running in. In the console issue the following command:
>WWTLab_1000v#show system redundancy status
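The output should look something like the snippet below (trimmed; exact formatting varies by software version):
Redundancy role
---------------
      administrative:   standalone
         operational:   standalone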
We can see the current redundancy role is standalone. Let's change that to HA by issuing the following commands. We'll save the config and then reload the switch to make the changes take effect.
>WWTLab_1000v#system redundancy role primary
>WWTLab_1000v#copy run start
>WWTLab_1000v#reload
After the switch has rebooted, log back in and check the redundancy status again.
>WWTLab_1000v#show system redundancy status
We should now see the redundancy role as Primary. Great, now we have the first node in the HA pair. We can't have an HA pair without a secondary node, so let's configure Node-B real quick. Remember, when we initially deployed Node-B, we didn't do any IP configuration or anything else, right? Let's start Node-B and jump into the console to configure it.
Enter the same password you used for Node-A here.
This will be the secondary node, so type secondary.
Confirm that you want to change the role by typing yes.
Remember that Domain ID thing? Type in the same Domain ID as you used when configuring Node-A.
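Putting those steps together, the Node-B console dialog looks roughly like this. Treat it as a sketch: the prompt wording varies by version, and the domain ID of 100 is just an example value, so use whatever you configured on Node-A.
Enter the password for "admin":
Enter HA role[standalone/primary/secondary]: secondary
Setting HA role to secondary will cause a system reboot. Are you sure (yes/no) ? : yes
Enter the domain id<1-4095>: 100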
The switch will now go through an automated configuration process. It is during this process where the cluster or HA pair is formed. Node-B should reboot automatically to apply settings.
Let's log in to Node-B and take a look at the redundancy status. First, you'll notice the (standby) trailing the switch name; this tells you right away that the HA pair was formed. We can also see this by running the following command.
>WWTLab_1000v(standby)#show system redundancy status
Let's jump back on Node-A and look at the redundancy status. Notice it now shows the other supervisor (sup-2) as being in Standby.
>WWTLab_1000v#show system redundancy status
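For reference, a healthy HA pair viewed from the active node looks roughly like the following (again trimmed, and formatting varies by version):
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

This supervisor (sup-1)
-----------------------
    Supervisor state:   Active
      Internal state:   Active with HA standby

Other supervisor (sup-2)
------------------------
    Supervisor state:   HA standby
      Internal state:   HA standby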
Basic configuration of the 1000v HA pair is now complete, including the link to vCenter. Quick and painless, huh?!

Uplink Configurations…
For the next phase of the install, we're going to create one uplink profile to carry traffic outside of the 1000v: VM traffic, IP storage, and so on. Note that multiple uplinks can be specified to segregate traffic, as is typically done in a production environment; in this example, I'm just creating one that will carry all traffic. We'll want to create this uplink prior to joining ESX hosts to the 1000v, as we'll see later that it eases configuration.
Let's start by SSHing to the 1000v. Since we have an HA pair, get in the habit of simply SSHing to the specified IP address; you'll automatically be connected to the active node. As configuration changes occur, they are automatically replicated to the secondary node.
To create the uplink port-profile, we'll use the following.
>WWTLab_1000v#config t
>WWTLab_1000v(config)#port-profile type ethernet Uplink
>WWTLab_1000v(config-port-prof)# vmware port-group
>WWTLab_1000v(config-port-prof)# switchport mode trunk
>WWTLab_1000v(config-port-prof)# switchport trunk allowed vlan 70
>WWTLab_1000v(config-port-prof)# channel-group auto mode on mac-pinning
>WWTLab_1000v(config-port-prof)# system vlan 70
>WWTLab_1000v(config-port-prof)# no shut
>WWTLab_1000v(config-port-prof)# state enabled
>WWTLab_1000v(config-port-prof)# copy run start
A couple of things to call out in the above:
- The type ethernet denotes that we will be attaching physical NICs to this profile, which will thus carry traffic north of the 1000v.
- The channel-group auto mode on mac-pinning command should be used whenever multiple NICs are attached to the uplink. There are posts out there going into detail on what it specifically does, but in short, it pins a VM to one leg of the uplink so that no loop is created. Consider it a must-have on all uplinks where multiple NICs will be attached.
- The system vlan command serves an interesting purpose. Again, there are lots of posts out there on exactly what it should be used for, but simply put: if the 1000v will be the only switch used in the virtual environment (i.e. no standard virtual switches), and/or traffic such as ESX management and IP storage will be carried through it, system VLANs should be specified for the VLANs carrying the following traffic (see the sketch after this list):
- ESX Management
- IP based storage
- Control and Packet
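For example, if those three traffic types lived on VLANs 10, 20, and 70 (hypothetical numbers for this sketch), the uplink profile would carry and protect them like so:
>WWTLab_1000v(config)#port-profile type ethernet Uplink
>WWTLab_1000v(config-port-prof)# switchport trunk allowed vlan 10,20,70
>WWTLab_1000v(config-port-prof)# system vlan 10,20,70
Any vEthernet profile carrying that same traffic (the ESX management vmkernel, for instance) would need a matching system vlan statement as well, since system VLANs only get their protected treatment where they are defined on both the ethernet and vethernet profiles.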
Back in vCenter, let's go to the Networking configuration section.
Click on the Summary tab and click on Add Host.
As you can see, I've chosen an ESX host and a corresponding vmnic. Make sure to choose the corresponding DVUplink port group; you should see "Uplink" from when we created it earlier.
Click Next to continue.
Click Finish to join the host.
When complete, and if successful, you should now see the host under the Hosts tab. Continue the above process until all needed ESX hosts have joined successfully.
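A quick sanity check from the VSM side: show module should list both supervisors plus one Virtual Ethernet Module per joined host. The output below is a rough sketch from this lab, so module numbers and port counts will differ in your environment:
>WWTLab_1000v#show module
Mod  Ports  Module-Type                      Model           Status
---  -----  -------------------------------  --------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V      active *
2    0      Virtual Supervisor Module        Nexus1000V      ha-standby
3    248    Virtual Ethernet Module          NA              ok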
Conclusion…
If you're still with me at this point, congratulations. Cisco's Nexus 1000v has been deployed as an HA pair, an uplink to carry traffic has been specified, and we've joined an ESX host to the 1000v so we can start utilizing it.
With the addition of the GUI wizard and OVF template import, deployment of the 1000v has been immensely simplified. Gone are the days of manually configuring VMs and installing the 1000v from an ISO. Also simplified is the connection process between vCenter and the 1000v. All great enhancements to an already great product!
What's next? Well, creating additional port-profiles to attach virtual machines or vmkernels would be a good place to head. Once that is complete, you can begin migrating your current virtual machines over to the 1000v, along with vmkernels if you so choose.
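As a preview, a basic vEthernet port-profile looks a lot like the uplink we built, minus the trunking. The profile name VM-Data and VLAN 70 below are just placeholders for this sketch; substitute your own:
>WWTLab_1000v#config t
>WWTLab_1000v(config)#port-profile type vethernet VM-Data
>WWTLab_1000v(config-port-prof)# vmware port-group
>WWTLab_1000v(config-port-prof)# switchport mode access
>WWTLab_1000v(config-port-prof)# switchport access vlan 70
>WWTLab_1000v(config-port-prof)# no shut
>WWTLab_1000v(config-port-prof)# state enabled
Once state enabled is issued, the port-group appears in vCenter and can be assigned to VM NICs just like any other port-group.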