Friday 30 December 2016

My practice list

Practice list:

1. deploy ESX
2. deploy Windows 2012 server on ESX with AD DS and DNS
3. integrate DNS server with ESX server  sunil@vmware.lab.com
4. deploy Exchange DAG cluster node
5. deploy Linux clustering
6. deploy MySQL and MongoDB clustering
7. deploy SKY with others
8. deploy Veeam backup with a Windows or Linux server
9. deploy EMC NetWorker with Windows or Linux
10. check how the nodes behave if one of the nodes goes down
11. integrate a Linux DNS server with the ESX server
12. deploy Report Manager with Sky
13. deploy AGM
14. how to configure and use iLO
15. configure DHCP from Windows and Linux servers
16. how to use FreeNAS
17. install and configure iSCSI using FreeNAS as a storage LUN
18. nmap commands
19. tcpdump commands
20. ethtool commands
21. VMware deployment tool, including scripted deployment
22. accessing a Windows machine over IMM or iLO
23. deploying a Zmanda server
24. using VMware from the CLI, i.e. vmshell
25. Windows 2012 roles and how to configure and use them
26. installing XenServer
27. installing Hyper-V server
28. installing OVM
29. installing RHEV-M
30. how to download a supported-hardware VMware image file
31. how to use Windows Server as a network router {https://redmondmag.com/articles/2015/04/23/windows-server-as-a-network-router.aspx}
32. installing ESX 6 with vCenter 6
33. creating switches and bonding in ESXi networking
34. creating a VM with an RDM
35. understanding Openfiler completely
36. usage of netcat
37. P2V and V2V
38. creating standard and distributed switches
39. VMware View (Horizon), VMware ThinApp and VMware vCloud
40. DRS and HA
41. DRS and HA with FT
42. FT on a VM
43. clustering without DRS and HA
44.

Wants:
AIX & Solaris
clustering and load balancing in Linux, Windows and VMware
shell scripting
Python
CIFS and NFS protocols
ISM certification
VMware certification
fabric administration
cloud
CEH
OSCP
CISSP




Read documents:
1. ISM document
2. VMware 5.5 and 6.0 documents
3. Linux administration documents
4. Actifio CLI reference guide
5.


Look for videos on:
1. installing and configuring an FC switch
2. FC connections between servers and storage
3. how to install storage
4. ethical hacking
5. penetration testing
6. OSCP
7. Python scripting
8. Windows scripting
9. Linux scripting



Refer to these concepts:
1. AD DS
2. DNS
3. IIS
4. WDS
5. NPAS
6. Application Server
7. Linux server concepts: DHCP, DNS, web server, CIFS, NFS, Squid, clustering
8. Linux LDAP
9.

Thursday 22 December 2016

Difference between mtime, ctime and atime



A common misconception is that ctime is the file creation time. It is not; it is the inode/file change time, while mtime is the file modification time. Since the question "What are ctime, mtime and atime?" comes up often, here is the difference between the three.

ctime

ctime is the inode or file change time. The ctime gets updated when the file attributes are changed, such as changing the owner, changing the permissions or moving the file to another filesystem, but it is also updated when you modify the file's contents.

mtime

mtime is the file modification time. It gets updated whenever you change the content of a file or save it.

Most of the time ctime and mtime will be the same, unless only the file attributes are updated; in that case only the ctime changes.

atime

atime is the file access time. It gets updated when you open a file, but also when the file is read by other operations such as grep, sort, cat, head or tail.
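
A quick way to compare the three timestamps for a file on Linux (the path here is just an example) is stat:

stat /etc/hosts
stat --printf 'atime: %x\nmtime: %y\nctime: %z\n' /etc/hosts

The first form prints the Access, Modify and Change times together with the other metadata; the second prints only those three fields.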

In the find descriptions, wherever n is used as a primary argument, it is interpreted as a decimal integer optionally preceded by a plus ('+') or minus ('-') sign, as follows:
+n More than n.
  n Exactly n.
-n Less than n.

At the given time (2014-09-01 00:53:44 -4:00, where I'm deducing that AST is Atlantic Standard Time, and therefore the time zone offset from UTC is -4:00 in ISO 8601 but +4:00 in ISO 9945 (POSIX), but it doesn't matter all that much):
1409547224 = 2014-09-01 00:53:44 -04:00
1409457540 = 2014-08-30 23:59:00 -04:00
so:
1409547224 - 1409457540 = 89684
89684 / 86400 = 1
Even if the 'seconds since the epoch' values are wrong, the relative values are correct (for some time zone somewhere in the world, they are correct).

The n value calculated for the 2014-08-30 log file therefore is exactly 1 (the calculation is done with integer arithmetic), and the +1 rejects it because it is strictly a > 1 comparison (and not >= 1).

"-mtime -2" means files that are less than 2 days old, such as a file that is 0 or 1 days old.

"-mtime +2" means files that are more than 2 days old... {3, 4, 5, ...}


The argument to -mtime is interpreted as the number of whole days in the age of the file. -mtime +n means strictly greater than, -mtime -n means strictly less than.
Note that with Bash, you can do the more intuitive:
$ find . -mmin +$((60*24))
$ find . -mmin -$((60*24))
to find files older and newer than 24 hours, respectively.
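
To see the whole-day rounding in action on a GNU/Linux box (the path and the relative timestamp are just for illustration), create a test file with an mtime 36 hours in the past and check which expressions match it:

touch -d '36 hours ago' /tmp/testfile
find /tmp -maxdepth 1 -name testfile -mtime 1     # matches: the age in whole days is exactly 1
find /tmp -maxdepth 1 -name testfile -mtime +1    # no match: +1 means strictly more than 1 whole day
find /tmp -maxdepth 1 -name testfile -mtime -2    # matches: less than 2 whole days old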

Thursday 15 December 2016

What is vStorage APIs for Array Integration (VAAI), and how is it useful for thin-provisioning VMs and reclaiming space with the UNMAP option in VMware?

vStorage API for Array Integration (VAAI) is an application program interface (API) framework from VMware that enables certain storage tasks, such as thin provisioning, to be offloaded from the VMware server virtualization hardware to the storage array.


Offloading these tasks lessens the processing workload on the virtual server hardware. For a storage administrator to make use of VAAI, the manufacturer of his storage system must have built support for VAAI into the storage system.

Introduced in vSphere 4 with support for block-based (Fibre Channel or iSCSI) storage systems, VAAI consisted of a number of primitives, or parts.

 “Copy offload” enables the storage system to make full copies of data within the array, offloading that chore from the ESX server. “Write same offload” enables the storage system to zero out a large number of data blocks to speed the provisioning of virtual machines (VMs) and reduce I/O.

Hardware-assisted locking allows vCenter to offload SCSI commands from the ESX server to the storage system so the array can control the locking mechanism while the system does data updates.
In vSphere 5, vStorage APIs for Array Integration were enhanced. The most notable new functionality addresses thin provisioning of storage systems and expands support to network-attached storage (NAS) devices.

VAAI’s thin provisioning enhancements allow storage arrays that use thin provisioning to reclaim blocks of space when a virtual disk is deleted, to mitigate the risk of a thinly provisioned storage array running out of space. Thin-provisioned storage systems that support vSphere 5’s VAAI are given advance warnings when space thresholds are reached.

In addition, in that version, VAAI enables mechanisms to temporarily pause virtual machines when space runs out, giving admins time to add storage or migrate the virtual machine to a different array.
VAAI’s “hardware acceleration of NAS” component has two primitives, according to VMware.

Full file clone enables the NAS device to clone virtual disks to speed the process of creating virtual machines on NAS systems. “Reserve space” enables the creation of a thick virtual disk on a NAS device.

Qualified storage array vendors can partner with VMware to develop the firmware and plug-ins required by their arrays.
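
On the ESXi side you can check, per device, whether the host sees the VAAI primitives as supported; the command below also appears in the command reference later in these notes, and the naa ID is a placeholder:

esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

The output lists the status of the ATS (hardware-assisted locking), Clone (copy offload), Zero (write same) and Delete (UNMAP) primitives for that device.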

IBM SVC support with Unmap lun option for thin provisioning in VMware

IBM SVC definitely does NOT support SCSI UNMAP, and it would not pass UNMAP commands through to the back-end disks if it received them.
If the SVC layer sees an UNMAP command, it simply ignores it.
IBM SVC will do zero-detect: if a write contains nothing but zeros and the target disk is space efficient, the write will not be destined to disk. Instead the B-tree will be updated to say those grains are now free, but the grains are not released, meaning no space is reclaimed. The only way to reclaim space is to do a VDisk copy (which is far too much work).

In-band LUNs are image mode, so if VMware did write zeros, they would be written to disk by the SVC layer (rather than being dropped).
I don't know how 3PAR zero-detect works, but maybe it releases space immediately (rather than just doing a B-tree update).

If so, that would have worked OOB as well as IB.

Using the esxcli storage vmfs unmap command to reclaim VMFS deleted blocks on thin-provisioned LUNs (2057513)

This article provides steps to reclaim unused storage blocks on a VMFS datastore for a thin-provisioned device using the esxcli storage vmfs unmap command.
 
vSphere 5.5 introduced a new command in the esxcli namespace that allows deleted blocks to be reclaimed on thin provisioned LUNs that support the VAAI UNMAP primitive.

The command can be run without any maintenance window, and the reclaim mechanism has been enhanced as follows:
  • Reclaim size can be specified in blocks instead of a percentage value to make it more intuitive to calculate.
  • Dead space is reclaimed in increments instead of all at once to avoid possible performance issues.
With the introduction of 62 TB VMDKs, UNMAP can now handle much larger dead space areas. However, UNMAP operations are still manual. This means Storage vMotion or Snapshot Consolidation tasks on VMFS do not automatically reclaim space on the array LUN.

Note: The vmkfstools -y command is deprecated in ESXi 5.5. For more information on reclaiming space in vSphere 5.0 and 5.1, see Using vmkfstools to reclaim VMFS deleted blocks on thin-provisioned LUNs (2014849).

Resolution

To reclaim unused storage blocks on a VMFS datastore for a thin-provisioned device, run this command:

esxcli storage vmfs unmap --volume-label=volume_label|--volume-uuid=volume_uuid --reclaim-unit=number
The command takes these options:
  • -l|--volume-label=volume_label

    The label of the VMFS volume to UNMAP. This is a mandatory argument. If you specify this argument, do not use -u|--volume-uuid=volume_uuid.

  • -u|--volume-uuid=volume_uuid

    The UUID of the VMFS volume to UNMAP. This is a mandatory argument. If you specify this argument, do not use -l|--volume-label=volume_label.

  • -n|--reclaim-unit=number

    The number of VMFS blocks to UNMAP per iteration. This is an optional argument. If it is not specified, the command uses a default value of 200.
For example, for a VMFS volume named MyDatastore with UUID of 509a9f1f-4ffb6678-f1db-001ec9ab780e, run this command:

esxcli storage vmfs unmap -l MyDatastore

or

esxcli storage vmfs unmap -u 509a9f1f-4ffb6678-f1db-001ec9ab780e
 
Notes:
  • The default value of 200 for the -n number or --reclaim-unit=number argument is appropriate in most environments, but some array vendors may suggest a larger or smaller value depending on how the array handles the SCSI UNMAP command.

  • Similar to the previous vmkfstools -y method, the esxcli storage vmfs unmap command creates temporary hidden files at the top level of the datastore, but with names using the .asyncUnmapFile pattern. By default, the space reservation for the temporary files depends on the block size of the underlying VMFS file system (the default is --reclaim-unit=200):

    • 200 MB for 1 MB block VMFS3 / VMFS5
    • 800 MB for 4 MB block VMFS3
    • 1600 MB for 8 MB block VMFS3

  • Depending on the use case, an administrator can select a different --reclaim-unit value, for example if the reserved size is considered to be too large or if there is a danger that the UNMAP primitive may not be completed in a timely manner when offloaded to an array. VMware recommends that vSphere administrators consult with their storage array providers on the best value or best practices when manually defining a --reclaim-unit value.

  • If the UNMAP operation is interrupted, a temporary file may be left on the root of a VMFS datastore. However, when you run the command against the datastore again, the file is deleted when the command completes successfully. The .asyncUnmapFile will never grow beyond the --reclaim-unit size.

  • The UNMAP operation may finish without doing anything, or fail, if the volume partition table and/or block alignment is incorrect, for example after upgrading a VMFS3 file system or repartitioning the volume with a third-party tool. For more information, see Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work (2048466).

  • If the UNMAP operation fails with an error about locked files or a resource being busy, see the related VMware KB articles on VMFS file locking.
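
For example, to run the reclaim with a smaller unit than the default of 200 blocks (the datastore name and the value 100 are just for illustration):

esxcli storage vmfs unmap -l MyDatastore -n 100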

Understanding Storage Device Naming


Each storage device, or LUN, is identified by several names.
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an identifier for each storage device.
SCSI INQUIRY identifiers.
The host uses the SCSI INQUIRY command to query a storage device and uses the resulting data, in particular the Page 83 information, to generate a unique identifier. Device identifiers obtained this way are unique across all hosts, persistent, and have one of the following formats:
naa.number
t10.number
eui.number
These formats follow the T10 committee standards. See the SCSI-3 documentation on the T10 committee Web site.
Path-based identifier.
When the device does not provide the Page 83 information, the host generates an mpx.path name, where path represents the path to the device, for example, mpx.vmhba1:C0:T1:L3. This identifier can be used in the same way as the SCSI INQUIRY identifiers.
The mpx. identifier is created for local devices on the assumption that their path names are unique. However, this identifier is neither unique nor persistent and could change after every boot.
In addition to the SCSI INQUIRY or mpx. identifiers, for each device, ESXi generates an alternative legacy name. The identifier has the following format:
vml.number
The legacy identifier includes a series of digits that are unique to the device and can be derived in part from the Page 83 information, if it is available. For nonlocal devices that do not support Page 83 information, the vml. name is used as the only available unique identifier.
You can use the esxcli --server=server_name storage core device list command to display all device names in the vSphere CLI. The output is similar to the following example:
# esxcli --server=server_name storage core device list
naa.number
 Display Name: DGC Fibre Channel Disk(naa.number)
 ... 
 Other UIDs:vml.number
In the vSphere Client, you can see the device identifier and a runtime name. The runtime name is generated by the host and represents the name of the first path to the device. It is not a reliable identifier for the device, and is not persistent.
Typically, the path to the device has the following format:
vmhbaAdapter:CChannel:TTarget:LLUN
vmhbaAdapter is the name of the storage adapter. The name refers to the physical adapter on the host, not to the SCSI controller used by the virtual machines.
CChannel is the storage channel number.
Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target.
TTarget is the target number. Target numbering is determined by the host and might change if the mappings of targets visible to the host change. Targets that are shared by different hosts might not have the same target number.
LLUN is the LUN number that shows the position of the LUN within the target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage adapter vmhba1 and channel 0.
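
To see the persistent device identifier together with the runtime names of its paths, you can use the path list command that also appears in the command reference later in these notes; the naa ID below is a placeholder:

esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

Each path entry shows a Runtime Name in the vmhbaAdapter:CChannel:TTarget:LLUN format alongside the persistent device name.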

Determining if an array supports automated unmap in vSphere 6.5


Many of you will be aware of the new core storage features that were introduced in vSphere 6.5. If not, you can learn about them in this recently published white paper. Without doubt, the feature that has created the most interest is automated unmap (finally, I hear you say!). Now a few readers have asked about the following comment in the automated unmap section.
 
Automatic UNMAP is not supported on arrays with UNMAP granularity greater than 1MB. Auto UNMAP feature support is footnoted in the VMware Hardware Compatibility Guide (HCL).

So where do you find this info in the HCL? I’ll show you here.
First, navigate to the VMware HCL – Storage/SAN section.

Next select ESXi 6.5, the vendor of the array, and in the features section, Thin Provisioning. Finally click on “Update and View Results”. This will provide a list of supported arrays and models. In this example, I selected DELL as a vendor. It produced a range of Compellent and EQL arrays. I chose Compellent Storage Center with Array Type FC, as shown below:

Now, if I expand the 6.5 supported release, I can see the tested configurations, including the supported driver, firmware, multipathing and fail-over policies associated with this array config.
If I expand any of those entries, I will see the footnote entries. In this example, let me select the FW version 7.1, with 16GB FC Switch and lpfc HBA.
And there you can see the statement in the footnotes that this configuration supports automatic storage space reclamation on VMFS6.
Let’s compare this to an iSCSI configuration using the same arrays. Here is the list of iSCSI Compellent configurations, once more detailing the supported driver, firmware, multipathing and fail-over policies associated with this array config.
Let’s select the 7.1 firmware version as before, and examine the footnotes:
As you can clearly see, there is no mention of automatic unmap/space reclamation support in the footnotes in this example; it is therefore unsupported.
Now this will continue to change, so check back regularly to see whether or not your array vendor has passed the certification tests required to support automated unmap. Alternatively, speak to your array vendor to find out where they are on the certification path.
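
Once you are on a supported configuration, the automatic space-reclamation settings of a VMFS6 datastore can be checked and changed from the host. I believe the relevant esxcli namespace in 6.5 is storage vmfs reclaim config, but treat the exact syntax below as an assumption to verify against your build (the datastore name is a placeholder):

esxcli storage vmfs reclaim config get -l MyVMFS6Datastore
esxcli storage vmfs reclaim config set -l MyVMFS6Datastore --reclaim-priority=low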

Cannot open Virtual Machine Console

  • When you try to connect to a virtual machine console from VirtualCenter, you see one or more of these errors:
    • Error connecting: Host address lookup for server <SERVER> failed: The requested name is valid and was found in the database, but it does not have the correct associated data being resolved for Do you want to try again?
    • Error connecting: cannot connect to host <host>: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. Do you want to try again?
    • Error connecting: You need execute access in order to connect with the VMware console. Access denied for config file.
    • Unable to connect to MKS: failed to connect to server IP:903. For more information, see ESX 4.0 hosts lose network connectivity when multiple service console interfaces are configured on subnets that use DHCP IP addresses (1010828).
  • You cannot open a remote console to a virtual machine.
  • Virtual machine console is black (blank).
  • The VMware Infrastructure (VI) Client console tab session may time out or disconnect while in use.
  • Migration of virtual machines using vMotion failed.
Solution
If your network is configured such that a firewall exists between the ESX host and the workstation running the VI Client, you might not be able to open a virtual machine console. To connect to a virtual machine console from the VI Client, port 903 needs to be open in any firewall between the workstation running the VI Client and the ESX host. This applies even if the VI Client is connected to VirtualCenter and not directly to the ESX host.
Note: Before performing the steps in this article, please refer to Restarting the Management agents on an ESX Server (1003490) for important information on restarting the Management agents.
To troubleshoot this issue:
1. Log in to the VirtualCenter Server directly through Terminal Services or a Remote KVM and attempt a connection from VI Client from this system. If this method works, the firewall is likely preventing the console from working. Configure your firewall to allow communications on port 903 between the ESX host and the workstation running VI Client.
If port 903 is not open or cannot be opened in your environment, enable the vmauthd proxy. This forces remote console communication to be sent on port 902 on the Service Console, instead of 903.
Note: Enabling this setting may degrade performance when communicating with the ESX host service console if remote consoles are heavily utilized.
To enable the proxy:
a. Log in to the ESX host’s service console as root.
b. Open /etc/vmware/config with a text editor.
c.  Add the following line:
vmauthd.server.alwaysProxy = "TRUE"
d. Issue the following command to restart xinetd:
service xinetd restart
2. Verify the ESX firewall policy.  For more information, see Troubleshooting the firewall policy on an ESX Server (1003634).
3. Verify that the ESX host and the workstation running VI Client are correctly synced to an NTP service. This is required to satisfy SSL handshaking between VI Client and ESX. For more information, see Verifying time synchronization across environment (1003736).
4. DNS problems are a common cause of virtual machine console problems. Verify name resolution in your environment. For more information, see:
 After verifying DNS, open a command prompt on the VI Client machine and perform the following:
ipconfig /flushdns
ipconfig /registerdns 
           
 Verify that the /var partition is not full.
 Verify that the permissions for the virtual machine’s .vmx file are set correctly:
              chmod 755 </full/path/to/virtual machine.vmx>
 If your ESX host has more than one service console configured, verify that they are not on the same network.
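
A quick way to confirm whether a firewall is blocking the console port is to test TCP 903 from the workstation running the VI Client; the hostname below is just an example:

telnet esxhost.example.com 903

If the connection is refused or times out, a firewall or routing problem between the client and the ESX host is the likely cause.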

Logs location



/var/log/vmware/vpx — To check the logs for the vpxa agent

/var/log/vmware — To check the logs for hostd

/var/log — To check the vmkernel, vmkwarning, vmksummary and messages logs
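
On classic ESX these are flat files (for example /var/log/vmkernel); on ESXi 5.x and later the same logs carry a .log suffix (for example /var/log/vmkernel.log). A typical way to watch one live:

tail -f /var/log/vmkernel.log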

Host related issue on vmware


To check if the VM is registered to the host
# vmware-cmd -l
If the VM is registered, you will see output similar to:
#/vmfs/volumes/storage/vmfolder/vm.vmx
To check the hostd service status or restart it:
#service mgmt-vmware status
#service mgmt-vmware restart
If the hostd service is not responding:
Check for the hostd process ID and kill the process
#ps -auxwww | grep -i hostd
#kill -9 <process id>
Also remove the stale PID file:
#rm /var/run/vmware/vmware-hostd.PID
Start the hostd service
#service mgmt-vmware start

Identifying disks when working with VMware ESX


Purpose
When performing troubleshooting with ESX storage, you may use command line tools which require you to identify a specific disk or LUN connected to ESX. This article explores different ways to identify these disks.
Resolution
ESX 3.X
Use these commands to collect disk and LUN information from within ESX.
  • The command esxcfg-mpath -l generates a compact list of the LUNs currently connected to the ESX host. The output appears similar to:
Disk vmhba32:0:0 /vmfs/devices/disks/vml.020000000060060160c0521501065cacf13f9fdd11524149442035 (512000MB) has 2 paths and policy of Most Recently Used
 iScsi sw iqn.1998-01.com.vmware:esxhost-41e85afe<->iqn.1992-04.com.iscsi:a0 vmhba32:0:0 Standby preferred
 iScsi sw iqn.1998-01.com.vmware:esxhost-41e85afe<->iqn.1992-04.com.iscsi:b0 vmhba32:1:0 On active
  • The command esxcfg-vmhbadevs -m generates a compact list of the LUNs currently connected to the ESX host. The output appears similar to:
vmhba1:0:0:3 /dev/sda3 48f85575-5ec4c587-b856-001a6465c102
vmhba2:0:4:1 /dev/sdc1 48fbd8e5-c04f6d90-1edb-001cc46b7a18
vmhba2:0:3:1 /dev/sdb1 48fbd8be-b9638a60-aa72-001cc46b7a18
vmhba32:0:1:1 /dev/sde1 48fe2807-7172dad8-f88b-0013725ddc92
vmhba32:0:0:1 /dev/sdd1 48fe2a3d-52c8d458-e60e-001cc46b7a18
  • The command ls -alh /vmfs/devices/disks lists the possible targets for certain storage operations. The output appears similar to:
lrwxrwxrwx 1 root root 58 Oct 16 12:54 vmhba2:0:3:0 ->vml.0200030000600805f300124a90ca40a0bcd05c00294d5341313030
lrwxrwxrwx 1 root root 60 Oct 16 12:54 vmhba2:0:3:1 ->vml.0200030000600805f300124a90ca40a0bcd05c00294d5341313030:1
lrwxrwxrwx 1 root root 58 Oct 16 12:54 vmhba2:0:4:0 ->vml.0200040000600805f300124a9006d5bbdeb08b002a4d5341313030
lrwxrwxrwx 1 root root 60 Oct 16 12:54 vmhba2:0:4:1 ->vml.0200040000600805f300124a9006d5bbdeb08b002a4d5341313030:1
lrwxrwxrwx 1 root root 58 Oct 16 12:54 vmhba2:1:3:0 ->vml.0200030000600805f300124a90ca40a0bcd05c00294d5341313030
lrwxrwxrwx 1 root root 60 Oct 16 12:54 vmhba2:1:3:1 ->vml.0200030000600805f300124a90ca40a0bcd05c00294d5341313030:1
lrwxrwxrwx 1 root root 58 Oct 16 12:54 vmhba2:1:4:0 ->vml.0200040000600805f300124a9006d5bbdeb08b002a4d5341313030
lrwxrwxrwx 1 root root 60 Oct 16 12:54 vmhba2:1:4:1 ->vml.0200040000600805f300124a9006d5bbdeb08b002a4d5341313030:1
The following are definitions for some of the identifiers and their conventions:
  • vmhba<Adapter>:<Target>:<LUN> - This identifier can be used to identify either a LUN or a path to the LUN. When ESX detects the paths associated with a LUN, each path is assigned this identifier. The entire LUN then inherits the same name as the first path. When this identifier is used for an entire LUN, it is called the canonical name; when it is used for a path, it is called the path name. These naming conventions may vary from ESX host to ESX host, and may change if storage hardware is replaced. This identifier is generally used for operations with utilities such as vmkfstools.

    Example: vmhba1:0:0 = Adapter 1, Target 0, and LUN 0.
  • vmhba<Adapter>:<Target>:<LUN>:<Partition> - This identifier is used in the context of a canonical name to identify a partition on the LUN or disk. In addition to the canonical name, a :<Partition> is appended to the end of the identifier. The <Partition> represents the partition number on the LUN or disk. If the <Partition> is specified as 0, it identifies the entire disk instead of only one partition. These naming conventions may vary from ESX host to ESX host, and may change if storage hardware is replaced. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:0:0:3 = Adapter 1, Target 0, LUN 0, and Partition 3.
  • vml.<VML> or vml.<VML>:<Partition> - The VML identifier can be used interchangeably with the canonical name. Appending the :<Partition> works in the same way described above. This identifier is generally used for operations with utilities such as vmkfstools.
  • /dev/sd<Device Letter> or /dev/sd<Device Letter><Partition> - This naming convention is not VMware specific. It is used exclusively by the service console and the open source utilities that come with the service console. The <Device Letter> represents the LUN or disk and is assigned by the service console during boot. The optional <Partition> represents the partition on the LUN or disk. These naming conventions may vary from ESX host to ESX host, and may change if storage hardware is replaced. This identifier is generally used for operations with utilities such as fdisk and dd.
Note: VMware ESXi does not have a service console; disks are referred to by the VML identifier.
  • <UUID> - The <UUID> is a unique number assigned to a VMFS volume when the volume is created. It may be included in syntax where you need to specify the full path of specific files on a datastore.
ESX 4.X
Use these commands to collect disk and LUN information from within ESX:
  • The command esxcfg-mpath -b generates a compact list of LUNs currently connected to the ESX host. The output appears similar to:
naa.6090a038f0cd4e5bdaa8248e6856d4fe : EQLOGIC iSCSI Disk (naa.6090a038f0cd4e5bdaa8248e6856d4fe)
vmhba33:C0:T1:L0 LUN:0 state:active iscsi Adapter: iqn.1998-01.com.vmware:bs-tse-i137-35c1bf18 Target: IQN=iqn.2001-05.com.equallogic:0-8a0906-5b4ecdf03-fed456688e24a8da-bs-tse-vc40-250g Alias= Session=00023d000001 PortalTag=1
  • The command esxcfg-scsidevs -l generates a list of LUNs currently connected to the ESX host. The output appears similar to:
mpx.vmhba0:C0:T0:L0
Device Type: Direct-Access
Size: 139890 MB
Display Name: Local ServeRA Disk (mpx.vmhba0:C0:T0:L0)
Plugin: NMP
Console Device: /dev/sdb
Devfs Path: /vmfs/devices/disks/mpx.vmhba0:C0:T0:L0
Vendor: ServeRA Model: 8k-l Mirror Revis: V1.0
SCSI Level: 2 Is Pseudo: false Status: on
Is RDM Capable: false Is Removable: false
Is Local: true
Other Names:
vml.0000000000766d686261303a303a30
  • The command ls -alh /vmfs/devices/disks lists the possible targets for certain storage operations. The output appears similar to:
lrwxrwxrwx 1 root root 19 Oct 16 13:00 vml.0000000000766d686261303a303a30 -> mpx.vmhba0:C0:T0:L0
lrwxrwxrwx 1 root root 21 Oct 16 13:00 vml.0000000000766d686261303a303a30:1 -> mpx.vmhba0:C0:T0:L0:1
lrwxrwxrwx 1 root root 21 Oct 16 13:00 vml.0000000000766d686261303a303a30:2 -> mpx.vmhba0:C0:T0:L0:2
lrwxrwxrwx 1 root root 21 Oct 16 13:00 vml.0000000000766d686261303a303a30:3 -> mpx.vmhba0:C0:T0:L0:3
lrwxrwxrwx 1 root root 21 Oct 16 13:00 vml.0000000000766d686261303a303a30:5 -> mpx.vmhba0:C0:T0:L0:5
lrwxrwxrwx 1 root root 36 Oct 16 13:00 vml.020000000060060160b4111600624c5b749c7edd11524149442035 -> naa.60060160b4111600624c5b749c7edd11
lrwxrwxrwx 1 root root 38 Oct 16 13:00 vml.020000000060060160b4111600624c5b749c7edd11524149442035:1 -> naa.60060160b4111600624c5b749c7edd11:1
The following are definitions for some of the identifiers and their conventions:
  • naa.<NAA> - NAA stands for Network Addressing Authority identifier. The number is guaranteed to be unique to that LUN. The NAA identifier is the preferred method of identifying LUNs, and the number is generated by the storage device. Since the NAA is unique to the LUN, if the LUN is presented the same way across all ESX hosts, the NAA identifier remains the same.
  • naa.<NAA>:<Partition> - The <Partition> represents the partition number on the LUN or disk. If the <Partition> is specified as 0, it identifies the entire disk instead of only one partition. This identifier is generally used for operations with utilities such as vmkfstools.
Example: naa.6090a038f0cd4e5bdaa8248e6856d4fe:3 = Partition 3 of LUN naa.6090a038f0cd4e5bdaa8248e6856d4fe.
  • mpx.vmhba<Adapter>:C<Channel>:T<Target>:L<LUN> or mpx.vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>:<Partition> - Some devices do not provide the NAA number described above. In these circumstances, an MPX identifier is generated by ESX to represent the LUN or disk. The identifier takes a form similar to the canonical name of previous versions of ESX, with the mpx. prefix. This identifier can be used in exactly the same way as the NAA identifier described above.
  • vml.<VML> or vml.<VML>:<Partition> - The VML identifier can be used interchangeably with the NAA identifier and the MPX identifier. Appending :<Partition> works in the same way described above. This identifier is generally used for operations with utilities such as vmkfstools.
  • vmhba<Adapter>:C<Channel>:T<Target>:L<LUN> - This identifier is now used exclusively to identify a path to the LUN. When ESX detects the paths associated with a LUN, each path is assigned this path identifier. The LUN also inherits the same name as the first path, but it is now used as a runtime name, and is not used as readily as the identifiers above because it may differ depending on the host you are using. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:C0:T0:L0 = Adapter 1, Channel 0, Target 0, and LUN 0.
  • /dev/sd<Device Letter> or /dev/sd<Device Letter><Partition> - This naming convention is not VMware specific. It is used exclusively by the service console and the open source utilities that come with the service console. The <Device Letter> represents the LUN or disk and is assigned by the service console during boot. The optional <Partition> represents the partition on the LUN or disk. These naming conventions may vary from ESX host to ESX host and may change if storage hardware is replaced. This identifier is generally used for operations with utilities such as fdisk and dd.
Note: VMware ESXi does not have a service console; disks are referred to by the VML Identifier.
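
As a quick cross-check on ESX(i) 4.x, the mapping option of esxcfg-scsidevs (also used later in these notes) shows which device backs which VMFS datastore; the datastore name here is just an example:

esxcfg-scsidevs -m | grep -i MyDatastore

Each mapping line pairs the device and partition with the VMFS UUID and datastore label.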

vSphere 5.5 Storage Enhancement Part 7 – LUN ID/RDM Restriction Lifted

Raw Device Mappings (RDMs) continued to rely on LUN IDs: if you wished to successfully vMotion a virtual machine with an RDM from one host to another, you had to ensure that the LUN was presented in a consistent manner (including identical LUN IDs) to every host that you wished to vMotion to.

I recently learnt that this restriction has been lifted in vSphere 5.5. To verify, I did a quick test, presenting the same LUN with a different LUN ID to two different hosts, using that LUN as an RDM, and then seeing if I could successfully vMotion the VM between those hosts. As my previous blog shows, this failed to pass compatibility tests in the past. Here are the results of my new tests with vSphere 5.5.


For this test, I used an EMC VNX array which presented LUNs over Fibre Channel to my ESXi hosts. Using EMC’s Unisphere tool, I created two storage groups. The first storage group (CH-SG-A) contained my first host, and the LUN was mapped with a Host LUN ID of 200; this is the ID under which the ESXi host sees the LUN.
Unisphere LUN 200 
My next storage group (CH-SG-B) contained my other host and the same LUN, but this time the Host LUN ID was set to 201.
Unisphere LUN 201 
Once the LUNs were visible on my ESXi hosts, it was time to map the LUN as an RDM to one of my virtual machines. The VM was initially on my first host, where the LUN ID appeared as 200. I mapped the LUN to my VM:
RDM Mapping LUN ID 200 
I proceeded with the vMotion operation. Previous versions of vSphere would have failed this at the compatibility check step, as per my previous blog post. However, this time on vSphere 5.5 the compatibility check succeeded even though the LUN backing the RDM had a different LUN ID at the destination host. I completed the vMotion wizard, selecting the host that also had the LUN mapped (you can only choose an ESXi host that has the LUN mapped), and the operation succeeded. I then examined the virtual machine at the destination, just to ensure that everything had worked as expected, and when I looked at the multipath details of the RDM, I could see that it was successfully using the RDM on LUN ID 201 on the destination ESXi host:
RDM LUN ID at destination 

So there you have it. The requirement to map all LUNs with the same ID to facilitate vMotion operations for VMs with RDMs has been lifted. A nice feature in 5.5.
Note: While a best practice would be to present all LUNs with the same LUN ID to all hosts, we have seen issues in the past where this was not possible. The nice thing is that this should no longer be a concern.
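
If you ever need to confirm which LUN an RDM pointer file maps to on a given host (the path below is a placeholder), you can query the mapping file with vmkfstools, the same usage that appears in the command list further down in these notes:

vmkfstools -q /vmfs/volumes/MyDatastore/MyVM/MyVM_1.vmdk

The output reports the vml identifier of the mapped device, which you can then match to an naa ID with ls -l /dev/disks/ | grep <vml.id>.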

Some Useful Storage and VMware Commands


To generate a compact list of all devices/LUNs:
#esxcfg-scsidevs -c
To check a specific LUN:
#esxcfg-scsidevs -c | grep naa.id
To find the unique identifier of the LUN,  you may run this command:
# esxcfg-scsidevs -m
To find the associated datastore for a LUN ID:
#esxcfg-scsidevs -m|grep naa.id
To get a list of RDM disks, you may run the following command:
#find /vmfs/volumes/ -type f -name '*.vmdk' -size -1024k -exec grep -l '^createType=.*RawDeviceMap' {} \; > /tmp/rdmluns.txt
This command saves the list of all RDM descriptor files to a text file, rdmluns.txt, in /tmp (redirect to a datastore path such as /vmfs/volumes/Datastore123/ instead if you prefer).
Now run the following command to find the associated LUNs:
#for i in `cat /tmp/rdmluns.txt`; do vmkfstools -q $i; done
This command gives you the vml ID of the RDM LUNs.
Now use the following command to map a vml ID to an naa ID:
#ls -luth /dev/disks/ | grep vml.id
In the output of this command you will get the LUN ID / naa ID.
To mark an RDM device as perennially reserved:
#esxcli storage core device setconfig -d naa.id --perennially-reserved=true
You may create a script to mark all RDMs as perennially reserved in one go.
Confirm that the correct devices are marked as perennially reserved by running this command on the host:
#esxcli storage core device list |less
To verify a specific LUN/device, run this command:
#esxcli storage core device list -d naa.id
The configuration is permanently stored with the ESXi host and persists across restarts.
To remove the perennially reserved flag, run this command
#esxcli storage core device setconfig -d naa.id --perennially-reserved=false
To obtain LUN multipathing information from the ESXi host command line:
To get detailed information regarding the paths.
#esxcli storage core path list
or To list the detailed information of the corresponding paths for a specific device,
#esxcli storage core path list -d naa.ID
To figure out if the device is managed by VMware’s native multipathing plugin (NMP) or by a third-party plugin:
#esxcli storage nmp device list -d naa.id
This command not only confirms that the device is managed by NMP, but also displays the Storage Array Type Plugin (SATP) used for path failover and the Path Selection Policy (PSP) used for load balancing.
To list LUN multipathing information,
#esxcli storage nmp device list
To list the available SATPs and their default path selection policies:
#esxcli storage nmp satp list
To change the multipathing policy
# esxcli storage nmp device set --device naa_id --psp path_policy
(VMW_PSP_MRU or VMW_PSP_FIXED or VMW_PSP_RR)
Note: These pathing policies apply to VMware’s Native Multipathing (NMP) Path Selection Plug-ins (PSP). Third-party PSPs have their own restrictions
To generate a list of all LUN paths currently connected to the ESXi host.
#esxcli storage core path list
For the detail path information of a specific device
#esxcli storage core path list -d naa.id
To generate a list of extents for each volume and mapping from device name to UUID,
#esxcli storage vmfs extent list
or, to generate a compact list of the LUNs currently connected to the ESXi host, including the VMFS version:
#esxcli storage filesystem list
To list the possible targets for certain storage operations,
#ls -alh /vmfs/devices/disks
To rescan all HBA Adapters,
#esxcli storage core adapter rescan --all
To rescan a specific HBA.
#esxcli storage core adapter rescan --adapter <vmkernel SCSI adapter name>
Where <vmkernel SCSI adapter name> is the vmhba# to be rescanned.
To get a list of all HBA adapters,
#esxcli storage core adapter list
Note: There may not be any output if there are no changes.
To search for new VMFS datastores, run this command,
#vmkfstools -V
To check which VAAI primitives are supported.
#esxcli storage core device vaai status get -d naa.id
The esxcli storage san namespace has some very useful commands. In the case of Fibre Channel, you can get information about which adapters are used for FC, and display the WWNN (node name) and WWPN (port name), the speed and the port state:
#esxcli storage san fc list
To display FC event information:
# esxcli storage san fc events get
VML ID
For example: vml.02000b0000600508b4000f57fa0000400002270000485356333630
Breaking apart the VML ID for a closer understanding: the first 4 digits are VMware specific, and the next 2 digits are the LUN identifier in hexadecimal.
In the preceding example, the LUN is mapped to LUN ID 11 (hex 0b).
NAA id
NAA stands for Network Addressing Authority identifier. EUI stands for Extended Unique Identifier. The number is guaranteed to be unique to that LUN.
The NAA or EUI identifier is the preferred method of identifying LUNs and the number is generated by the storage device. Since the NAA or EUI is unique to the LUN, if the LUN is presented the same way across all ESXi hosts, the NAA or EUI identifier remains the same.
Path Identifier: vmhba<Adapter>:C<Channel>:T<Target>:L<LUN>
This identifier is now used exclusively to identify a path to the LUN. When ESXi detects the paths associated with a LUN, each path is assigned this path identifier. The LUN also inherits the same name as the first path, but it is now used as a runtime name, and is not used as readily as the identifiers above because it may differ depending on the host you are using. This identifier is generally used for operations with utilities such as vmkfstools.
Example: vmhba1:C0:T0:L0 = Adapter 1, Channel 0, Target 0, and LUN 0.
To determine the firmware for a QLogic HBA on an ESXi/ESX host 5.1
(QLogic)
To determine the firmware for a QLogic fibre adapter, run these commands on the ESXi/ESX host:
Go to /proc/scsi/qla####, where #### is the model of the QLogic HBA.
Run the ls command to see all of the adapters in the directory.
The output appears similar to:
1 2 HbaApiNode
Run the command:
head -2 #
Where # is the HBA number.
You see output similar to:
QLogic PCI to Fibre Channel Host Adapter for QLA2340 :
Firmware version: 3.03.19, Driver version 7.07.04.2vmw
To determine the firmware for a QLogic iSCSI hardware initiator on an ESXi/ESX host:
Go to /proc/scsi/qla####, where #### is the model of the QLogic HBA.
Run the ls command to see all of the adapters in the directory.
You see output similar to:
1 2 HbaApiNode
Run the command:
head -4 #
Where # is the HBA number.
You see output similar to:
QLogic iSCSI Adapter for ISP 4022:
Driver version 3.24
Code starts at address = 0x82a314
Firmware version 2.00.00.45
(Emulex)
To determine the firmware for an Emulex HBA on an ESXi/ESX host 5.1
Go to /proc/scsi/lpfc.
Note: lpfc may have the model number appended, for example /proc/scsi/lpfc820.
Run the ls command to see all of the adapters in the directory.
You see output similar to:
1 2
Run the command:
head -5 #
where # is the HBA number.
You see output similar to:
Emulex LightPulse FC SCSI 7.3.2_vmw2
Emulex LP10000DC 2Gb 2-port PCI-X Fibre Channel Adapter on PCI bus 42 device 08 irq 42
SerialNum: BG51909398
Firmware Version: 1.91A5 (T2D1.91A5)
Notes:
To determine the firmware for an Emulex HBA on an ESXi/ESX host 5.5
In ESXi 5.5, you do not see native drivers in the /proc nodes. To view native driver details, run the command:
/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a
To Get Hardware Details/Information:
# esxcfg-info | less -I
Identify the SCSI shared storage devices with the following command:
For ESX/ESXi 4.x, ESXi 5.x and 6.0, run the command:
# esxcfg-scsidevs -l | egrep -i 'display name|vendor'
Run this command to find additional peripherals and devices:
# lspci -vvv
Installing a VIB
#esxcli software vib install -d /vmfs/volumes/datastore_name/driver_file_name.zip
Removing a VIB
#esxcli software vib remove -n <VIB_name> -f
ESX Monitoring Steps
Configure SNMP Communities
esxcli system snmp set --communities public
Configure the SNMP Agent to Send SNMP v1 or v2c Traps
If the SNMP agent is not enabled, enable it by typing
esxcli system snmp set --enable true
esxcli system snmp set --targets target.example.com@162/public
Send a test trap to verify that the agent is configured correctly by typing
esxcli system snmp test
The agent sends a warmStart trap to the configured target
Creating ESX Logs from Command Line
vm-support
Creating /var/tmp/esx-(Hostname).tgz
cp  /var/tmp/esx-Z2T3GBGLPLM26-2014-12-17–11.24.tgz /vmfs/volumes/glplx94_vmdata_iso_01/ESX_Logs
Rename the CTK.VMDK
Go to the datastore
Go to the machine folder
Rename the file: mv xyz-ctk.vmdk xyz-ctk_old.vmdk
Then power on the machine
Install VMware tools without Reboot
/s /v/qn ADDLOCAL=ALL REBOOT=ReallySuppress
To Read a File in ESX
vi <filename>
  • To search, press Esc and then / followed by the search keyword
  • Use n to jump to the next match
  • To exit the file, press Esc and then type :q!
cat <filename> | grep -i <keyword>
cat <filename> | grep -e <keyword> -e <keyword>
less <filename>
Shift + G (to go to the end)
To Read the Last 100 Lines of a File
tail -n 100 <filename>   (add -f to keep following the file)
To get VM Snapshot Details
get-vm | get-snapshot | format-list vm,name,SizeMB,Created,IsCurrent | out-file c:\a.txt
To Get Array Details from ESXi 5.1
esxcli hpssacli cmd -q "controller all show status"
To Get VM Created Date
Get-VIEvent -MaxSamples 10000 -Start (Get-Date).AddDays(-60) | where {$_.GetType().Name -eq "VmCreatedEvent" -or $_.GetType().Name -eq "VmBeingClonedEvent" -or $_.GetType().Name -eq "VmBeingDeployedEvent"} | Sort CreatedTime -Descending | Select CreatedTime, UserName, FullFormattedMessage | Format-Table -AutoSize
Find AMS Version
 #esxcli software vib list | grep ams

Powering on a virtual machine fails with the error: NVRAM write failure


Symptoms

  • Cannot power on a virtual machine
  • Powering on a virtual machine fails
  • You see the error:

    NVRAM write failure

Cause

This issue occurs if there are issues with the underlying storage, which results in subsequent writes to the NVRAM file to fail.
 
The NVRAM file of a virtual machine stores the state of the virtual machine's BIOS settings. When you modify the BIOS settings, ESXi makes the changes persistent by storing them in the NVRAM file. If there are any issues with the underlying datastore on which the virtual machine's configuration files are stored, it results in failures while writing the new BIOS settings to the NVRAM file of the virtual machine.

Resolution

To resolve this issue, check and resolve any problems with the underlying datastore. After the underlying datastore becomes healthy, a new NVRAM file is automatically created the next time the virtual machine is powered off and then powered on.
 
Note: Any custom BIOS settings that you had applied may be lost during the failure. You must reapply the settings after the storage problem is resolved.
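
To locate the NVRAM file and see when it was last written, you can list it from the host's shell; the datastore and VM folder names below are placeholders:

ls -lh /vmfs/volumes/MyDatastore/MyVM/*.nvram

The file normally sits in the virtual machine's folder next to the .vmx file and is named after the VM (the nvram entry in the .vmx points to it).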

ESXi Hosts Show Up as a VEM Module with All Zeros for UUIDs on the Nexus 1000v

We were troubleshooting an issue with the N1K, where random VMs would lose network connectivity. Upon running a “show mod” on the N1K we saw the following:
switch# show mod | no-more
Mod Ports Module-Type Model Status
--- ----- -------------------------------- ------------------ ------------
1 0 Virtual Supervisor Module Nexus1000V active *
3 248 Virtual Ethernet Module NA licensed
4 248 Virtual Ethernet Module NA licensed
5 248 Virtual Ethernet Module NA licensed

Mod Sw Hw
--- ---------------- ------------------------------------------------
1 4.2(1)SV1(4b) 0.0
3 4.2(1)SV1(4b) VMware ESXi 4.1.0 Releasebuild-348481 (2.0)
4 4.2(1)SV1(4b) VMware ESXi 4.1.0 Releasebuild-348481 (2.0)
5 4.2(1)SV1(4b) VMware ESXi 4.1.0 Releasebuild-348481 (2.0)

Mod MAC-Address(es) Serial-Num
--- -------------------------------------- ----------
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
3 02-00-0c-00-03-00 to 02-00-0c-00-03-80 NA
4 02-00-0c-00-04-00 to 02-00-0c-00-04-80 NA
5 02-00-0c-00-05-00 to 02-00-0c-00-05-80 NA

Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
1 192.168.1.139 NA NA
3 192.168.1.134 00000000-0000-0000-0000-000000000000 localhost1.
4 192.168.1.136 00000000-0000-0000-0000-000000000000 localhost2.
5 192.168.1.137 42343a8f-65b9-e0ae-acf2-b6d4e3995147 localhost3.
Two of the VEM modules were showing up with the UUID value of all zeros. Running ‘vemcmd show card’ on the working host we saw the following:
~ # vemcmd show card | grep UUID
Card UUID type 2: 42343a8f-65b9-e0ae-acf2-b6d4e3995147
On one of the two non-working hosts, we saw the following:
~ # vemcmd show card | grep UUID
Card UUID type 2: 00000000-0000-0000-0000-000000000000
I then ran across a Cisco forums thread, and from that page:
startDpa calls a script in /opt/cisco/vXXX/nexus/vem-vXXX/shell/vssnet-functions and extracts the UUID from the ESXi host:
setBiosUuid() {
    local UUID
    UUID=$(esxcfg-info -u | awk '{print tolower($1)}')
    if [ "${UUID}" != "" ] ; then
        doCommand ${VEMCMD} card uuid vmware ${UUID}
    fi
}
So the UUID is obtained from “esxcfg-info -u”. Running that command on working and non-working hosts, I saw the following:
~ # esxcfg-info -u
42343A8F-65B9-E0AE-ACF2-B6D4E3995147
and from one of the non-working host:
~ # esxcfg-info -u
00000000-0000-0000-0000-000000000000
Looking over VMware KB 1006250, we see the following:
The UUID is read by the ESX host from the SMBIOS … … This UUID is not generated by VMware. It is unique to the hardware and is set in the BIOS by the vendor. The output of the dmidecode command may show other examples of missing data.
We were using ESXi and no ‘dmidecode’ utility is available. However we can use ‘vsish’ and ‘vim-cmd’ to query the same information. Here is output for a good host:
~ # vsish -e cat /hardware/bios/dmiInfo | head -5
System Information (type1) {
Product Name:R250-2480805W
Vendor Name:Cisco Systems Inc
Serial Number:FCH1551v06J
UUID:[0]: 0x42
and here is the vim-cmd results:
~ # vim-cmd hostsvc/hosthardware | grep uuid -B 6
(vim.host.HardwareInfo) {
dynamicType = ,
systemInfo = (vim.host.SystemInfo) {
dynamicType = ,
vendor = "Cisco Systems Inc",
model = "R250-2480805W",
uuid = "42343a8f-65b9-e0ae-acf2-b6d4e3995147",
And here were the results from a non-working host:
~ # vsish -e cat /hardware/bios/dmiInfo | head -5
System Information (type1) {
Product Name:
Vendor Name:
Serial Number:
UUID:[0]: 0x00
and the vim-cmd results:
~ # vim-cmd hostsvc/hosthardware | grep uuid -B 6
(vim.host.HardwareInfo) {
dynamicType = ,
systemInfo = (vim.host.SystemInfo) {
dynamicType = ,
vendor = "",
model = "",
uuid = "00000000-0000-0000-0000-000000000000",
So the BIOS of the Cisco servers was not returning the SMBIOS information. We rebooted the host and the issue persisted. We then powered off the host, unplugged the power cables for 5 minutes, powered it back on, and the values showed up without issues. When the host came back up, 'show mod' listed the new modules as connected, but it also still had the old zeroed-out modules. It looked like this:
switch# show mod | no-more
...
...
Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
1 192.168.1.139 NA NA
3 192.168.1.134 00000000-0000-0000-0000-000000000000 localhost1.
4 192.168.1.136 00000000-0000-0000-0000-000000000000 localhost2.
5 192.168.1.137 423416ec-385c-3be9-c26c-2f01b6b48ca7 localhost3.
6 192.168.1.134 42343a8f-65b9-e0ae-acf2-b6d4e3995147 192.168.1.134
We tried to remove the VEM module manually:
switch# conf t
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)# no vem 3
ERROR: module 3 is inserted, cannot remove
But it failed. So we ran the following:
switch# system switchover
That failed over to the other HA standby VSM (if you have one set up), and then the stale module was gone. We did the same thing for the other host that had all zeros for its UUID and it worked just fine.
One last note: don’t confuse this UUID with the “System UUID”; they are two completely different UUIDs. To find the “System UUID” you can do the following:
~ # grep uuid /etc/vmware/esx.conf
/system/uuid = "4ff35a91-ab62-fc60-8199-0050561721df"
~ # esxcfg-info -y | grep "System UUID"
|----System UUID.................................................4ff35a91-ab62-fc60-8199-0050561721df
VMware KB 1024791 talks about its uses. Here are a couple:
  1. For locking files
  2. Generating Mac Addresses for management interfaces
And I am sure there are a lot of other VMware functions that rely on that value, but again it’s different from the “BIOS UUID”, which the Cisco VEM depends on. Here are both values seen from the same host:

~ # esxcfg-info -a | grep -E 'BIOS UUID|System UUID'
|----BIOS UUID......0x42 0x34 0x3a 0x8f 0x65 0xb9 0xe0 0xae 0xac 0xf2 0xb6 0xd4 0xe3 0x99 0x51 0x47
|----System UUID....4ff35a91-ab62-fc60-8199-0050561721df

What is UCS,VSM and VEM

A unified computing system (UCS) is a converged data center architecture that integrates computing, networking and storage resources to increase efficiency and enable centralized management.


UCS products are designed and configured to work together effectively. The goal of a UCS product line is to simplify the number of devices that need to be connected, configured, cooled and secured, and to provide administrators with the ability to manage everything through a single graphical interface.
The term unified computing system is often associated with Cisco.

 Cisco UCS products have the ability to support traditional operating system (OS) and application stacks in physical environments, but are optimized for virtualized environments.

Everything is managed through Cisco UCS Manager, a software application that allows administrators to provision the server, storage and network resources all at once from a single pane of glass.

Similar offerings to Cisco UCS include HP BladeSystem Matrix, Liquid Computing's LiquidIQ, Sun Modular Datacenter and InteliCloud 360.


Cisco Nexus 1000V manages a data center defined by a VirtualCenter. Each server in the data center is represented as a module and can be managed as if it were a module in a physical Cisco switch.
The Cisco Nexus 1000V implementation has 2 parts:
Virtual supervisor module (VSM) - This is the control software of the Cisco Nexus 1000V distributed virtual switch. It runs on a virtual machine (VM) and is based on Cisco NX-OS software.
Virtual Ethernet module (VEM) - This is the part of Cisco Nexus 1000V that actually switches data traffic. It runs on a VMware ESX 4.0 host. Several VEMs are controlled by one VSM. All the VEMs that form a switch domain should be in the same virtual Data Center as defined by VMware VirtualCenter.
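
On the ESXi host itself, the usual quick sanity check that the VEM is installed and its agent is running is the vem status command; this is quoted from general Nexus 1000V practice rather than from the notes above, so verify the exact syntax against the Cisco documentation for your VEM version:

~ # vem status -v

The verbose output should show that the VEM agent (vemdpa) is running and list the vmnics attached to the Nexus 1000V DVS.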