Friday, 30 December 2016

My practice list

Practice list:

1. Deploy ESX
2. Deploy Windows Server 2012 on ESX with AD DS and DNS
3. Integrate the DNS server with the ESX server (sunil@vmware.lab.com)
4. Deploy Exchange DAG cluster nodes
5. Deploy Linux clustering
6. Deploy MySQL and MongoDB clustering
7. Deploy SKY with others
8. Deploy Veeam backup with a Windows or Linux server
9. Deploy EMC NetWorker with Windows or Linux
10. Check how the nodes work if one of the nodes is down
11. Integrate a Linux DNS server with the ESX server
12. Deploy Report Manager with SKY
13. Deploy AGM
14. How to configure and use iLO
15. Configure DHCP from Windows and Linux servers
16. How to use FreeNAS
17. Install and configure iSCSI using FreeNAS as a storage LUN
18. Nmap commands
19. tcpdump commands
20. ethtool commands
21. VMware deployment tool, including scripted deployment
22. Accessing a Windows machine over IMM or iLO
23. Deploying a Zmanda server
24. Using VMware from the CLI, i.e. vmshell
25. Windows 2012 roles and how to configure and use them
26. Installing XenServer
27. Installing Hyper-V Server
28. Installing OVM
29. Installing RHEV-M
30. How to download VMware image files for supported hardware
31. How to use Windows Server as a network router {https://redmondmag.com/articles/2015/04/23/windows-server-as-a-network-router.aspx}
32. Installing ESXi 6 with vCenter 6
33. Creating switches and bonding in ESXi networking
34. Creating a VM with RDM
35. Understanding Openfiler completely
36. Usage of netcat
37. P2V and V2V
38. Creating standard and distributed switches
39. VMware View (Horizon), VMware ThinApp, and VMware vCloud
40. DRS and HA
41. DRS and HA with FT
42. FT on a VM
43. Clustering without DRS and HA
44.

wants:
AIX & Solaris
Clustering and load balancing in Linux, Windows, and VMware
Shell scripting
Python
CIFS and NFS protocols
ISM certification
VMware certification
fabric administration
cloud
CEH
OSCP
CISSP




Read documents:
1. ISM document
2. VMware 5.5 and 6.0 documentation
3. Linux administration documents
4. Actifio CLI reference guide
5.


Look for videos on:
1. Installing and configuring an FC switch
2. FC connections between servers and storage
3. How to install storage
4. Ethical hacking
5. Penetration testing
6. OSCP
7. Python scripting
8. Windows scripting
9. Linux scripting



Refer to concepts:
1. AD DS
2. DNS
3. IIS
4. WDS
5. NPAS
6. Application Server
7. Linux server concepts: DHCP, DNS, web server, CIFS, NFS, Squid, clustering
8. Linux LDAP
9.

Thursday, 22 December 2016

Difference between mtime, ctime and atime



An often-heard question is "What are ctime, mtime and atime?" A common mistake is to assume that ctime is the file creation time. This is not correct: it is the inode/file change time, while mtime is the file modification time. Since this is confusing, let me explain the difference between ctime, mtime and atime.

ctime

ctime is the inode or file change time. The ctime gets updated when the file attributes are changed, such as changing the owner, changing the permissions, or moving the file to another filesystem, but it is also updated when you modify a file.

mtime

mtime is the file modification time. The mtime gets updated when you modify a file: whenever you update the content of a file or save it, the mtime gets updated.

Most of the time, ctime and mtime will be the same, unless only the file attributes are updated; in that case only the ctime gets updated.

atime

atime is the file access time. The atime gets updated when you open a file, but also when a file is read by other operations like grep, sort, cat, head, tail, and so on.
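To see all three timestamps at once and verify this behaviour, you can use the stat command; a minimal sketch, assuming GNU coreutils and a throw-away file name:

touch demo.txt
stat demo.txt                 # shows Access (atime), Modify (mtime) and Change (ctime) lines

chmod 600 demo.txt            # attribute change: only ctime is updated
echo "new line" >> demo.txt   # content change: mtime (and ctime) are updated
cat demo.txt > /dev/null      # read: atime is updated (subject to mount options such as noatime/relatime)
stat demo.txt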

In the descriptions, wherever n is used as a primary argument, it shall be interpreted as a decimal integer optionally preceded by a plus ( '+' ) or minus ( '-' ) sign, as follows:
+n More than n.
  n Exactly n.
-n Less than n.

Consider, for example, why find -mtime +1 run at 2014-09-01 00:53:44 -04:00 (AST, Atlantic Standard Time; the offset from UTC is -4:00 in ISO 8601 notation but +4:00 in ISO 9945/POSIX notation, though it doesn't matter much here) does not match a log file last modified at 2014-08-30 23:59:00:
1409547224 = 2014-09-01 00:53:44 -04:00
1409457540 = 2014-08-30 23:59:00 -04:00
so:
1409547224 - 1409457540 = 89684
89684 / 86400 = 1
Even if the 'seconds since the epoch' values are wrong, the relative values are correct (for some time zone somewhere in the world, they are correct).

The n value calculated for the 2014-08-30 log file therefore is exactly 1 (the calculation is done with integer arithmetic), and the +1 rejects it because it is strictly a > 1 comparison (and not >= 1).
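The same whole-day arithmetic can be reproduced in the shell; a small sketch reusing the epoch values above (bash integer arithmetic):

now=1409547224     # 2014-09-01 00:53:44 -04:00 (time find was run)
file=1409457540    # 2014-08-30 23:59:00 -04:00 (log file mtime)
echo $(( (now - file) / 86400 ))    # prints 1, so -mtime +1 (strictly greater than 1) does not match it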

"-mtime -2" means files that are less than 2 days old, such as a file that is 0 or 1 days old.

"-mtime +2" means files that are more than 2 days old... {3, 4, 5, ...}


The argument to -mtime is interpreted as the number of whole days in the age of the file. -mtime +n means strictly greater than, -mtime -n means strictly less than.
Note that with Bash, you can do the more intuitive:
$ find . -mmin +$((60*24))
$ find . -mmin -$((60*24))
to find files older and newer than 24 hours, respectively.

Thursday, 15 December 2016

What is vStorage APIs for Array Integration (VAAI), and how is it useful for thin-provisioning VMs and reclaiming space with the UNMAP option in VMware?

vStorage API for Array Integration (VAAI) is an application program interface (API) framework from VMware that enables certain storage tasks, such as thin provisioning, to be offloaded from the VMware server virtualization hardware to the storage array.


Offloading these tasks lessens the processing workload on the virtual server hardware. For a storage administrator to make use of VAAI, the manufacturer of his storage system must have built support for VAAI into the storage system.

Introduced in vSphere 4 with support for block-based (Fibre Channel or iSCSI) storage systems, VAAI consisted of a number of primitives, or parts.

 “Copy offload” enables the storage system to make full copies of data within the array, offloading that chore from the ESX server. “Write same offload” enables the storage system to zero out a large number of data blocks to speed the provisioning of virtual machines (VMs) and reduce I/O.

Hardware-assisted locking allows vCenter to offload SCSI commands from the ESX server to the storage system so the array can control the locking mechanism while the system does data updates.
In vSphere 5, vStorage APIs for Array Integration were enhanced. The most notable new functionality addresses thin provisioning of storage systems and expands support to network-attached storage (NAS) devices.

VAAI’s thin provisioning enhancements allow storage arrays that use thin provisioning to reclaim blocks of space when a virtual disk is deleted, to mitigate the risk of a thinly provisioned storage array running out of space. Thin-provisioned storage systems that support vSphere 5’s VAAI are given advance warnings when space thresholds are reached.

In addition, in that version, VAAI enables mechanisms to temporarily pause virtual machines when space runs out, giving admins time to add storage or migrate the virtual machine to a different array.
VAAI’s “hardware acceleration of NAS” component has two primitives, according to VMware.

Full file clone enables the NAS device to clone virtual disks to speed the process of creating virtual machines on NAS systems. “Reserve space” enables the creation of a thick virtual disk on a NAS device.

Qualified storage array vendors can partner with VMware to develop the firmware and plug-ins required by their arrays.
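To check which of these primitives a particular device actually reports to an ESXi host, the esxcli VAAI status namespace can be queried; a quick sketch (the naa identifier below is a placeholder for a real LUN identifier, and the Delete status is the one that reflects UNMAP support):

esxcli storage core device vaai status get                            # ATS, Clone, Zero and Delete status for all devices
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx    # the same for a single device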

IBM SVC support for the UNMAP LUN option with thin provisioning in VMware

IBM SVC definitely does NOT support SCSI UNMAP, and it will not pass UNMAP commands through to the back-end disks if it receives them. In fact, if the SVC layer sees an UNMAP command, it simply ignores it.
IBM SVC does zero-detect: if a write contains nothing but zeros and the target disk is space-efficient, the write is not destaged to disk. Instead, the B-tree is updated to mark those grains as free, but the grains are not released, meaning no space is reclaimed. The only way to reclaim the space is to do a VDisk copy (which is far too much work).

In-band LUNs are image mode, so if VMware did write zeros, they would be written to disk by the SVC layer (rather than being dropped).
I don't know how 3PAR zero-detect works, but maybe it releases space immediately (rather than just doing a B-tree update).

If so, that would work out-of-band (OOB) as well as in-band (IB).
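In practice, the only guest-side way to feed an array's zero-detect is to overwrite free space with zeros; a rough sketch (the mount point and file name are placeholders). As noted above, on SVC this only updates the B-tree rather than returning space, whereas an array that releases zeroed space immediately would actually reclaim it:

dd if=/dev/zero of=/mnt/data/zerofill bs=1M    # fill free space with zeros (stops when the filesystem is full; leave headroom in production)
sync
rm -f /mnt/data/zerofill                       # remove the temporary file afterwards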

Using the esxcli storage vmfs unmap command to reclaim VMFS deleted blocks on thin-provisioned LUNs (2057513)

This article provides steps to reclaim unused storage blocks on a VMFS datastore for a thin-provisioned device using the esxcli storage vmfs unmap command.
 
vSphere 5.5 introduced a new command in the esxcli namespace that allows deleted blocks to be reclaimed on thin provisioned LUNs that support the VAAI UNMAP primitive.

The command can be run without any maintenance window, and the reclaim mechanism has been enhanced as such:
  • Reclaim size can be specified in blocks instead of a percentage value to make it more intuitive to calculate.
  • Dead space is reclaimed in increments instead of all at once to avoid possible performance issues.
With the introduction of 62 TB VMDKs, UNMAP can now handle much larger dead space areas. However, UNMAP operations are still manual. This means Storage vMotion or Snapshot Consolidation tasks on VMFS do not automatically reclaim space on the array LUN.

Note: The vmkfstools -y command is deprecated in ESXi 5.5. For more information on reclaiming space in vSphere 5.0 and 5.1, see Using vmkfstools to reclaim VMFS deleted blocks on thin-provisioned LUNs (2014849).
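For comparison, the older 5.0/5.1 method referenced in that KB worked roughly like this (deprecated in 5.5 and shown only as a sketch; the datastore name and the 60% figure are placeholders):

cd /vmfs/volumes/MyDatastore
vmkfstools -y 60      # reclaim up to 60% of the free space on the datastore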

Resolution

To reclaim unused storage blocks on a VMFS datastore for a thin-provisioned device, run this command:

esxcli storage vmfs unmap --volume-label=volume_label|--volume-uuid=volume_uuid --reclaim-unit=number
The command takes these options:
  • -l|--volume-label=volume_label

    The label of the VMFS volume to UNMAP. This is a mandatory argument. If you specify this argument, do not use -u|--volume-uuid=volume_uuid.

  • -u|--volume-uuid=volume_uuid

    The UUID of the VMFS volume to UNMAP. This is a mandatory argument. If you specify this argument, do not use -l|--volume-label=volume_label.

  • -n|--reclaim-unit=number

    The number of VMFS blocks to UNMAP per iteration. This is an optional argument. If it is not specified, the command uses a default value of 200.
For example, for a VMFS volume named MyDatastore with UUID of 509a9f1f-4ffb6678-f1db-001ec9ab780e, run this command:

esxcli storage vmfs unmap -l MyDatastore

or

esxcli storage vmfs unmap -u 509a9f1f-4ffb6678-f1db-001ec9ab780e
 
Notes:
  • The default value of 200 for the -n number or --reclaim-unit=number argument is appropriate in most environments, but some array vendors may suggest a larger or smaller value depending on how the array handles the SCSI UNMAP command.

  • Similar to the previous vmkfstools -y method, the esxcli storage vmfs unmap command creates temporary hidden files at the top level of the datastore, but with names using the .asyncUnmapFile pattern. By default, the space reservation for the temporary files depends on the block size of the underlying VMFS file system (the default is --reclaim-unit=200; a quick way to check a datastore's block size is shown in the sketch after these notes):

    • 200 MB for 1 MB block VMFS3 / VMFS5
    • 800 MB for 4 MB block VMFS3
    • 1600 MB for 8 MB block VMFS3

  • Depending on the use case, an administrator can select a different --reclaim-unit value, for example if the reserved size is considered to be too large or if there is a danger that the UNMAP primitive may not be completed in a timely manner when offloaded to an array. VMware recommends that vSphere administrators consult with their storage array providers on the best value or best practices when manually defining a --reclaim-unit value. An example of passing a non-default value is shown in the sketch after these notes.

  • If the UNMAP operation is interrupted, a temporary file may be left on the root of a VMFS datastore. However, when you run the command against the datastore again, the file is deleted when the command completes successfully. The .asyncUnmapFile will never grow beyond the --reclaim-unit size.

  • The UNMAP operation may finish without doing anything or fail if the volume partition table and/or the block alignment is incorrect, for example due to upgrading a VMFS3 file system or repartitioning the volume with a third-party tool. For more information, see Thin Provisioning Block Space Reclamation (VAAI UNMAP) does not work (2048466).

  • If the UNMAP operation fails and you see an error about locked files or resource busy, see:
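Tying the notes above together, here is a small sketch of checking a datastore's block size and then running UNMAP with a non-default reclaim unit (MyDatastore and the value 100 are placeholders; check with your array vendor before changing the default):

vmkfstools -Ph /vmfs/volumes/MyDatastore          # shows the VMFS version and block size, which determine the temporary file's reservation
esxcli storage vmfs unmap -l MyDatastore -n 100   # reclaim in units of 100 VMFS blocks instead of the default 200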

Understanding Storage Device Naming


Each storage device, or LUN, is identified by several names.
Depending on the type of storage, the ESXi host uses different algorithms and conventions to generate an identifier for each storage device.
SCSI INQUIRY identifiers.
The host uses the SCSI INQUIRY command to query a storage device and uses the resulting data, in particular the Page 83 information, to generate a unique identifier. These device identifiers are unique across all hosts, persistent, and have one of the following formats:
naa.number
t10.number
eui.number
These formats follow the T10 committee standards. See the SCSI-3 documentation on the T10 committee Web site.
Path-based identifier.
When the device does not provide the Page 83 information, the host generates an mpx.path name, where path represents the path to the device, for example, mpx.vmhba1:C0:T1:L3. This identifier can be used in the same way as the SCSI INQUIRY identifiers.
The mpx. identifier is created for local devices on the assumption that their path names are unique. However, this identifier is neither unique nor persistent and could change after every boot.
In addition to the SCSI INQUIRY or mpx. identifiers, for each device, ESXi generates an alternative legacy name. The identifier has the following format:
vml.number
The legacy identifier includes a series of digits that are unique to the device and can be derived in part from the Page 83 information, if it is available. For nonlocal devices that do not support Page 83 information, the vml. name is used as the only available unique identifier.
You can use the esxcli --server=server_name storage core device list command to display all device names in the vSphere CLI. The output is similar to the following example:
# esxcli --server=server_name storage core device list
naa.number
 Display Name: DGC Fibre Channel Disk(naa.number)
 ... 
 Other UIDs:vml.number
In the vSphere Client, you can see the device identifier and a runtime name. The runtime name is generated by the host and represents the name of the first path to the device. It is not a reliable identifier for the device, and is not persistent.
Typically, the path to the device has the following format:
vmhbaAdapter:CChannel:TTarget:LLUN
vmhbaAdapter is the name of the storage adapter. The name refers to the physical adapter on the host, not to the SCSI controller used by the virtual machines.
CChannel is the storage channel number.
Software iSCSI adapters and dependent hardware adapters use the channel number to show multiple paths to the same target.
TTarget is the target number. Target numbering is determined by the host and might change if the mappings of targets visible to the host change. Targets that are shared by different hosts might not have the same target number.
LLUN is the LUN number that shows the position of the LUN within the target. The LUN number is provided by the storage system. If a target has only one LUN, the LUN number is always zero (0).
For example, vmhba1:C0:T3:L1 represents LUN1 on target 3 accessed through the storage adapter vmhba1 and channel 0.
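To see these runtime names alongside the device identifiers on a host, you can list the paths; a short sketch (the naa identifier is a placeholder):

esxcli storage core path list                            # all paths, with their vmhbaN:C:T:L runtime names and target devices
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx    # paths for a single device only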

Determining if an array supports automated unmap in vSphere 6.5


Many of you will be aware of the new core storage features that were introduced in vSphere 6.5. If not, you can learn about them in this recently published white paper. Without doubt, the feature that has created the most interest is automated unmap (finally, I hear you say!). Now a few readers have asked about the following comment in the automated unmap section.
 
Automatic UNMAP is not supported on arrays with UNMAP granularity 
greater than 1MB. Auto UNMAP feature support is footnoted in the 
VMware Hardware Compatibility Guide (HCL).

So where do you find this info in the HCL? I’ll show you here.
First, navigate to the VMware HCL – Storage/SAN Section. Click here for a direct link. 

Next, select ESXi 6.5, the vendor of the array, and in the features section, Thin Provisioning. Finally, click "Update and View Results". This will provide a list of supported arrays and models. In this example, I selected DELL as a vendor. It produced a range of Compellent and EQL arrays. I chose Compellent Storage Center with Array Type FC, as shown below:

[Screenshot: vcg-compellent-fc]
Now, if I expand the 6.5 supported release, I can see the tested configurations, including the supported driver, firmware, multipathing and fail-over policies associated with this array config.
[Screenshot: vcg-compellent-fc-configs]
If I expand any of those entries, I will see the footnote entries. In this example, let me select the FW version 7.1, with 16GB FC Switch and lpfc HBA.
[Screenshot: vcg-fc-footnotes]
And there you can see the statement in the footnotes that this configuration supports automatic storage space reclamation on VMFS6.
Let’s compare this to an iSCSI configuration using the same arrays. Here is the list of iSCSI Compellent configurations, once more detailing the supported driver, firmware, multipathing and fail-over policies associated with this array config.
[Screenshot: vcg-compellent-configs]
Let’s select the 7.1 firmware version as before, and examine the footnotes:
[Screenshot: vcg-iscsi-footnotes]
As you can clearly see, there is no mention of automatic unmap/space reclamation support in the footnotes in this example; therefore, this configuration is unsupported.
Now this will continue to change, so check back regularly to see whether or not your array vendor has passed the certification tests required to support automated unmap. Alternatively, speak to your array vendor to find out where they are on the certification path.
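Apart from the HCL, you can also inspect (and change) the automatic space-reclamation setting on a VMFS6 datastore from the host itself; a minimal sketch, assuming vSphere 6.5 and using MyDatastore as a placeholder name:

esxcli storage vmfs reclaim config get -l MyDatastore           # shows the reclaim granularity and priority for the datastore
esxcli storage vmfs reclaim config set -l MyDatastore -p none   # disable automatic unmap for this datastore
esxcli storage vmfs reclaim config set -l MyDatastore -p low    # re-enable it at the default (low) priority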