Thursday, October 18, 2012

Best Practices around using RDMs in vSphere!

One of VMware's partner engineers raised this query on an internal group. He wanted to understand and learn the best practices, the do's and don'ts, of using RDM (Raw Device Mapping) LUNs in a vSphere environment.

I hope you, as a reader, understand what an RDM is and what role it plays in a vSphere environment. In case you are not aware of RDMs, kindly refer to the following document - the vSphere 5.x Storage Guide - and read about Raw Device Mapping (RDM).

During this discussion we will consider the following requirements which we need to meet:
  • We need to provision RDMs for more than 20 VMs, with disk sizes varying from 1 TB to 19 TB.
  • The RDMs are intended for configuring MSCS on VMs running MS Exchange, MS SQL, file servers, etc.

A usual topic of discussion is choosing between RDMs and VMDKs. Since we have already solved that mystery, there is not much to worry about: by choosing RDMs instead of VMDKs, we are already following the best practices for the application layer. Now, since we are mapping LUNs directly to our virtual machines, there are a few things we should take care of:
  1. Choosing between physical compatibility mode and virtual compatibility mode for the RDM – A physical-mode RDM is driven by the storage array and controlled by the virtual machine. The VMkernel has limited or no role to play; it literally becomes a postman delivering I/Os from the guest OS to the LUN (just like an OS running on a physical server saving data to a storage LUN). This will restrict you from using VM-level snapshots and other VMkernel file-locking technologies. Since we are talking about disk sizes of more than 2 TB, please ensure you are on VMFS-5 and use physical compatibility mode only, as VMFS-5 does not support virtual-mode RDMs for LUNs greater than 2 TB minus 512 bytes. A physical-mode RDM, on the other hand, can be used for LUNs of up to 64 TB (VMFS-5 required). A sketch of creating such a mapping follows this list.
  2. Ensure you have the correct multipathing and failover settings (also covered in the sketch below).
  3. Please ensure your zoning is correct. Test heavily before pushing things into production.
  4. Follow the MSCS on vSphere guide without fail to avoid any last-minute surprises.
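
For points 1 and 2, here is a minimal sketch of what this can look like from the ESXi shell. The device identifier (naa.xxx), datastore, and file names below are placeholders for illustration only, and the path selection policy must follow your storage vendor's recommendation and the MSCS on vSphere guide:

  # Create a physical compatibility mode RDM mapping file (-z) on a VMFS-5 datastore
  vmkfstools -z /vmfs/devices/disks/naa.xxx /vmfs/volumes/Datastore1/SQL-VM/SQL-VM_rdm.vmdk

  # Review the paths and the current path selection policy for the device
  esxcli storage nmp device list -d naa.xxx

  # Example syntax only - set the policy your array vendor and the MSCS guide recommend
  esxcli storage nmp device set -d naa.xxx -P VMW_PSP_FIXED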



Well, this should help you do the right things with RDMs. Ensure you go through the documents I mentioned above and you should be good to go.


6 comments:

  1. Hi Sunny,

    Quick question - you say that "VMFS 5 does not support RDM with Virtual compatibility mode for Luns greater than 2TB" and I see the same limitation in VMware's document entitled "Configuration Maximums - VMware® vSphere 5.1". Could you help me better understand what this means in practice?

    In our case, we have two disk arrays configured in a RAID 6 array comprising multiple TBs. If we were to take the RDM approach, my understanding is that we would separate the array into two partitions:

    1) A VMFS partition to host the guest OS .vmdk's as well as the RDM mapping .vmdk's

    2) A raw partition

    Assuming partition 2 is greater than 2TB, could we not use it as a virtual RDM?

    Thanks :)

  2. @Unknown

    RDMs are dedicated to a single virtual machine, while VMFS LUNs are shared by multiple VMs running on one or more ESXi servers. (Hope that answers the question :-))

    In simpler words, vSphere supports VMDK sizes of up to 2 TB for any guest OS, hence the (virtual-mode) RDM mapping cannot be more than 2 TB either.

    However, you can take a LUN of up to 64 TB, format it as VMFS with vSphere 5.0 or 5.1, and run multiple virtual machines on it.
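
    For example (a minimal sketch from the ESXi shell; the device name and datastore label below are placeholders, and it assumes the partition already exists, e.g. created with partedUtil):

    # Format the first partition of the LUN as a VMFS-5 datastore
    vmkfstools -C vmfs5 -S BigDatastore /vmfs/devices/disks/naa.xxx:1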

    Replies
    1. Sunny,

      Thanks for elaborating! I understand the distinction you are drawing between RDMs and VMFS LUNs, and I have further learned that we can simply take our second (raw) partition and further partition it into 2TB chunks, which we can then use as virtual RDMs.

      That being said, I do have one follow-up question. As is written here (http://searchvmware.techtarget.com/tip/Understanding-the-files-that-make-up-a-VMware-virtual-machine):

      "The metadata in the mapping file includes the location of the mapped device (name resolution) and the locking state of the mapped device. If you do a directory listing you will see that these files will appear to take up the same amount of disk space on the VMFS volume as the actual size of the LUN that it is mapped to, but in reality they just appear that way and their size is very small. One of these files is created for each RDM that is created on a VM."

      If the mapping file only appears large, but is in fact small, why does virtual RDM not support LUNs greater than 2TB, but physical RDM does?

      Thanks for your help :)

  3. That's a great question.

    The answer actually lies in my earlier response. In the case of a physical RDM, we bypass the VMkernel layer for the LUN and mount it directly in the virtual machine's guest OS, hence the size of a physical RDM is limited only by the guest OS's capabilities. The VMkernel does not even know what is happening on that LUN.

    In the case of a virtual RDM, however, the VMkernel sits right between the VM and the LUN. Disk I/O requests pass through the VMkernel layer, and the VMkernel cannot understand a VMDK that is more than 2 TB minus 512 bytes.

    Yes, the .vmdk is a descriptor and the -flat.vmdk is the data file, but the descriptor defines the data file size, which cannot be more than 2 TB if the VMkernel has to read it.
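
    If you ever need to confirm which mode an existing RDM uses, vmkfstools can query the mapping file (the path below is just a placeholder):

    # Reports whether the file is a (passthrough) raw device mapping and which device backs it
    vmkfstools -q /vmfs/volumes/Datastore1/SQL-VM/SQL-VM_rdm.vmdk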

    Hope that helps.

    :-) Sunny

    Replies
    1. That does help, and indeed, makes good sense. Thank you for explaining!

      Interestingly, when I look at our datastore, I don't see any flat-VMDK files (see here: http://img209.imageshack.us/img209/2708/70095568.png). Has the file naming architecture perhaps changed in vSphere 5.1?

    2. The -flat.vmdk files are hidden in the GUI for security reasons. If you SSH to the ESXi server and change to the VM's directory (quote the path if it contains spaces)

      cd "/vmfs/volumes/<datastore name>/<vm folder name>/"

      and run the command

      ls -ltrh

      you will see all the files for the VM. For each .vmdk, you will see a corresponding -flat.vmdk.

      It has been this way since the first release of VMware's VMFS file system.
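
      And if you are curious how much space a mapping file actually consumes (as opposed to the LUN-sized length that ls reports, as quoted earlier in this thread), comparing ls with du should show the difference; the file name is just an example:

      ls -lh SQL-VM_rdm.vmdk   # apparent size - the size of the mapped LUN
      du -h SQL-VM_rdm.vmdk    # actual space consumed on the datastore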
