Personally I’m a big fan of using NFS (over 10GbE, of course) storage with vSphere; I believe the performance gain that FC solutions offer is not worth the burden of dealing with block storage.
In the real world, however, I work a lot with FC and iSCSI storage, so I’d like to share my experience on how to recover a damaged VMFS5 (so vSphere 5.0 and above) partition using the partedUtil tool.
There is an official KB on the matter, which you can find here, and a few really good blog posts as well, but I will try to put the “academic” knowledge from the KB into some real-life context.
So let’s say that (to simplify) you have a 3-node vSphere DRS/HA cluster and you need to perform some maintenance (hypervisor patching or a hardware break-fix) that requires one host to be rebooted. You of course put the host into “maintenance mode”, do your job, then reboot the box. Once it is back online, you notice that one of your VMFS5 datastores is missing from the host’s datastore list (actually, the first time it happened to me I didn’t notice it at all; I just started to wonder why on earth virtual machines were not migrating back to the repaired host after it exited “maintenance mode”).
If you check the two remaining hosts, everything works fine there: the datastore is perfectly accessible, VMs are running fine, no issues whatsoever. When you dig a little deeper into the problematic host’s configuration, you notice that the block devices (storage adapters -> HBA -> devices) that hold the datastore are visible, but when you try to “add datastore” with the wizard (no matter whether the C# or Web Client) it says the partition is empty and suggests you format it (which of course you don’t want to do; you’ve got VMs running there, right?).
It turns out that the above are the most common symptoms of VMFS5 partition table corruption: you know the FC fabric is working fine because you didn’t change anything there; moreover, you can see the block devices, and the actual VMs located on this datastore keep running on the hosts you didn’t touch. It is only the recently rebooted host that gives you trouble. This is because the VMFS5 partition table is only read upon datastore mount (so after a reboot or HBA rescan); from that moment on, vSphere hosts operate on in-memory data structures (which means that VMFS5 partition corruption might go unnoticed for a long time, making the root cause difficult to find).
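A quick way to confirm this asymmetry from the shell is to compare the mounted filesystems on a healthy host and on the affected one; on the broken host the volume will simply be missing from the output:
# esxcli storage filesystem list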
One way to deal with this is to create a completely new datastore and storage-vMotion the machines off the corrupted one before things get worse (and they will get worse), but if you don’t have enough free storage at hand, or simply want a quicker solution, partedUtil is the tool that might save the day (and it will not make things any worse than they already are).
partedUtil is VMware’s program for manipulating GPT-style partitions. It operates on devices directly, so before using it we need to find the device name for the affected datastore.
You can of course find it with the GUI, but as I pretend to be a real pr0, here is the esxcli command (which, of course, you need to run on a host that is not affected):
esxcli storage vmfs extent list
# esxcli storage vmfs extent list
Volume Name      VMFS UUID                            Extent Number  Device Name                           Partition
---------------  -----------------------------------  -------------  ------------------------------------  ---------
datastore01      52907843-3696d870-24d6-d8d385be26a8              0  eui.001738000a0b0178                          1
datastore02      528f7fb5-797e00fe-ce22-d8d385be26a8              0  eui.001738000a0b01e4                          1
datastore03      52cec501-b14f43c3-966e-6c3be5b7d600              0  eui.00173800ee97011c                          1
local-datastore  518ce7c0-c7907627-562e-d8d385be26a8              0  naa.600508b1001030374243394337301000          3
Let’s assume datastore01 is affected, so the device name we are looking for is eui.001738000a0b0178. Having that, we ssh to the affected host and go directly to the /vmfs/devices/disks/ directory.
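Before changing anything, it is worth confirming that the device node is really there. On a healthy host you would typically also see a eui.001738000a0b0178:1 entry representing the first partition; that entry is exactly what disappears when the partition table is damaged:
# cd /vmfs/devices/disks
# ls -lh eui.001738000a0b0178*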
The first thing to check is of course the status of the GPT partition table.
partedUtil getptbl
# partedUtil getptbl /vmfs/devices/disks/eui.001738000a0b0178
Error: The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Fix primary table ? diskPath (/dev/disks/eui.001738000a0b0178) diskSize (4294967295)
AlternateLBA (1) LastUsableLBA (4294967261)
gpt
267349 255 63 4294967295
The error message is quite encouraging; it actually suggests that something can be fixed!
partedUtil indeed has a fix option (and even an “interactive” fixGpt one!), which would look something like this (respectively):
partedUtil fix /vmfs/devices/disks/eui.001738000a0b0178
or (interactively)
partedUtil fixGpt /vmfs/devices/disks/eui.001738000a0b0178
You should try them first, but honestly, none of these options has ever worked for me.
Luckily, even with a corrupted partition table, we’ve got enough information to re-create the VMFS5 partition in place.
Please note that I am assuming you are using the whole block device (LUN) for a single VMFS5 partition; things get a bit more complicated if you created more than one VMFS5 partition on a single LUN (but why would you do that? Moreover, from my experiments I’ve learned that only the first VMFS5 partition is visible in such configurations).
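For illustration only, a hypothetical LUN carved into two VMFS5 partitions would be reported by getptbl along these lines (the sector numbers below are made up), and you would need to know the start/end sectors of both partitions to re-create them:
# partedUtil getptbl /vmfs/devices/disks/eui.001738000a0b0178
gpt
267349 255 63 4294967295
1 2048 2147483647 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
2 2147485696 4294961684 AA31E02A400F11DB9590000C2911D1B8 vmfs 0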
The information we have is:
gpt – the partition table type
267349 – the number of “disk cylinders” as presented by the storage array (let’s call it C)
255 – the number of “disk heads” (let’s call it H)
63 – the number of “disk sectors per track” (let’s call it S)
4294967295 – the total number of (512-byte) sectors available on the LUN
Now, using a formula that has been known since the MS-DOS age, we can determine the last sector of a partition that should occupy the whole LUN:
(C*H*S)-1 = (267349*255*63)-1 = 4294961684
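If you don’t trust your mental arithmetic, the ESXi (busybox) shell can do the multiplication for you:
# echo $((267349*255*63-1))
4294961684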
Of course we all know our partition number will be 1 (and there can be only one!), the default offset for a newly created VMFS5 partition is 2048, and the cryptic “partition GUID” for VMFS5 is AA31E02A400F11DB9590000C2911D1B8.
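By the way, you don’t have to memorize that GUID: partedUtil can print the partition type GUIDs it knows about (output trimmed here to the entry we care about):
# partedUtil showGuids
 Partition Type       GUID
 vmfs                 AA31E02A400F11DB9590000C2911D1B8
 ...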
We can now feed this information into partedUtil (and create a new partition table that we hope will contain the existing VMFS5 filesystem):
partedUtil setptbl
# partedUtil setptbl /vmfs/devices/disks/eui.001738000a0b0178 gpt "1 2048 4294961684 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
267349 255 63 4294967295
1 2048 4294961684 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
# vmkfstools -V
The trailing zero provided with the setptbl option means that the partition we are creating is not bootable.
If successful, partedUtil returns information about the partition “geometry”, similar to what we saw with the getptbl option; then vmkfstools -V should do the trick of rescanning for and mounting the datastore that we lost…
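Note that vmkfstools -V itself prints nothing; to verify that the datastore is really back, list the mounted filesystems again (datastore01 being our example volume):
# vmkfstools -V
# esxcli storage filesystem list | grep datastore01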
But…
Depending on many factors (like your storage backend, but also how the partition was originally created; don’t ask me about the details), the partition’s last sector calculated with the C*H*S formula might be incorrect, and subsequently the VMFS5 filesystem we are looking for is not completely contained within the partition we’ve just created (and of course it will not mount).
In that case you might try another approach: retrieve the number of usable sectors on the LUN, and set the partition to that (maximum) size.
The sequence of commands would look like this:
partedUtil getUsableSectors
# partedUtil getUsableSectors /vmfs/devices/disks/eui.001738000a0b0178
34 4294967261
# partedUtil setptbl /vmfs/devices/disks/eui.001738000a0b0178 gpt "1 2048 4294967261 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
267349 255 63 4294967295
1 2048 4294967261 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
# vmkfstools -V
getUsableSectors (please note that both partedUtil and all of its options are case-sensitive) returns two numbers, explained below:
34 – the first sector available for partitions on the given LUN (the earlier sectors are reserved for the protective MBR and the GPT itself; we don’t really care, as long as this number is smaller than the default VMFS5 offset of 2048).
4294967261 – the last sector available for partitions (typically higher than the one we get from the C*H*S formula).
Then we just feed the latter number to the partedUtil setptbl command and… that’s it!
Yeah, OK, the original KB 2046610 mentions a “third possibility”, when your datastore still doesn’t mount and you can find entries in the host’s vmkernel.log file saying that the affected device has expanded; there is also a formula to calculate the end of the VMFS5 partition for that case, but I have never had to use it so far – I was always able to mount the troublesome datastore at most on the second attempt, by creating the partition using the “usable sectors” number.
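The exact wording of those vmkernel.log entries varies between ESXi builds, so if you suspect this case, just grep the log broadly for the affected device name:
# grep eui.001738000a0b0178 /var/log/vmkernel.log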
What is important to remember: manipulating the GPT partition table with partedUtil should not affect other hosts in the cluster (unless you reboot them or rescan their HBAs), and let’s face it – you already have a corrupted datastore, so partedUtil will not break it any more. If none of the solutions described here works for you, you can still attach a brand-new datastore to your cluster and evacuate (storage vMotion) your VMs from the corrupted one.
I hope you find this post useful – any comments are as always welcome!