VMware ESXi 4.0 Update 1 Installation On A Big Array
I ran into a problem figuring out how to set up a large, multi-terabyte disk array so that it was usable by VMware ESXi.
Specifically, I'm setting up a server with 16GB of memory and a 4-core 3GHz processor to run VMware ESXi 4.0 (Update 1). This diskless Dell rack server is hooked up to an external Dell MD1200 array packed with 7.2TB of storage. I wanted to make the entire 7.2TB (less after accounting for RAID5) available to my virtual machines.
The MD1200 has the option to be split into two drive arrays accessed by two separate systems. For this configuration, I'm using just the one server, so I set up redundant paths (two cables) from the server to the first controller on the array.
A few things you need to know about installing ESXi:
- It doesn't like LUNs larger than 2TB. In fact, if you create a single RAID5 volume on a 7.2TB unit like mine, it will take the first 2TB and ignore the rest!
- The installer is aggressive about acquiring and using drives. It will happily grab every drive it sees and allocate each one as a separate VMFS datastore. If you have a large array that you want to set up your own way (and it isn't your only drive, as in my case), don't even connect that array to your system when you install!
- The largest file (and thus virtual machine disk) you can have on your VMFS file system with the default 1MB block size is only 256GB. Don't worry, though; you can override this manually later.
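That 256GB ceiling comes from the VMFS3 block size chosen at format time: the maximum file size is roughly the block size multiplied by 256K file blocks. The loop below just prints that arithmetic; the `vmkfstools` invocation in the comment is the sort of command you would run on the ESXi host to format with bigger blocks (the device path is a placeholder, not a real LUN):

```shell
#!/bin/sh
# VMFS3 max file size scales with block size (block size x 256K blocks).
# To format a datastore with 8MB blocks you'd run something like
# (placeholder device path -- substitute your own LUN's partition):
#   vmkfstools -C vmfs3 -b 8m /vmfs/devices/disks/naa.XXXXXXXX:1
for bs_mb in 1 2 4 8; do
  echo "block size ${bs_mb}MB -> max file size $((bs_mb * 256))GB"
done
```

So the default 1MB block gives 256GB files, while 8MB blocks take you to the 2TB ceiling.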
My problem was that I did what seemed logical: I allocated my entire RAID5 array in the controller (except for a single hot spare I reserved) and ended up with a nice 5TB array.
That didn't work, though, because ESXi only accepts VMFS volumes up to 2TB minus 512 bytes.
I kept thinking of my array as one big accessible disk, but because of SCSI specification limits, VMware requires VMFS3 LUNs to stay under that size.
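As I understand it, the limit falls out of the SCSI READ CAPACITY(10) command, which reports the last block address in a 32-bit field with 512-byte sectors (the all-ones value is reserved to signal "ask again with the 16-byte command"). A quick back-of-the-envelope check, not anything from VMware's code:

```shell
#!/bin/sh
# Largest LUN addressable via READ CAPACITY(10): (2^32 - 1) sectors
# of 512 bytes each, i.e. 2TiB minus 512 bytes.
awk 'BEGIN {
  bytes = (2^32 - 1) * 512
  printf "max LUN = %.0f bytes (2TiB minus 512 bytes)\n", bytes
}'
```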
I was a little irritated as I thought about how I was going to have to work around this. However, as I was reading through the manual (always a good idea to do AFTER the installation :-) I figured out I just needed to use extents.
Extents allow you to join separate disks (real or virtual) into a single volume that is then accessible to your virtual machines.
So, in my case, I just wiped the configuration off my array controller and started over.
First, I allocated two drives as array-wide hot spares.
Next, I created a RAID1 volume using two drives (2 × 600GB mirrored = 600GB usable) for the primary virtual disk. This gives ESXi a place to live and gives me the start of my new volume.
The remaining eight drives were split into two RAID5 volumes of four 600GB drives each, which works out to about 1.6TB per volume. That keeps me below the nasty 2TB-minus-512-bytes limit and gives me two big virtual volumes.
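Those numbers line up once you account for decimal versus binary units; a quick sanity check of the layout above:

```shell
#!/bin/sh
# "600GB" drives are 600 x 10^9 bytes, but datastores report binary
# units, so each drive shows up as roughly 558GiB. RAID1 of two drives
# gives one drive's capacity; RAID5 of four gives three drives' worth.
awk 'BEGIN {
  gib = 600e9 / 2^30                       # one drive in GiB (~558.8)
  printf "RAID1 of 2 drives: %.0f GiB usable\n", gib
  printf "RAID5 of 4 drives: %.2f TiB usable\n", 3 * gib / 1024
}'
```

That matches the ~553GB and ~1.64TB figures the datastores actually report (the small remaining gap is partition and VMFS overhead).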
I installed ESXi onto the 600GB RAID1 volume, and the installer automatically created 1.6TB datastores on the two RAID5 volumes.
This isn't what I wanted, so once I had the remote management utilities working, I deleted those datastores. (I didn't have any data on them yet, so that was fine. Don't just "try" this unless you are sure of what you are doing.)
I could now see 2 empty volumes.
I went into the storage configuration screen for the host and opened the 600GB datastore. In the small pane on the left you can right-click and add an extent. I did this, pointing it at the first empty 1.6TB volume, and added the entire volume.
I repeated that for the second 1.6TB volume.
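If you prefer the command line, the same spanning operation should be possible from the ESXi Tech Support shell with `vmkfstools -Z` (span partition first, then the head partition of the existing VMFS volume). The device names below are placeholders for your actual NAA IDs, and the commands are only echoed here rather than executed:

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# Remove the wrapper to run for real on the ESXi host.
run() { echo "+ $*"; }

HEAD=/vmfs/devices/disks/naa.HEAD:1    # partition holding the 600GB VMFS head (placeholder)
SPAN1=/vmfs/devices/disks/naa.AAAA:1   # first 1.6TB RAID5 LUN (placeholder)
SPAN2=/vmfs/devices/disks/naa.BBBB:1   # second 1.6TB RAID5 LUN (placeholder)

# Extend the head volume onto each RAID5 LUN in turn:
run vmkfstools -Z "$SPAN1" "$HEAD"
run vmkfstools -Z "$SPAN2" "$HEAD"
```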
I was then happily looking at a total capacity of almost 4TB of storage: 553GB + 1.64TB + 1.64TB, plus two hot spares.
I now have a nice, tidy, very large array ready to be filled with buzzing little virtual machines.
Did this help you? You can help me back by linking to this page, purchasing from my sponsors, or posting a comment!
Follow me on twitter: http://twitter.com/mojocode