I have been using VMware ESXi since version 4 shipped in 2009, in both home lab and enterprise-scale installations. The product has evolved over the years, and it has worked mostly reliably, except for a couple of minor glitches in the matrix. Best of all: it is FREE.
If you are running a home lab, resources are limited, and chances are you are running ESXi from a USB flash drive or SD card.
Although VMware never recommended it, it works. I have seen a few USB sticks die over the years. When it happened for the very first time, I panicked. Today, I know better.
Simply reinstall ESXi on a new device, boot the server, apply your license and register your VMs. It takes 5 minutes, and you are back on track.
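Registering the VMs again can be done in the host client, or from an SSH session; a minimal sketch, assuming a datastore named datastore1 and a VM folder named myvm (both placeholders):
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx   # prints the ID of the newly registered VM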
ESXi 7.x is the current release, and vSphere.next (version 8) will come out soon. There are already a lot of articles around reporting VMware's announcement that USB / SD devices would no longer be supported as boot devices. But VMware revised that guidance in KB 85685:
VMware will continue supporting USB/SD card as a boot device through the 8.0 product release, including the update releases. Both installs and upgrades will be supported on USB/SD cards.
I encourage you to read VMware KB 85685 in its entirety. The information is critical to ANY virtualization home lab enthusiast, and to any VMware administrator who is planning how and where to install or upgrade to the VMware ESXi 8.0 hypervisor.
Here is an image that shows the changes in the system storage layout from ESXi 6.x to 7.x. There will be no changes in ESXi 8.0.
There is one important change: the consolidation of several partitions from the ESXi 6.x layout into the new ESX-OSData partition in ESXi 7.0. While the partitions in ESXi 6.x had static sizes, the ESX-OSData partition can vary in size depending on the size of the boot device.
If you want to use a USB / SD card, it should have a minimum size of 32 GB. I recommend using a high or max endurance microSD card like the SanDisk MAX ENDURANCE 32 GB microSD card.
I had been using one since ESXi 7, but I decided to reconfigure my home lab and use a persistent 500 GB SSD drive as the boot device.
And here is where the dynamic sizing of the ESX-OSData partition comes into play. The partition can have a maximum size of 138 GB. On a 32 GB drive, ESX-OSData will be 25 GB; on drives larger than that, up to 128 GB, it will be 55 GB.
In a home lab there is no need for a 138 GB ESX-OSData partition. As you can see from this image, the partition is pre-allocated in full, but it does not contain much data.
The ESXi 7.0 Update 1c release adds the boot option systemMediaSize to customize the space used by system storage during installation and better match the purpose and size of the server (VMware KB 81166).
I first tried to edit boot.cfg as described in the KB article, but for some unknown reason that did not work. I recommend entering the parameter interactively during the installation.
Start the host with the install image, and when the ESXi installer window appears, press Shift+O within 5 seconds to edit the boot options.
In the lower left corner, you will see something like:
runweasel cdromboot
Replace cdromboot with
systemMediaSize=small
for a 55 GB ESX-OSData partition.
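For reference, KB 81166 describes appending the option to the kernelopt line in the boot.cfg of the install media instead; a minimal sketch of that line (valid values for systemMediaSize are min, small, default, and max):
kernelopt=runweasel systemMediaSize=small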
After the installation has finished, you can SSH into the ESXi host.
Type ls -ltrh /vmfs/devices/disks/ to get information about all your disks and partitions.
401.8G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:8 // datastore
55.9G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:7 // ESX-OSData
4.0G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:6 // boot-bank 1
4.0G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:5 // boot-bank 0
100.0M Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:1 // system boot
465.8G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB // whole device
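To see the partition types as well, partedUtil prints the GPT entries for a device; the device name below is the one from my listing, so replace it with yours:
partedUtil getptbl /vmfs/devices/disks/t10.NVMe__Samsung_SSD_970_EVO_500GB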
If your persistent SSD drive already contains a VMFS datastore, you must unregister the existing VMs and move them off of the datastore first. The ESXi installer needs to repartition the device, and that will also delete an existing VMFS datastore.
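Unregistering can be done in the host client, or via SSH; a minimal sketch (the VM ID 42 is an example taken from the output of the first command):
vim-cmd vmsvc/getallvms      # lists all registered VMs with their IDs
vim-cmd vmsvc/unregister 42  # removes VM 42 from the inventory; the files stay on disk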
Although SD cards will still be supported in newer ESXi versions, with options to move the ESX-OSData partition off the SD card to a VMFS datastore, you should consider putting the boot partition(s) on a persistent SSD drive.
The week couldn’t start worse than coming into work on a Monday and finding out that one or more of them had self-destructed.
[InstallationError]
[Errno 28] No space left on device
vibs = VMware_locker_tools-light_10.3.10.12406962-14141615
Please refer to the log file for more details.
/var/log/esxcli.log only gives that exact message! Still, the problem should be fairly obvious: there is no disk space left. Except that there is. The commonly accepted fix for this problem is to enable using your datastore as swap space (the esxcli equivalent follows the steps below):
Log on to the web UI
Go to Host, Manage, System
Select the Swap entry and enable it
Pick a datastore of your choice, and enable Host cache and Local swap
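The same settings can also be applied from an SSH session; a minimal sketch, assuming a datastore named datastore1 (a placeholder):
esxcli sched swap system set --datastore-enabled true --datastore-name datastore1
esxcli sched swap system set --hostcache-enabled true --hostlocalswap-enabled true
esxcli sched swap system get   # verify the settings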
Then retry the update command (replace the highlighted version with the version you want to upgrade to).
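The command itself is not shown here; a typical online upgrade against the public VMware depot looks like this sketch (the image profile name is just an example — list the available profiles first and pick the one you want):
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml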
This time, the update failed with a different error:
[InstallationError]
Failed to setup upgrade using esx-update VIB: (None, "Failed to mount tardisk /tmp/esx-update-2123405/esxupdt-2123405 in ramdisk esx-update-2123405: [Errno 1] Operation not permitted: '/tardisks.noauto/esxupdt-2123405'")
vibs = ['VMware_bootbank_esx-update_6.7.0-1.31.10764712']
Please refer to the log file for more details.
Digging into the logs, I found the following clues:
vmkernel.log:
cpu1:2099635)ALERT: VisorFSTar: 1655: Unauthorized attempt to mount a tardisk
cpu1:2099635)VisorFSTar: 2062: Access denied by vmkernel access control policy prevented creating tardisk
esxupdate.log:
esxupdate: 2099635: root: ERROR: File "/build/mts/release/bora-10302608/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/shutil.py", line 544, in move
esxupdate: 2099635: root: ERROR: PermissionError: [Errno 1] Operation not permitted: '/tmp/esx-update-2123405/esxupdt-2123405' -> '/tardisks.noauto/esxupdt-2123405'
By the way, the esxupdate.log is in HEX format for some reason.
I recently decided that it was time to set up a new home lab. The old server is about 10 years old, and the hardware does not allow any upgrade of CPU or RAM. VMware ESXi was at version 6.5, but I could not upgrade to version 6.7 because the network card was not on the list of supported NICs, so the upgrade failed. Last but not least, the power consumption was around 200 W.
It took less than 30 minutes to assemble the NUC and install VMware ESXi 6.7 (plus 15 minutes to drive to the local hardware store to grab a USB keyboard once I realized that I would need one for the setup).
Today, I migrated the existing VMs from the old host to the new one.
The free ESXi does not include vMotion, and vMotion licensing costs a lot of money. I had read some articles that claimed to describe best practice, but to be honest, using Veeam or SCP is not what I consider “best” practice. I tried SCP, but it was slooooow: a 50 GB VM was estimated to take 11 hours to copy, and I have 30 VMs ranging from just a couple of MB to 100 GB.
You simply choose the “source” ESXi instance and select the VM to copy. Next, you select the “target” ESXi. You can also choose whether the copy will automatically be upgraded to the target VM hardware version.
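The tool is not named in this excerpt; for comparison, VMware's ovftool can do a similar host-to-host copy from the command line (host names, credentials, and the VM and datastore names are placeholders):
ovftool --datastore=datastore1 vi://root@old-esxi.lab.local/myvm vi://root@new-esxi.lab.local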
It took only 30 minutes to copy a 50GB VM. Another 100GB VM was copied in 20 minutes.
The whole migration was done in only 5 hours. Not bad, is it?