ESXi 7.x and beyond: SD cards & system storage layout

I have been using VMware ESXi since version 4 shipped in 2009, in both home lab and enterprise-scale installations. The product has evolved over the years, and it has worked mostly reliably, except for a couple of minor glitches in the matrix. Best of all: it is FREE.

If you are running a home lab, resources are limited. There is a good chance that you are running ESXi from a USB flash drive or SD card.

Although it never was, and still is not, recommended by VMware, it works. I have seen a few USB sticks die over the years. When that happened for the very first time, I panicked. Today, I know better.

Simply reinstall ESXi on a new device, boot the server, apply your license, and re-register your VMs. It takes five minutes, and you are back on track.

ESXi 7.x is the current release, and vSphere.next (version 8) will come out soon. There are already a lot of articles about VMware's announcement that USB/SD devices would no longer be supported as boot devices. But VMware has since revised its guidance in KB 85685:

VMware will continue supporting USB/SD card as a boot device through the 8.0 product release, including the update releases. Both installs and upgrades will be supported on USB/SD cards.

I encourage you to read VMware KB 85685 in its entirety. The information is critical to ANY virtualization home lab enthusiast, and to any VMware administrator who is planning how and where to install or upgrade to the VMware ESXi 8.0 hypervisor.

Here is an image that shows the changes in the system storage layout between ESXi 6.x and 7.x. There will be no further changes in ESXi 8.0.

There is one important change, and that is the consolidation of several partitions from the ESXi 6.x layout into the new ESX-OS Data partition in ESXi 7.0. While the partitions in ESXi 6.x were static in size, the ESX-OS Data partition can vary in size depending on the size of the boot device.

If you want to use a USB flash drive or SD card, it should have a minimum size of 32 GB. I recommend using a high- or max-endurance microSD card like the SanDisk MAX ENDURANCE 32 GB microSD card.

I had used such a card since ESXi 7, but I decided to reconfigure my home lab and use a persistent 500 GB SSD as the boot device.

And here is where the dynamic sizing of the ESX-OS Data partition comes into play. The partition can have a maximum size of 138 GB.
On a 32 GB drive, ESX-OS Data will be 25 GB; on drives up to and including 128 GB, it will be 55 GB.

In a home lab there is no need for a 138 GB ESX-OS Data partition. As you can see from this image, the partition size is pre-allocated, but the partition does not contain much data.

The ESXi 7.0 Update 1c release adds the boot option systemMediaSize to customize the space used by system storage during installation and better match the purpose and size of the server (VMware KB 81166).

I first tried to edit boot.cfg as described in the KB article, but for some unknown reason this did not work. I recommend entering the parameter interactively during the installation.

Start the host from the install image, and when the ESXi installer window appears, press Shift+O within 5 seconds to edit the boot options.

In the lower left corner, you will see something like:

runweasel cdromboot

Replace cdromboot with

systemMediaSize=small

for a 55 GB ESX-OS Data partition.
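For reference, after replacing cdromboot, the complete boot-options line looks like this. This is a sketch based on VMware KB 81166; "small" is the value used here, and you should double-check the full list of accepted values and sizes for your release in the KB:

```shell
# Boot options line entered after pressing Shift+O during installation.
# "small" caps ESX-OS Data at 55 GB; "min" (25 GB) is the other value
# mentioned above -- see VMware KB 81166 for all accepted values.
runweasel systemMediaSize=small
```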

After the installation has finished, you can SSH into the ESXi host.

Type ls -ltrh /vmfs/devices/disks/ to get information about all your disks and partitions.

401.8G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:8  // datastore
 55.9G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:7  // ESX-OS Data
  4.0G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:6  // boot-bank 1
  4.0G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:5  // boot-bank 0
100.0M Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB:1  // system boot
465.8G Aug 15 04:37 t10.NVMe__Samsung_SSD_970_EVO_500GB    //  
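If you want to inspect the partition table itself, ESXi also ships with partedUtil. A sketch using the device name from the listing above (substitute the name shown on your own host):

```shell
# Print the GPT partition table of the boot device.
# The device name below is from my host; use the one reported by
# ls -ltrh /vmfs/devices/disks/ on yours.
partedUtil getptbl /vmfs/devices/disks/t10.NVMe__Samsung_SSD_970_EVO_500GB
```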

If your persistent SSD already contains a VMFS datastore, you must unregister the existing VMs and move them off the datastore first. The ESXi installer needs to repartition the device, and this will also delete any existing VMFS datastore.

Although SD cards will still be supported in newer ESXi versions, with options to move the ESX-OS Data partition off the SD card onto a VMFS datastore, you should consider putting the boot partition(s) on a persistent SSD.

The week couldn’t start any worse than coming into work on a Monday and finding out that one or more of those cards had self-destructed.


ESXi 6.7 update: No space left on device

I tried to update my VMware ESXi 6.7 host to ESXi-6.7.0-20190802001-standard (Build 14320388) today. On the host’s SSH console, the command to use is:

esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d  https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml 

Unfortunately, this gave me the following error:

[InstallationError]
  [Errno 28] No space left on device
        vibs = VMware_locker_tools-light_10.3.10.12406962-14141615 
  Please refer to the log file for more details.

/var/log/esxcli.log only contains that exact message! Still, the problem should be fairly obvious: there is no disk space left.
Except, there is. The commonly accepted fix for this problem is to enable using your datastore as swap space:

  • Log on to the web UI
  • Go to Host > Manage > System
  • Select the Swap entry and enable it
  • Pick a datastore of your choice, and enable Host cache and Local swap
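The same settings can also be made from the shell. A sketch using esxcli, where the datastore name datastore1 is a placeholder for one of your own:

```shell
# Allow system swap on a datastore ("datastore1" is a placeholder).
esxcli sched swap system set --datastore-enabled true --datastore-name datastore1
# Enable host cache and local swap for system swap as well.
esxcli sched swap system set --hostcache-enabled true --hostlocalswap-enabled true
```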

I wrote about this here: https://www.eknori.de/2018-03-18/vmware-esxi-errno-28-no-space-left-on-device-ibmchampion/

Unfortunately, in this situation, host swap already was enabled.

There is, though, a workaround. You can use an image profile that does not include the tools VIB, with this command:

esxcli software profile update -p ESXi-6.7.0-20190802001-no-tools -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

You can then manually install the troublesome VIB (if you need VMware Tools) with this command:

esxcli software vib install -v https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.10.12406962-14141615.vib
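Afterwards, you can check that the VIB actually landed on the host, for example with:

```shell
# List installed VIBs and filter for the tools package.
esxcli software vib list | grep tools-light
```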

ESXi 6.7 Update Failed

I just tried to update my ESXi 6.7 host from ESXi-6.7.0-20181002001-standard (Build 10302608) to ESXi-6.7.0-20181104001-standard (Build 10764712).

You can find a list of all available updates and patches here.

The update process is, in general, straightforward. SSH into your ESXi host and execute the following commands.

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.7.0-20181104001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient

(Replace the profile name with the version you want to upgrade to.)
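If you want to see what an update would do before committing to it, esxcli supports a --dry-run flag that reports the changes without applying them:

```shell
# Preview the update; nothing is installed or removed with --dry-run.
esxcli software profile update -p ESXi-6.7.0-20181104001-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  --dry-run
```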

This time, the update failed.

[InstallationError]
Failed to setup upgrade using esx-update VIB: (None, "Failed to mount tardisk /tmp/esx-update-2123405/esxupdt-2123405 in ramdisk esx-update-2123405: [Errno 1] Operation not permitted: '/tardisks.noauto/esxupdt-2123405'")
vibs = ['VMware_bootbank_esx-update_6.7.0-1.31.10764712']
Please refer to the log file for more details.
[root@esxi:~]

Digging into the logs, I found the following clues:

vmkernel.log:
cpu1:2099635)ALERT: VisorFSTar: 1655: Unauthorized attempt to mount a tardisk
cpu1:2099635)VisorFSTar: 2062: Access denied by vmkernel access control policy prevented creating tardisk

esxupdate.log:
esxupdate: 2099635: root: ERROR: File "/build/mts/release/bora-10302608/bora/build/esx/release/vmvisor/sys-boot/lib64/python3.5/shutil.py", line 544, in move
esxupdate: 2099635: root: ERROR: PermissionError: [Errno 1] Operation not permitted: '/tmp/esx-update-2123405 /esxupdt-2123405' -> '/tardisks.noauto/esxupdt-2123405 '

By the way, the esxupdate.log is in HEX format for some reason.

You can either use a HEX-Editor to decode the file, or open it in Visual Studio Code.

You’ll get a warning; just click “Do you want to open it anyway?”.

After some trial and error, I was able to install the updated VIBs using:

[root@esxi:~] esxcli software vib update --depot=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_esx-base_6.7.0-1.31.10764712, VMware_bootbank_esx-ui_1.31.0-10201673, VMware_bootbank_esx-update_6.7.0-1.31.10764712, VMware_bootbank_sata-ahci_3.0-28vmw.600.3.107.10474991, VMware_bootbank_vsan_6.7.0-1.31.10720746, VMware_bootbank_vsanhealth_6.7.0-1.31.10720754
VIBs Removed: VMW_bootbank_sata-ahci_3.0-26vmw.670.0.0.8169922, VMware_bootbank_esx-base_6.7.0-1.28.10302608, VMware_bootbank_esx-ui_1.30.0-9946814, VMware_bootbank_esx-update_6.7.0-1.28.10302608, VMware_bootbank_vsan_6.7.0-1.28.10290435, VMware_bootbank_vsanhealth_6.7.0-1.28.10290721
VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.670.0.0.8169922, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.670.0.0.8169922, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-via_0.3.3-2vmw.670.0.0.8169922, VMW_bootbank_block-cciss_3.6.14-10vmw.670.0.0.8169922, VMW_bootbank_bnxtnet_20.6.101.7-11vmw.670.0.0.8169922, VMW_bootbank_bnxtroce_20.6.101.0-20vmw.670.1.28.10302608, VMW_bootbank_brcmfcoe_11.4.1078.5-11vmw.670.1.28.10302608, VMW_bootbank_char-random_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.670.0.0.8169922, VMW_bootbank_elxiscsi_11.4.1174.0-2vmw.670.0.0.8169922, VMW_bootbank_elxnet_11.4.1095.0-5vmw.670.1.28.10302608, VMW_bootbank_hid-hid_1.0-3vmw.670.0.0.8169922, VMW_bootbank_i40en_1.3.1-22vmw.670.1.28.10302608, VMW_bootbank_iavmd_1.2.0.1011-2vmw.670.0.0.8169922, VMW_bootbank_igbn_0.1.0.0-15vmw.670.0.0.8169922, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-devintf_39.1-5vmw.670.1.28.10302608, VMW_bootbank_ipmi-ipmi-msghandler_39.1-5vmw.670.1.28.10302608, VMW_bootbank_ipmi-ipmi-si-drv_39.1-5vmw.670.1.28.10302608, VMW_bootbank_iser_1.0.0.0-1vmw.670.1.28.10302608, VMW_bootbank_ixgben_1.4.1-16vmw.670.1.28.10302608, VMW_bootbank_lpfc_11.4.33.3-11vmw.670.1.28.10302608, VMW_bootbank_lpnic_11.4.59.0-1vmw.670.0.0.8169922, VMW_bootbank_lsi-mr3_7.702.13.00-5vmw.670.1.28.10302608, VMW_bootbank_lsi-msgpt2_20.00.04.00-5vmw.670.1.28.10302608, VMW_bootbank_lsi-msgpt35_03.00.01.00-12vmw.670.1.28.10302608, VMW_bootbank_lsi-msgpt3_16.00.01.00-3vmw.670.1.28.10302608, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.670.0.0.8169922, VMW_bootbank_misc-drivers_6.7.0-0.0.8169922, 
VMW_bootbank_mtip32xx-native_3.9.8-1vmw.670.1.28.10302608, VMW_bootbank_ne1000_0.8.4-1vmw.670.1.28.10302608, VMW_bootbank_nenic_1.0.21.0-1vmw.670.1.28.10302608, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.670.0.0.8169922, VMW_bootbank_net-bnx2x_1.78.80.v60.12-2vmw.670.0.0.8169922, VMW_bootbank_net-cdc-ether_1.0-3vmw.670.0.0.8169922, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.670.0.0.8169922, VMW_bootbank_net-e1000_8.0.3.1-5vmw.670.0.0.8169922, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.670.0.0.8169922, VMW_bootbank_net-enic_2.1.2.38-2vmw.670.0.0.8169922, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.670.0.0.8169922, VMW_bootbank_net-forcedeth_0.61-2vmw.670.0.0.8169922, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.670.0.0.8169922, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.670.0.0.8169922, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-nx-nic_5.0.621-5vmw.670.0.0.8169922, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922, VMW_bootbank_net-usbnet_1.0-3vmw.670.0.0.8169922, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.0.0.8169922, VMW_bootbank_nfnic_4.0.0.14-0vmw.670.1.28.10302608, VMW_bootbank_nhpsa_2.0.22-3vmw.670.1.28.10302608, VMW_bootbank_nmlx4-core_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-en_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-rdma_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx5-core_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx5-rdma_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_ntg3_4.1.3.2-1vmw.670.1.28.10302608, VMW_bootbank_nvme_1.2.2.17-1vmw.670.1.28.10302608, VMW_bootbank_nvmxnet3-ens_2.0.0.21-1vmw.670.0.0.8169922, VMW_bootbank_nvmxnet3_2.0.0.29-1vmw.670.1.28.10302608, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_pvscsi_0.1-2vmw.670.0.0.8169922, VMW_bootbank_qcnic_1.0.2.0.4-1vmw.670.0.0.8169922, VMW_bootbank_qedentv_2.0.6.4-10vmw.670.1.28.10302608, 
VMW_bootbank_qfle3_1.0.50.11-9vmw.670.0.0.8169922, VMW_bootbank_qfle3f_1.0.25.0.2-14vmw.670.0.0.8169922, VMW_bootbank_qfle3i_1.0.2.3.9-3vmw.670.0.0.8169922, VMW_bootbank_qflge_1.1.0.11-1vmw.670.0.0.8169922, VMW_bootbank_sata-ata-piix_2.12-10vmw.670.0.0.8169922, VMW_bootbank_sata-sata-nv_3.5-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-promise_2.12-3vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil24_1.1-1vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil_2.3-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-svw_2.3-3vmw.670.0.0.8169922, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.670.0.0.8169922, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.670.0.0.8169922, VMW_bootbank_scsi-aic79xx_3.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.670.0.0.8169922, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.670.0.0.8169922, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.670.0.0.8169922, VMW_bootbank_scsi-hpsa_6.0.0.84-3vmw.670.0.0.8169922, VMW_bootbank_scsi-ips_7.12.05-4vmw.670.0.0.8169922, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.670.0.0.8169922, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.670.0.0.8169922, VMW_bootbank_scsi-mpt2sas_19.00.00.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.670.0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-2-0_6.7.0-0.0.8169922, 
VMW_bootbank_shim-vmklinux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-3-0_6.7.0-0.0.8169922, VMW_bootbank_smartpqi_1.0.1.553-12vmw.670.1.28.10302608, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usbcore-usb_1.0-3vmw.670.0.0.8169922, VMW_bootbank_vmkata_0.1-1vmw.670.0.0.8169922, VMW_bootbank_vmkfcoe_1.0.0.1-1vmw.670.1.28.10302608, VMW_bootbank_vmkplexer-vmkplexer_6.7.0-0.0.8169922, VMW_bootbank_vmkusb_0.1-1vmw.670.1.28.10302608, VMW_bootbank_vmw-ahci_1.2.3-1vmw.670.1.28.10302608, VMW_bootbank_xhci-xhci_1.0-3vmw.670.0.0.8169922, VMware_bootbank_cpu-microcode_6.7.0-1.28.10302608, VMware_bootbank_elx-esx-libelxima.so_11.4.1184.0-0.0.8169922, VMware_bootbank_esx-dvfilter-generic-fastpath_6.7.0-0.0.8169922, VMware_bootbank_esx-xserver_6.7.0-0.0.8169922, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-16vmw.670.1.28.10302608, VMware_bootbank_lsu-intel-vmd-plugin_1.0.0-2vmw.670.1.28.10302608, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-13vmw.670.1.28.10302608, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-8vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-9vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-7vmw.670.0.0.8169922, VMware_bootbank_lsu-smartpqi-plugin_1.0.0-3vmw.670.1.28.10302608, VMware_bootbank_native-misc-drivers_6.7.0-0.0.8169922, VMware_bootbank_qlnativefc_3.0.1.0-5vmw.670.0.0.8169922, VMware_bootbank_rste_2.0.2.0088-7vmw.670.0.0.8169922, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.34-1.28.10302608, VMware_locker_tools-light_10.3.2.9925305-10176879
[root@esxi:~]


[HomeLab] – Copy VM from one ESXi host to another

I recently decided that it was time to set up a new home lab. The old server is about 10 years old, and the hardware does not allow any upgrade of CPU or RAM. VMware ESXi was at version 6.5, but I could not upgrade to version 6.7 because the network card was not on the list of supported NICs, so the upgrade failed. Last, but not least, the power consumption was at 200 W.

The new home lab has the following components:

It took less than 30 minutes to assemble the NUC and install VMware ESXi 6.7 (+ 15 minutes to drive to the local hardware store to grab a USB keyboard once I realized that I would need one for the setup).

Today, I migrated the existing VMs from the old host to the new one.

ESXi alone does not include vMotion, and vMotion costs a lot of money.
I had read some articles that claimed to describe best practice. But to be honest, using Veeam or SCP is not what I consider “best” practice. I tried SCP, but it was so slooooow; even a 50 GB VM was estimated to take 11 hours to copy. And I have 30 VMs, ranging from just a couple of MB to 100 GB.

I searched for a better solution, and I finally found it: VMware vCenter Converter Standalone Client.

You simply choose the “source” ESXi instance and select the VM to copy. Next, you select the “target” ESXi host. You can also choose whether the copy should automatically be upgraded to the target VM hardware version.

It took only 30 minutes to copy a 50GB VM. Another 100GB VM was copied in 20 minutes.

The whole migration was done in only 5 hours. Not bad, is it?