Picked up an effydesk.ca business frame (black, frame only) and paired it with an IKEA GERTON tabletop that has the following dimensions:
My main reason for choosing the business edition over the home edition is a few advantages: the height-adjustable frame can handle more weight, has better high/low height settings, reinforced matte-painted square columns, and wider feet. The motors are also quieter and move faster.
I also like to support local businesses in Vancouver, BC, and these guys were rock solid: near-instant responses over email, and they met me at the warehouse so I could pick up the desk instead of having it shipped.
If you’re running the free version of ESXi, then ghettoVCB is a good, less resource-heavy, and free alternative – perfect for a home lab!
SSH access to your ESXi host.
A datastore, preferably a remote NFS or iSCSI one, where backups can be stored.
Download the ghettoVCB VIB, then use your tool of choice (e.g. the vSphere Client or FileZilla over SFTP) to transfer the VIB to your ESXi host.
Connect to your ESXi host over SSH and enable the installation of “community” packages using the command esxcli software acceptance set --level=CommunitySupported . By default, ESXi will only allow the installation of official packages, so unless the acceptance level is set to Community Supported, installation of ghettoVCB will fail.
Install the ghettoVCB VIB using the command esxcli software vib install -v /path/to/vghetto-ghettoVCB.vib -f . If you used the vSphere Client to upload it to a datastore, you can find it at /vmfs/volumes/DATASTORENAME/vghetto-ghettoVCB.vib.
You can verify that the installation was successful by navigating to /opt/ghettovcb/bin . If successful, there should be two files in here – the ghettoVCB-restore.sh and ghettoVCB.sh scripts.
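Taken together, the installation steps above can be run from an SSH session on the ESXi host like this (the datastore name below is a placeholder – substitute your own):

```shell
# Allow installation of community packages (required for ghettoVCB)
esxcli software acceptance set --level=CommunitySupported

# Install the VIB; adjust the path to wherever you uploaded it
esxcli software vib install -v /vmfs/volumes/DATASTORENAME/vghetto-ghettoVCB.vib -f

# Verify the scripts are in place
ls /opt/ghettovcb/bin
```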
Using the text editor of your choice (e.g. vi ), edit /opt/ghettovcb/ghettoVCB.conf .
Adjust the configuration values as desired. Important values are:
VM_BACKUP_VOLUME – The path to your datastore where backups will be stored. For example, if your datastore was called “ISCSI-DATASTORE” and you wanted to store backups inside a folder called “Backups”, you would set this to /vmfs/volumes/ISCSI-DATASTORE/Backups .
VM_BACKUP_ROTATION_COUNT – The number of backups you wish to retain for each VM. Once more than this number of backups exist, the oldest one will be deleted automatically.
EMAIL_SERVER and EMAIL_SERVER_PORT – The address and port of the email server used to send backup reports. ghettoVCB uses netcat to speak the SMTP protocol directly to your email server, and as a result authentication and SSL/TLS are not supported! If you don’t want email reports on backups, delete these configuration lines.
EMAIL_TO – The email address to send backup reports to. If you don’t want email reports on backups, delete this configuration line.
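Putting those values together, a minimal ghettoVCB.conf covering just the settings above might look like this (the datastore name, folder, mail server, and addresses are placeholders, not defaults):

```shell
# Where backups are written (datastore and folder names are examples)
VM_BACKUP_VOLUME=/vmfs/volumes/ISCSI-DATASTORE/Backups

# Keep the three most recent backups per VM; older ones are deleted
VM_BACKUP_ROTATION_COUNT=3

# SMTP server for backup reports (no authentication or SSL/TLS support;
# delete these lines if you don't want email reports)
EMAIL_SERVER=mail.example.com
EMAIL_SERVER_PORT=25
EMAIL_TO=admin@example.com
```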
Save and quit.
Now that ghettoVCB is installed and configured, we can run a backup. First, try backing up a single VM using the command /opt/ghettovcb/bin/ghettoVCB.sh -g /opt/ghettovcb/ghettoVCB.conf -m VMNAME , replacing VMNAME with the name of your VM. ghettoVCB will then start backing up this VM to your configured datastore directory by creating a snapshot and then cloning the VM’s disks. Once complete, ghettoVCB will remove the snapshot and return you to the console.
Review the output of ghettoVCB for any errors. If the backup was successful, you should see the line ###### Final status: All VMs backed up OK! ###### in the output. Now, try backing up all of your VMs using the command /opt/ghettovcb/bin/ghettoVCB.sh -g /opt/ghettovcb/ghettoVCB.conf -a . Again, monitor the output of ghettoVCB and make sure that the backup completes successfully.
Running backups on a manual schedule is no fun though, so instead it’s possible to run ghettoVCB automatically using a cron job.
Using the text editor of your choice (e.g. vi ), edit /var/spool/cron/crontabs/root .
Add a new line containing 0 1 * * * /opt/ghettovcb/bin/ghettoVCB.sh -g /opt/ghettovcb/ghettoVCB.conf -l /tmp/ghettoVCB.log -a > /dev/null .
Save and quit – as this file is typically read-only you may need to force your editor to save (in vi , use the command :wq! ).
What does this new line do?
At 1am every day, executes the backup command. You can change the time that the backup command is executed by changing the cron expression from 0 1 * * * . If you’re unfamiliar with cron syntax, you can use a tool such as Cron Maker to generate a suitable expression.
Uses the configuration file /opt/ghettovcb/ghettoVCB.conf to adjust how the backup runs.
Outputs a log of the backup process to /tmp/ghettoVCB.log , which can be viewed later.
Includes all VMs on the host as a backup target. If you only want to back up a specific VM, you can change -a to -m VMNAME . To back up a list of VMs, change -a to -f /opt/ghettoVCB/vmlist and edit /opt/ghettoVCB/vmlist so that it contains a list of the VM names you wish to back up, one per line.
Redirects the output of the script to /dev/null , as some issues have been reported when running ghettoVCB from a cron job without the output redirected.
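For example, to back up only a specific list of VMs nightly, the vmlist file and the crontab entry might look like this (the VM names are placeholders for your own):

```shell
# Contents of /opt/ghettoVCB/vmlist – one VM name per line:
#   web-server-01
#   db-server-01

# Entry in /var/spool/cron/crontabs/root – runs at 1am daily against the list:
0 1 * * * /opt/ghettovcb/bin/ghettoVCB.sh -g /opt/ghettovcb/ghettoVCB.conf -l /tmp/ghettoVCB.log -f /opt/ghettoVCB/vmlist > /dev/null
```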
Extract your copy of Windows XP to a local drive, then use DriverPacks BASE to slipstream (inject) the DriverPacks you require. Use nLite to create an ISO from the directory. Finally, use WinToFlash to burn the image to USB.
Do this in the sequence above, accept the defaults in each GUI, and it should just work. Good sources for help and discussion are here and here.
I tried Rufus and it did not work; WinToFlash does some additional magic to make the USB drive bootable on very picky motherboards.
As the block size and number of threads increase, so does the performance, up to a certain point. The hardware is capable of high performance without a lot of tweaking.
Technical information related to setting up FC-NVMe instead of FC (SCSI) seems to be non-existent for the GEN3 and/or AIX at the time of writing.
DRAID6 seems like a good alternative to traditional RAID10, even for critical workloads. If you are concerned about disk failures, configure it with two rebuild areas, which allows you to lose two disks.
If you are upgrading from a RAID10 spinning disk (15K) array to NVMe SSD array, strongly consider DRAID6 as an alternative.
It was not possible to assign a standby hot spare in a DRAID6 configuration; you can still mark an SSD as a spare, but the software will not automatically consume it if a disk fails.
No issues whatsoever when failing one of the active/active storage nodes and monitoring the events inside AIX; everything worked as expected in a redundant-path host and storage configuration.
Traditional RAID 10 Benchmarks (two mdisk devices, total of 16 drives + 1 spare)