The Airthings integration for Home Assistant doesn't expose the status of the LED ring on my Airthings Wave Plus. It does fetch the data I need, though, and is required for this template.

Luckily, Airthings has published its thresholds, making it trivial to read the data from the existing entities and build a new entity for the LED ring.

The template is pretty self-explanatory. Edit the entity names to match your installation.

A custom card could be created to change the color of the ring icon, but I haven't bothered.

Download the template code here

Module 'CPUID' power on failed.

Even though the number of configured vCPUs in the VM hadn't been changed and wasn't excessive, the VM wouldn't hot-migrate across clusters and failed with an EVC error.

After powering off the VM and migrating to another cluster, which was on the same EVC level, powering on the VM resulted in the error "Module 'CPUID' power on failed."

Inspecting the CPU Identification Mask settings (Edit Settings -> CPU -> CPUID Mask -> Advanced) and resetting them to default did not resolve the issue.

I assumed the VMX file would hold some clues, and indeed found multiple cpuid-related mask lines:

[aners@derp:/vmfs/volumes/vsan:5..2] grep -i cpu *.vmx

...
sched.cpu.units = "mhz"
cpuid.80000001.edx = "---- ---- ---0 ---- ---- ---- ---- ----"
cpuid.80000001.eax.amd = "---- ---- ---- ---- ---- ---- ---- ----"
cpuid.80000001.ebx.amd = "---- ---- ---- ---- ---- ---- ---- ----"
cpuid.80000001.ecx.amd = "---- ---- ---- ---- ---- ---- ---- ----"
cpuid.80000001.edx.amd = "---- ---- ---0 ---- ---- ---- ---- ----"
sched.cpu.latencySensitivity = "normal"
...

Removing all the cpuid.* lines from the VMX resolved the issue; the VM was now able to power on.
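
For reference, the cleanup can be done from the ESXi shell; a minimal sketch, assuming the VM is powered off, and where the VMX filename and the Vmid are placeholders for your own (the Vmid comes from vim-cmd vmsvc/getallvms):

# Back up the VMX, then drop every line starting with "cpuid."
cp myvm.vmx myvm.vmx.bak
sed '/^cpuid\./d' myvm.vmx.bak > myvm.vmx

# Make the host re-read the edited VMX before powering on (72 = placeholder Vmid)
vim-cmd vmsvc/reload 72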

vSphere Remove snapshot task 0%, stuck?

When removing large snapshots, the task status should progress towards 100%; sometimes, however, it drops to 0% in the web UI and leaves the user clueless.

Refreshing the web UI doesn't bring the current progress back. Luckily, the shell on the host can be used to retrieve it:

Chaining a few commands will get the progress of the "Snapshot.remove" task:

1) Get a list of all VMs and filter by the name of your VM: vim-cmd vmsvc/getallvms|grep -i garg|awk '{print $1}'

[root@virt58:/vmfs/volumes/vsan:5...2/4...9] vim-cmd vmsvc/getallvms|grep -i garg|awk '{print $1}'
72

The Vmid is returned; in this example, 72.

2) Verify the Vmid is in fact the VM you're interested in by fetching its name: vim-cmd vmsvc/get.summary 72|grep name

[root@virt58:/vmfs/volumes/vsan:5...2/4...9] vim-cmd vmsvc/get.summary 72|grep name
name = "Gargoil",

3) Having verified the Vmid, get the running tasks: vim-cmd vmsvc/get.tasklist 72

(ManagedObjectReference) [
   'vim.Task:haTask-72-vim.vm.Snapshot.remove-138283664'
]

4) Copy the vim.Task identifier and get its task_info, filtering for "state" and "progress" (note the -E; plain grep treats the pipe literally): vim-cmd vimsvc/task_info haTask-72-vim.vm.Snapshot.remove-138283664|grep -E "state|progress"

   state = "running",
   progress = 86,

What the web UI failed to display is that the "Snapshot.remove" task is running and 86% complete. I guess this is why the CLI is usually my favourite go-to.

For more verbose output, remove the pipe to grep.
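
The four steps can also be chained into one go; a rough sketch, assuming the VM name is unique, exactly one task is running, and the busybox grep/awk shipped with ESXi:

# Resolve the Vmid by name, extract the task identifier, print state and progress
VMID=$(vim-cmd vmsvc/getallvms | grep -i garg | awk '{print $1}')
TASK=$(vim-cmd vmsvc/get.tasklist "$VMID" | grep -o "haTask[^']*")
vim-cmd vimsvc/task_info "$TASK" | grep -E "state|progress"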

When "Placement and Availability status" is "Unknown" for storage objects in vSAN, it can be as simple as an ISO mounted from another cluster. If so, simply unmount the ISO and return to the overview.

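If unmounting the ISO doesn't clear the status, the host shell can give a second opinion on object health; a sketch, assuming a reasonably recent ESXi build where the vsan debug namespace is available:

# Summarise the health of all vSAN objects, as seen from this host
esxcli vsan debug object health summary get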

Consider the following scenario:

WireGuard (the daemon) is listening on *:123/udp.

That's not always a great way out of a hotel network, since NTP is usually rate-limited - but sometimes it's exactly the way out. Things change.

Instead of settling on a single service port for WireGuard, having it transparently serve on multiple ports seems like a good solution, and it doesn't require running multiple interfaces or services.

In the following example, iptables will take requests coming in on port 8443/udp and redirect them to where WireGuard is actually listening: 123/udp.

iptables -t nat -I PREROUTING -i ens160 -d 10.87.132.254/32 -p udp -m multiport --dports 8443  -j REDIRECT --to-ports 123

Now connecting to :8443/udp (and still to 123/udp, obviously) will reach WireGuard; the translation happens internally.

As always, change the arguments to fit your environment.
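
Since the rule already uses the multiport match, it scales to several ports in a single line; a sketch reusing the interface and address from above (ens160, 10.87.132.254), with the extra ports picked arbitrarily for illustration:

# Redirect a handful of outside-facing UDP ports to WireGuard's real port
iptables -t nat -I PREROUTING -i ens160 -d 10.87.132.254/32 -p udp -m multiport --dports 53,443,8443 -j REDIRECT --to-ports 123

# Verify the rule and watch its packet counters
iptables -t nat -L PREROUTING -n -v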