First, use the storcli binary to identify failed drives on each controller (admittedly, the chained grep calls could be replaced with a single regex):

./storcli /cALL/eALL/sALL show all|grep Failure|grep -vi predict

Example output:

Status = Failure
/c0/e1/s5  Failure    46 -
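The two chained grep calls mentioned above can be collapsed into a single awk filter; a sketch, with the same storcli path and output format as in the example:

```shell
# Keep lines containing "Failure" but drop predictive-failure lines,
# replacing `grep Failure | grep -vi predict` with one awk expression:
./storcli /cALL/eALL/sALL show all | awk '/Failure/ && tolower($0) !~ /predict/'
```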

Start locating the failed drive:

./storcli /c0/e1/s5 start locate

  Example output:

CLI Version = 007.1017.0000.0000 May 10, 2019
Operating system = VMkernel 6.7.0
Controller = 0
Status = Success
Description = Start Drive Locate Succeeded.

Stop locating the failed drive:

./storcli /c0/e1/s5 stop locate

Example output:

CLI Version = 007.1017.0000.0000 May 10, 2019
Operating system = VMkernel 6.7.0
Controller = 0
Status = Success
Description = Stop Drive Locate Succeeded.

To stop locate for all controllers, run the following command:

./storcli /cALL set activityforlocate=off

Run a DHCP-server in macOS Monterey (12.4)


1) Go to Network Preferences

2) Configure IP-address on the wired connection: 10.39.105.2/24

3) Create the server configuration file: run sudo nano /etc/bootpd.plist and add:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
    <dict>
      <key>bootp_enabled</key>
      <false/>
      <key>detect_other_dhcp_server</key>
      <integer>1</integer>
      <key>dhcp_enabled</key>
      <array>
        <string>en0</string>
      </array>
      <key>reply_threshold_seconds</key>
      <integer>0</integer>
      <key>Subnets</key>
      <array>
        <dict>
          <key>allocate</key>
          <true/>
          <key>dhcp_router</key>
          <string>10.39.105.1</string>
          <key>lease_max</key>
          <integer>86400</integer>
          <key>lease_min</key>
          <integer>86400</integer>
          <key>name</key>
          <string>10.39.105</string>
          <key>net_address</key>
          <string>10.39.105.0</string>
          <key>net_mask</key>
          <string>255.255.255.0</string>
          <key>net_range</key>
          <array>
            <string>10.39.105.100</string>
            <string>10.39.105.200</string>
          </array>
        </dict>
      </array>
    </dict>
</plist>

Save the file by pressing Ctrl+X, then y, then Enter
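Before loading the daemon, the file can be sanity-checked. A minimal sketch, assuming python3 is available (its plistlib module reads Apple property lists); plutil -lint /etc/bootpd.plist is the built-in alternative:

```shell
# Parse /etc/bootpd.plist and print the interface list and lease range;
# a parse error here means bootpd would reject the file too.
python3 - /etc/bootpd.plist <<'EOF'
import plistlib, sys

with open(sys.argv[1], 'rb') as f:
    cfg = plistlib.load(f)

print(cfg['dhcp_enabled'])             # interfaces DHCP is enabled on
print(cfg['Subnets'][0]['net_range'])  # first/last address of the pool
EOF
```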

4) Start the DHCP-daemon: sudo /bin/launchctl load -w /System/Library/LaunchDaemons/bootps.plist  

5) Stop the DHCP-daemon: sudo /bin/launchctl unload -w /System/Library/LaunchDaemons/bootps.plist

Use ovftool to deploy the image directly to the ESXi-host instead:

ovftool -dm=thick -ds=<DATASTORENAME> -n=<VMNAME> --net:"VM Network"="<VMNETWORKNAME>" "junos-media-vsrx-x86-64-vmdisk-18.2R1.9.scsi.ova" vi://root@esxi-host.tld

Replace the placeholders (<DATASTORENAME>, <VMNAME>, <VMNETWORKNAME>) with your own settings

Enable IKE debug logging in Junos


Enable IKE debug logging in Junos by configuring the following:

set security ike traceoptions file ike-debug
set security ike traceoptions file size 10m
set security ike traceoptions file files 2
set security ike traceoptions flag all
set security ike traceoptions level 15
set security ike traceoptions gateway-filter local-address 10.0.0.123 remote-address 172.16.0.123

The log file is written to /var/log/. Disable the configuration when it is no longer needed, to avoid wearing down the CF card/SSD in the device.
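To read the trace on-box, and to remove the traceoptions again once done (standard Junos operational and configuration commands; the file name matches the traceoptions file configured above):

```
show log ike-debug
configure
delete security ike traceoptions
commit
```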

Extras:

request security ike debug-enable local 10.0.0.123 remote 172.16.0.123

show security ike traceoptions

show security ike debug-status

Create a RAM-drive in Linux


Add the following line to /etc/fstab to create an 8 GB RAM-drive in Linux with tmpfs:

tmpfs           /mnt/ramdisk tmpfs      defaults,size=8192M 0 0

Create the mount point if needed (sudo mkdir -p /mnt/ramdisk), mount with sudo mount -a and use /mnt/ramdisk/

If a snapshot seems stuck, use the console to verify a task is actually running:

1) Run vim-cmd vmsvc/getallvms and note the relevant VM-ID

2) Run vim-cmd vmsvc/get.tasklist <VM-ID> and note the Task-ID

3) Run vim-cmd vimsvc/task_info <Task-ID> to get the task status

4) Browse to the VM's location on the datastore and run watch -d 'ls -lut | grep -E "delta|flat|sesparse"' to monitor the progress

Unmap VMFS using esxcli


First fetch a list of VMFS:

esxcli storage filesystem list

For VMFS volumes where unmapping is supported, run:

esxcli storage vmfs unmap --volume-label=<label> | --volume-uuid=<uid>  [--reclaim-unit=<blocks>]

Junos, save dump to pcap-file


To save a traffic capture to a pcap file in Junos, use the write-file argument:

monitor traffic interface ge-0/0/1.0 write-file test.pcap

The file will be saved in /cf/var/home/<userid>/test.pcap

To read back the file in the Junos CLI, use the read-file argument:

monitor traffic read-file test.pcap
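A capture filter can be combined with write-file via the matching argument, which takes a pcap-filter expression (the host value below is a placeholder):

```
monitor traffic interface ge-0/0/1.0 write-file test.pcap matching "host 192.0.2.1"
```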

ESXTOP xterm, for unsupported terminals


When the terminal/tty is not supported, set TERM to xterm before running esxtop to get usable output:

TERM=xterm esxtop

Get Virtual Machine uptime, with vim-cmd


Run vim-cmd vmsvc/getallvms to get a list of VM IDs (pipe to grep -i to filter)

With the ID from the first column, use the following command to fetch the uptime (replace 12345 with your VM's ID)

vim-cmd vmsvc/get.summary 12345 | grep "uptimeSeconds"
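The raw seconds value can be turned into something more readable with a bit of awk; a sketch, with 12345 again standing in for your VM's ID:

```shell
# Extract uptimeSeconds from the summary and print it as days and hours:
vim-cmd vmsvc/get.summary 12345 \
  | awk -F'= |,' '/uptimeSeconds/ {s=$2; printf "%dd %dh\n", s/86400, (s%86400)/3600}'
```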