It has been an active month, to say the least.
Days before the winter holidays, my smartphone (Asus Zenfone 8 Flip) stopped working almost completely. No mobile data (3G/LTE/5G) and only an intermittent ability to handle calls/SMS. When I contacted T-Mobile (my carrier at the time), they claimed that my device had somehow been locked by a previous carrier. That didn't make sense, since I originally purchased it new/unlocked. It didn't come through a previous carrier.
In the days that followed, I'd end up purchasing a second (known-working/spare) phone to test their lock claim and to make sure that the modem/antenna on my original phone hadn't somehow stopped working. After more testing and research, I found that T-Mobile may have dropped support for my device. This would be confirmed by an automated SMS weeks into the New Year, after I had already decided to switch to a new carrier.
On the server project, things went from tame to wild. I was supposed to move to the DL580 Gen9 over the holidays, but that got delayed. I updated Azure AD Connect on December 28th, which required some registry edits. On the 29th, I ended up creating a dedicated certificate for encrypting telecommunications. This was to be used for SSL/STARTTLS and call encryption (hMailServer and FreePBX); there's a rough sketch of that kind of certificate at the end of this paragraph. While the old mail server didn't accept the certificate, FreePBX did. Since hMailServer is currently due to be replaced, it not being able to use the certificate wasn't much of an issue. On January 9th, I was reviewing this in relation to the current plans for the next phase (which involves a macOS VM). On January 13th, the Windows Server VM started showing some new errors in Event Viewer. Then the server PSOD'd, because macOS seemingly killed another brand-new USB card. In the grand scheme of things, this was a temporary scare -- but one that seemingly leaves me with no way to physically connect USB devices to that VM in the long term. I had to remove the USB card from PCI Passthrough on that VM, sadly. In addition to that, the Windows Server VM had started throwing one more new error.
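Circling back to that certificate for a moment: for anyone curious, here's a minimal, self-signed sketch of that sort of certificate in Python (using the cryptography library). The hostnames, validity period, and file names are placeholders rather than my actual setup, and my real certificate wasn't necessarily produced this way; the point is just the general shape: one key pair, a SAN covering the mail and PBX hostnames, and a PEM key/cert pair that SSL/STARTTLS and FreePBX's TLS settings can both point at.

```python
# Minimal self-signed certificate sketch (Python "cryptography" library).
# Hostnames, validity period, and file names are placeholder assumptions.
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# One key pair shared by the mail server (SSL/STARTTLS) and the PBX (TLS).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "mail.example.lan")])
san = x509.SubjectAlternativeName([
    x509.DNSName("mail.example.lan"),   # hypothetical mail hostname
    x509.DNSName("pbx.example.lan"),    # hypothetical FreePBX hostname
])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                  # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=825))
    .add_extension(san, critical=False)
    .sign(key, hashes.SHA256())
)

# Write a PEM key/cert pair that both services can be pointed at.
with open("telecom.key", "wb") as f:
    f.write(key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption(),
    ))
with open("telecom.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```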
On January 15th, I made the decision to use a dedicated hotspot (as opposed to activating multiple devices directly through the carrier). The hotspot in question is a NETGEAR Nighthawk M6 Pro (5G). On the 18th, while further testing call encryption, I ended up reading this. On January 19th, the real challenges began. The 8TB SAS HDDs started going bad, one by one. The Artix VM (Docker container host) was the first to start throwing errors. That came in on a Sunday, and I was immediately forced to re-check the SMART data on my large-capacity drives. I attempted an emergency drive clone that day, which ran late into the evening and didn't work out. That spilled into the next day. I ended up being saved by the Timeshift backups, which had a recent-enough backup to not cause major disruptions...
Is what I would have said if this hadn't happened right afterward. The reverse proxy troubleshooting that I ended up doing would only lead into MeshCentral troubleshooting, once I figured out how to configure NGINX (since I replaced NGINX Proxy Manager with it). On January 26th, I had to replace the 8TB SAS HDD for the Windows Server VM as well. That was handled through drive cloning (but not before stopping all services). However, MailStore Server's database had to be restored from a backup. It somehow got corrupted during the cloning process, even though it wasn't running at the time. Another service that took a hit was Nextcloud -- specifically, its database (associated tables). That led to MariaDB troubleshooting, which just ended recently. This morning, 2-3 RAM sticks seemed to have gone bad -- leaving me with 16GB less RAM until they (or the memory cartridge) are replaced. I'm currently monitoring the macOS VM to make sure its 8TB SAS HDD doesn't go out before replacement. I'll need to remove 3x 8TB SAS HDDs, a(nother) USB card, and the RAM at some point. I've already purchased replacements for everything but the USB card, since I'm considering USB-over-Ethernet. The storage enclosure that the 8TB HDDs sit in also has a strange caveat -- the UID LEDs don't work well with vSphere. I'll have to figure out how to identify the correct HDDs to remove.
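On the MariaDB side, the first question was which Nextcloud tables were actually damaged before touching any backups. Here's a minimal sketch of that kind of check in Python, via PyMySQL; the host, credentials, and database name are placeholder assumptions rather than my actual configuration, and the real troubleshooting went well beyond this.

```python
# Minimal sketch: run CHECK TABLE across a Nextcloud database (PyMySQL).
# Host, credentials, and database name below are placeholders.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="nextcloud",
                       password="CHANGE_ME", database="nextcloud")
try:
    with conn.cursor() as cur:
        cur.execute("SHOW TABLES")
        tables = [row[0] for row in cur.fetchall()]
        for table in tables:
            # CHECK TABLE reports per-table status without modifying data.
            cur.execute(f"CHECK TABLE `{table}`")
            for _tbl, _op, msg_type, msg_text in cur.fetchall():
                if msg_text != "OK":
                    print(f"{table}: {msg_type} -> {msg_text}")
finally:
    conn.close()
```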
I'm hoping that nothing else comes up, so that I can begin to get back on track. I've delayed the server migration to August of this year, so that I'll hopefully have enough CTO to cover a few weeks.