# Server Project



## TopHatProductions115 (Aug 14, 2020)

Hello there! I've hinted toward this project in various forum posts, but I'll cut to the chase. I'm in the final phase of a long-awaited server project, which began back in 2017/2018. However, I never got to actually acquire any of the parts until 2019/2020. With the massive gap in time, I had to change things up a bit, due to market forces and parts availability. The server is meant to replace (and extend) my current workstation - a Dell Precision T7500. This is what I've ended up with as of the 2020/2021 transition:

HPE ProLiant DL580 G7



```
OS   :: VMware ESXi 6.5u3 Enterprise Plus
CPU  :: 4x Intel Xeon E7-8870's (10c/20t each; 40c/80t total)
RAM  :: 256GB (64x4GB) PC3-10600R DDR3-1333 ECC
STR  :: 1x HP Smart Array P410i Controller (integrated) +
          1x HGST HUSMM8040ASS200 MLC 400GB SSD (ESXi, vCenter Appliance, ISOs) +
          4x HP 507127-B21 300GB HDDs (ESXi guest datastores) +
          1x Western Digital WD Blue 3D NAND 500GB SSD +
          1x Intel 320 Series SSDSA2CW600G3 600GB SSD +
          1x Seagate Video ST500VT003 500GB HDD
PCIe :: 1x HP 512843-001/591196-001 System I/O board +
          1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board
GPU  :: 1x nVIDIA GeForce GTX Titan Xp +
          1x AMD FirePro S9300 x2 (2x "AMD Radeon Fury X's")
NIC  :: 1x HPE NC524SFP (489892-B21)
STR  :: 1x LSI SAS 9201-16e HBA SAS card +
          1x Mini-SAS SFF-8088 cable +
            1x Dell EMC KTN-STL3 (15x 3.5in HDD enclosure) +
              4x HITACHI Ultrastar HUH728080AL4205 8TB HDDs +
              4x IBM Storwize V7000 98Y3241 4TB HDDs
SFX  :: 1x Creative Sound Blaster Audigy Rx
I/O  :: 1x Inateck KU8212 (USB 3.2) +
          1x Logitech K845 (Cherry MX Blue) +
          1x Dell MS819 Wired Mouse
        1x Inateck KU5211 (USB 3.2) +
          1x LG WH16NS40 BD-RE ODD
PRP  :: 1x AOC U2879VF (4K)
        1x Sony Optiarc BluRay drive
PSU  :: 4x HP 1200W PSUs (441830-001/438203-001)
```


The planned software configuration has been moved back to the LTT post, and will be changing often for the foreseeable future. I’m currently trying to replace my workstation with a VM, while also moving most of my workflow away from Windows. The process will be a slow one, and I still have parts coming in the mail.

ESXi itself is usually run from a USB thumb drive, but I have a drive dedicated to it - no harm done. A small amount of thin provisioning/overbooking (RAM only) won't hurt. macOS and Linux would have gotten a Radeon/FirePro (e.g., an RX Vega 64) for best compatibility and stability, but market forces originally prevented this. Windows 10 gets the Audigy Rx and the Titan Xp. The macOS and Linux VMs get whatever audio the FirePro S9300 x2 can provide. The whole purpose of NextCloud is to phase out the use of Google Drive/Photos, iCloud, Box.com, and other externally-hosted cloud services (Mega can stay, though). That's the reason why I'm doing this.


If you want more details about the project posted here, please let me know.


There are four other mirrors for this project, all of which get updated when new developments get made:

_Linus Tech Tips_
_Level1Techs_
_VMware VMTN_


----------



## TopHatProductions115 (Aug 15, 2020)

Replaced the Rosewill RASA-11001 with a Kingwin MKS-435TL, since it doesn't need Molex power and will also look cleaner. I've forgotten how to edit posts here - sorry, otherwise I'd have added the info to the OP above. Also planning out a long-term Tesla K80 upgrade...


----------



## TopHatProductions115 (Aug 16, 2020)

Grabbed the wired mouse. Now back to the waiting game, to see when everything will arrive in the mail...


----------



## phill (Aug 17, 2020)

Looking forward to seeing the outcome 

Although I do wonder if 4x 1200W PSUs will be enough??...


----------



## TopHatProductions115 (Aug 21, 2020)

The new drive cage arrived via Amazon. There's only one item left that hasn't arrived in the mail yet - my new mouse...


----------



## TopHatProductions115 (Aug 21, 2020)

Virtual Flash looks like fun. I might actually look into it more when the time comes...


----------



## TopHatProductions115 (Aug 21, 2020)

phill said:


> Looking forward to seeing the outcome
> 
> Although I do wonder, if 4 1200w PSUs will be enough??...



Thank goodness it only needs one of the PSUs at a time! Otherwise, I'd be toast XD


----------



## phill (Aug 22, 2020)

TopHatProductions115 said:


> Thank goodness it only needs one of the PSUs at a time! Otherwise, I'd be toast XD


I know that feeling!!!   I have a few servers here, boy they are power hungry!!   Looking forward to seeing another picture filled update!!


----------



## TopHatProductions115 (Aug 22, 2020)

So, here's the current plan:

- Have ESXi run 24/7, in a low-power/on-demand state
- Have Windows Server 2016 (VPN+AD) set up for 24/7 low-power operation as well
- Have Artix Linux (Nextcloud) set up for 24/7 low-power operation too
- Have macOS set up for Hybrid Sleep after 20 minutes' inactivity, plus Wake-on-LAN
- Have Windows 10 set up for the same, since it's not running anything mission-critical
- Don't power down ESXi unless I want to lose access to over half my infrastructure :3

Notes:

https://helgeklein.com/blog/2012/03/dead-simple-wake-on-lan-for-your-windows-server-at-home/
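For the Wake-on-LAN half of the plan, a minimal sketch using the common `wakeonlan` utility looks like this. The MAC address is a placeholder, and WoL has to be enabled on the guests' (virtual) NICs for any of it to matter:

```shell
# Send a WoL "magic packet" (6x 0xFF followed by the target MAC repeated
# 16 times) to the LAN broadcast address. MAC below is a placeholder.
wakeonlan 00:11:22:33:44:55

# Some setups need an explicit broadcast address and UDP port:
wakeonlan -i 192.168.1.255 -p 9 00:11:22:33:44:55
```

Run from any always-on box on the same subnet (the 24/7 Windows Server VM would do).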


----------



## phill (Aug 24, 2020)

I look forward to seeing this all up and running


----------



## TopHatProductions115 (Aug 27, 2020)

So close, yet so far:
https://communities.vmware.com/message/2980320#2980320


----------



## phill (Aug 27, 2020)

Have you got spare server RAM you can change it out with to test @TopHatProductions115 ??


----------



## TopHatProductions115 (Aug 27, 2020)

phill said:


> Have you got spare server RAM you can change it out with to test @TopHatProductions115 ??



Yes. And I'm getting ready to run a memory diagnostic, to see which sticks may be failing.


----------



## TopHatProductions115 (Aug 28, 2020)

phill said:


> Have you got spare server RAM you can change it out with to test @TopHatProductions115 ??



Crisis averted. From the looks of it, the issue was caused by me running a slightly older version of ESXi 6.5 (U2). VMware's servers have been a bit unreliable for downloads recently, so I had to dig for a copy of ESXi 6.5 U3 on the internet. Now troubleshooting the next issue:

https://communities.vmware.com/message/2980646#2980646

Also have to look into this at some point (even though the server will be mostly hiding behind a VPN):

https://kb.vmware.com/s/article/55636


----------



## TopHatProductions115 (Aug 28, 2020)

I'm getting ready to reinstall ESXi 6.5 U3, since HPE's Smart Array assumed I was gonna 'RAID all da things' and lose over half my storage to sub-optimally configured RAID arrays that it made (one of which was in perpetual recovery and included an unsupported SSD). Had to manually configure each drive as a logical volume with RAID 0 fault tolerance, treating each drive as its own virtual array of one disk. I ended up RAIDing all the things anyway, by that logic.
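For reference, those per-drive RAID 0 logical volumes can also be scripted with HPE's Smart Storage Administrator CLI instead of clicking through the Smart Array setup. The slot number and drive addresses below are assumptions (on older builds the tool is named `hpssacli` rather than `ssacli`):

```shell
# Confirm the controller slot and physical drive addresses first:
ssacli ctrl all show config

# Create one single-drive RAID 0 logical volume per disk
# (assumes slot 0, drives at port 1I, box 1, bays 1-4):
for bay in 1 2 3 4; do
  ssacli ctrl slot=0 create type=ld drives=1I:1:${bay} raid=0
done

# Verify the resulting logical drives:
ssacli ctrl slot=0 ld all show
```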


----------



## TopHatProductions115 (Aug 28, 2020)

Now to address the SSD issue in the background, while I look at security patches and initial VM setup.


----------



## TopHatProductions115 (Aug 29, 2020)

Currently trying to make a new datastore via SSH. More info here:

https://communities.vmware.com/message/2980777#2980777
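For anyone following along, creating a datastore from the ESXi shell roughly follows the usual partedUtil/vmkfstools recipe. The device name and datastore label here are placeholders; the long GUID is the standard VMFS partition type:

```shell
# Find the target disk (the naa.* name below is a placeholder):
ls /vmfs/devices/disks/
DISK=/vmfs/devices/disks/naa.600508b1001c00000000000000000000

# Write a fresh GPT label, then create one VMFS partition spanning the disk
# (last usable sector = cylinders * heads * sectors - 1):
partedUtil mklabel ${DISK} gpt
LAST=$(( $(partedUtil getptbl ${DISK} | tail -1 | awk '{print $1 * $2 * $3}') - 1 ))
partedUtil setptbl ${DISK} gpt "1 2048 ${LAST} AA31E02A400F11DB9590000C2911D1B8 0"

# Format the new partition as a VMFS5 datastore:
vmkfstools -C vmfs5 -S myDatastore ${DISK}:1
```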


----------



## TopHatProductions115 (Aug 30, 2020)

So close to the final stage:

> Failed to create VMFS datastore - Cannot change the host configuration.


----------



## TopHatProductions115 (Aug 31, 2020)

While I'm waiting on comments for the previous issue, and for a few ISOs to upload to my server, I can start investigating this:

https://kb.vmware.com/s/article/55636
https://kb.vmware.com/s/article/55806
https://www.vmware.com/security/advisories/VMSA-2018-0020.html
https://kb.vmware.com/s/article/56547
https://kb.vmware.com/s/article/56563
https://kb.vmware.com/s/article/56896

Gotta search for the patches through this page, by entering the details mentioned in the last 3 KB pages:

https://my.vmware.com/group/vmware/patch#search


On a side note, also ran into this when setting up my first VM:

http://www.yellow-bricks.com/2018/0...when-powering-on-vm-on-vsphere/#comment-77449
Reserve Memory beforehand, I guess


----------



## TopHatProductions115 (Aug 31, 2020)

Here to update the Reddit mirror's link - those expire every 6 months

https://www.reddit.com/user/TopHatProductions115/comments/ijoeg0/project_personal_datacentre/


----------



## phill (Aug 31, 2020)

Someone has been having fun!!  Gotta love trouble shooting don't you??    I hope it all settles out soon for you!!


----------



## TopHatProductions115 (Sep 1, 2020)

phill said:


> Someone has been having fun!!  Gotta love trouble shooting don't you??    I hope it all settles out soon for you!!



Yeah. Still privately working out how to get my HBA working in ESXi, before I send that issue to VMware's forums as well XD

It's been hectic...


----------



## phill (Sep 1, 2020)

TopHatProductions115 said:


> Yeah. Still privately working out how to get my HBA working in ESXi as well, before I send that issue to VMware's forums as well XD
> 
> It's been hectic...


Sometimes I do ask myself about all that I do at home, for home, and whether it's all worth the hassle sometimes 

I hope you get it sorted out mate and can report back all of the great work you've been doing, oh and don't forget a load of pictures!!


----------



## TopHatProductions115 (Sep 1, 2020)

Oh, I will :3






> Re: Failed to create VMFS datastore - Cannot change the host configuration. (communities.vmware.com)
>
> Hello TopHat, I was asking that because it's not available in the HTML5 Client. For configuring VFFS you need to log in to vCenter using the Flash client (vSphere Web Client). This is the URL you have to access, and you can use the same credentials: https://vcenter_server_fqdn/vsphere-client


----------



## TopHatProductions115 (Sep 6, 2020)

phill said:


> Sometimes I do ask myself the question about all that I do at home, for home and I say, is it all worth the hassle sometimes
> 
> I hope you get it sorted out mate and can report back all of the great work you've been doing, oh and don't forget a load of pictures!!



One step closer


----------



## TopHatProductions115 (Sep 11, 2020)

Solved the SSD/Virtual Flash issue! Now onto the next one...


----------



## TopHatProductions115 (Sep 11, 2020)

Removed the HP 491838-001 (NC375i) due to space constraints, increased RAM to 128GB, purchased 4TB HDDs (IBM Storwize V7000 98Y3241's) to replace the 2TB HUA722020ALA330's, and am delaying the addition of the SolarFlare NIC.


----------



## TopHatProductions115 (Sep 13, 2020)

Currently working on DNS, after which I'll focus on setting up the first VPN solution - SoftEther.


----------



## TopHatProductions115 (Sep 16, 2020)

The hunt continues:

https://communities.vmware.com/message/2984297#2984297


----------



## TopHatProductions115 (Sep 16, 2020)

The VMware mirror for this project will no longer be maintained:

https://communities.vmware.com/message/2984428#2984428
Moving on...


----------



## TopHatProductions115 (Sep 17, 2020)

The VMware mirror for this project has been reopened. If you have any questions, feel free to DM me. Just got a new GPU in the mail, which may end up replacing the GTX 1060 6GB. Still troubleshooting this issue...


----------



## phill (Sep 17, 2020)

I'm glad to see some progress here @TopHatProductions115 !!    Is it still being a pain to get up and running how you want, or just a few teething issues??


----------



## TopHatProductions115 (Sep 18, 2020)

phill said:


> I'm glad to see some progress here @TopHatProductions115 !!    Is it being a pain to still get up and running how you want or just a few teething issues??



For the most part, it's been pretty easy. Just working on DNS (for AD and VPN) and HBA (mass storage), so that the other VMs will have resource access from the get-go.


----------



## TopHatProductions115 (Sep 19, 2020)

May have to look into this sometime today as well:

https://blogs.vmware.com/vsphere/2019/08/changing-your-vcenter-servers-fqdn.html


----------



## TopHatProductions115 (Sep 20, 2020)

Many things happened in the past 12 days:

https://linustechtips.com/main/profile/511347-tophatproductions115/?status=276274&type=status
https://linustechtips.com/main/profile/511347-tophatproductions115/?status=277066&type=status
https://linustechtips.com/main/profile/511347-tophatproductions115/?status=277305&type=status
That's in addition to events on the software side of things. I'll have to do a livestream sometime either today or tomorrow...


----------



## phill (Sep 20, 2020)

The most important question I have is, does it work how you want it to yet??


----------



## TopHatProductions115 (Sep 21, 2020)

phill said:


> The most important question I have is, does it work how you want it to yet??



Not yet.  But that's the journey - getting it there, and then keeping it there...


----------



## phill (Sep 21, 2020)

Please keep up with the updates    I can't wait to hear it's all working!!


----------



## TopHatProductions115 (Sep 23, 2020)

The K80's are coming...


----------



## TopHatProductions115 (Sep 26, 2020)

I've got a GRID K520 coming in the mail in about 2 weeks, to replace the Tesla K10. Perhaps I can have one of my Tesla K10's modded into a GRID K2 (or even just buy one) in the near future, so that I can have all three of the major variants for this card. GRID K520 looks like a GeForce card from inside a VM, if my memory isn't failing me. GRID K2 would be the Quadro variant. Tesla K10 is a pure compute version. I wonder if anything like that exists for Tesla K80...


----------



## TopHatProductions115 (Sep 27, 2020)

Just solved another looming issue for the server project. Now to get that SSD working and added to the Virtual Flash resource pool...
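If it helps anyone else chasing Virtual Flash issues: ESXi's view of vFlash-eligible SSDs can be checked from the shell. Command names are per the 6.x esxcli namespace, to the best of my knowledge:

```shell
# SSDs that ESXi considers usable for Virtual Flash:
esxcli storage vflash device list

# Any existing vFlash cache files:
esxcli storage vflash cache list
```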


----------



## TopHatProductions115 (Oct 12, 2020)

P.S. If I could, I'd totally update the OP to reflect the current version of the project XD


Time for a long-overdue project update. I'm omitting a lot of steps/details here, for relative brevity. A friend of mine from Discord (the same one who was kind enough to help me troubleshoot many of the issues I encountered) had me run a Linux LiveCD on the server to troubleshoot the LSI HBA. For those of you who did not know, the LSI HBA wasn't working as expected until a few hours ago (late last night). I tested it in my current workstation (Precision T7500 - Windows 10), the server (DL580 G7 - ESXi 6.5u3), and even my laptop (EliteBook 8770w - Windows 10). When tested on the T7500, the HBA showed up - but none of the 4TB hard drives did. Same for the laptop and the server. After a bit of Googling (as the cool kids say), I decided it might behoove me to flash it with the IT firmware, to see if that would fix it. I did so from my laptop, using a powered PCIe dock (to prevent further downtime on the T7500, which is running a Minecraft server) and a GUI application called MegaRAID Storage Manager. The HBA was on v17.X, and now it's on v20.X. The drives also appeared in Windows Device Manager for once - though they didn't stay there for long, popping in and out sporadically. I was instructed to reboot after the firmware update was applied, and MegaRAID Storage Manager stopped being able to connect to the local server after that reboot. That meant that, if the firmware I flashed was the wrong one, I'd have to resort to using sas2flash. After no luck checking on the HBA from my laptop, I decided to put it in the server, with the Linux LiveCD (as mentioned earlier). The LiveCD was running an older build of Manjaro, and managed to see all of the drives in GParted. However, we were unable to get SMART data for most of the HDDs. If you look closely at the HDD models, you may or may not be able to tell why.

While I was in the LiveCD, I also tried putting a GPT partition scheme on the Intel SSD, since messing with it in Windows simply did not work for some reason. A short while later, we tried the latest Manjaro LiveCD available (because Manjaro is my preferred distro with systemd). That one didn't see the drives at all, but did still see the HBA. At this point, I saw no other way to validate the HDDs further, so I made the decision to test them in ESXi and try to pull SMART data from esxcli. The drives showed up in ESXi, and even allowed us to pull SMART data - but it was limited, and in a different format than most common drives on the market. I was able to add the Intel SSD to the Virtual Flash pool for once, though. As such, this is strictly a partial victory. We have the drives ready for use, presumably. But we don't know how the drives are doing - which is very different from all of my previous experiences, where I could pull up SMART data immediately after installing the drives. The game is afoot.
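For reference, the esxcli SMART query mentioned above looks roughly like this (the naa.* identifier is a placeholder; SAS drives often return only a limited attribute table this way):

```shell
# List device identifiers first:
esxcli storage core device list | grep -i '^naa'

# Pull the (limited) SMART table for one of the SAS drives:
esxcli storage core device smart get -d naa.5000cca254000000
```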


----------



## phill (Oct 12, 2020)

I've added a note for the mods to help with the request as I'm unsure how to let you edit that first post   So when I hear back my good man, I'll let you know! 

Nothing is ever simple is it??!


----------



## TopHatProductions115 (Oct 14, 2020)

On a side note, the results of last night's livestreaming attempt are tempting me to make YouPHPTube part of the project again. If this keeps up, I might actually go for it...


----------



## TopHatProductions115 (Oct 14, 2020)

phill said:


> I've added a note for the mods to help with the request as I'm unsure how to let you edit that first post   So when I hear back my good man, I'll let you know!
> 
> Nothing is ever simple is it??!



Thank you! I went on and edited the OP, so it is now up-to-date 



As detailed here, I'm looking into getting some equipment for the server again. Maybe I'll have somewhere to put a UPS this time. But only if the requirements are met. Otherwise, the funds will go elsewhere. Pricing doesn't stay this good for long. One month tops...


----------



## TopHatProductions115 (Oct 17, 2020)

New equipment purchases incoming, possibly:

https://linustechtips.com/main/profile/511347-tophatproductions115/?status=279647&type=status


----------



## TopHatProductions115 (Oct 31, 2020)

Currently planning an equipment purchase


		https://www.reddit.com/r/JDM_WAAAT/comments/jl9al5


----------



## TopHatProductions115 (Nov 8, 2020)

ToDo List for the next few days:

- Figure out Split Horizon DNS records (Technitium)
- Set up ejabberd and hMailServer
  - FQDNs and subdomains
  - AD/LDAP integrations
- Set up Artix Linux VM
  - secondary Technitium instance (AD DNS forwarding)


----------



## TopHatProductions115 (Nov 10, 2020)

Coming soon:

https://gridforums.nvidia.com/defau...-discussion/proper-installation-of-grid-k520/


----------



## kayjay010101 (Nov 10, 2020)

Regarding GRID, have you seen Craft Computing's Cloud Gaming build series on YouTube? I don't remember the details, but he goes over how it all works, and how if you just stick to the first gen of cards the licensing is free. If you're using the second gen or newer, the licensing cost is astronomical.
After almost a year and seven episodes, going from GRID K2's to Tesla M60s, and finally to 3x FirePro's, he still hasn't managed to make it work properly. One stream is fine, but as soon as you have multiple streams the cards always run into power limits.


----------



## TopHatProductions115 (Nov 10, 2020)

kayjay010101 said:


> Regarding GRID, have you seen Craft Computing (on YouTube)'s Cloud Gaming build series? Don't remember the details, but he goes over how it all works and how if you just stick to the first gen of cards it's free. If you're using the second gen or newer the licensing cost is astronomical.
> After almost a year and seven episodes, going from GRID K2's to Tesla M60s, and finally to 3x FirePro's, he still hasn't managed to make it work properly. One stream is fine, but as soon as you have multiple streams the cards always run into power limits.




Yes, I have been following Craft Computing's Cloud Gaming series on YouTube  Ironically, I started working on this very idea/concept for the server project in mid-2018. Almost every single step that he's taken thus far (in the GPU department), I've already taken into some form of consideration. I ended up on nVIDIA GRID cards due to used-market prices and platform costs - the GRID K520, to be exact.
I wonder how I'll fare when I attempt it. The DL580 G7 packs 1.2kW PSUs under the hood, and I'm only powering one GRID card (as opposed to multiple).


----------



## kayjay010101 (Nov 11, 2020)

TopHatProductions115 said:


> Yes, I have been following Craft Computing's Cloud Gaming series on YouTube  Ironically, I started working on this very idea/concept for the server project in mid-2018. Almost every single step that he's taken thus far (in the GPU department), I've already taken into some form of consideration. I ended up on nVIDIA GRID cards due to used market price and platform costs. GRID K520 to be exact


Gotcha. That's the same as the 6xx series (GK104), so that should be first gen and free in terms of licensing. Stepping up to the next gen incurs heavy licensing costs for GRID to function. That's why Craft Computing is now experimenting with AMD FirePro's.



TopHatProductions115 said:


> I wonder how I'll fare when I attempt it. The DL580 G7 packs 1.2kW PSUs under the hood, and I'm only powering one GRID card (as opposed to multiple).


As far as I understood Craft Computing's latest video in the series, his 1600W PSU was more than enough for the task; the problem was that the total power of the GPUs was limited to 300W. This meant each GPU die was limited to just 150W. (The K520 is effectively 2x GTX 670 dies, so each gets half the total power.) In the case of the K520, this might be even worse, since it's limited to one PCIe 8-pin, so total power is 225W and each GPU is limited to about 100W.

In Craft Computing's case, just running one instance of Crysis ate up 100W, and that was at just 40% utilization. As soon as he started doing multiple streams, the cards were starved for power completely and stuttered like hell. In addition, there was no headroom for video encoding so parsec (which he used to stream the games to another PC remotely; the whole point of the project) got massive encoding issues and the stream stuttered and artifacted. As of now he's gone through 3 different GPUs and still hasn't found one that is cheap enough to be viable, and also be able to handle multiple 1080p60 streams.


----------



## TopHatProductions115 (Nov 11, 2020)

The experimentation with the FirePros appears to have gone even worse than the GRID cards, sadly. I wish it could have worked, for the sake of a non-Windows VM. But one would need to use software H.264 encoding at that point.
That is true. But who says that I need 1080p60 for what these cards will be used for 


----------



## kayjay010101 (Nov 11, 2020)

TopHatProductions115 said:


> The experimentation with the FirePros appear to have gone even worse than the GRID cards, sadly. I wish it could have worked, for the sake of a non-Windows VM. But one would need to use Software 264 at that point.


Yeah, it's a shame. Hopefully he manages to get it right at some point!



TopHatProductions115 said:


> That is true. But who says that I need 1080p60fps for what these cards will be used for  Also:


Great that it's working out for your use case, of course! One thing I haven't caught yet: what are you going to use the GPUs for?


----------



## TopHatProductions115 (Nov 11, 2020)

kayjay010101 said:


> Yeah, it's a shame. Hopefully he manages to get it right at some point!
> ...
> Great that it's working out for your use case, of course! Which I haven't caught yet, what are you going to use the GPUs for?



The GRID K520 will be used for a Linux VM and (hopefully) a MacOS VM, for non-gaming workloads.

https://www.tonymacx86.com/threads/...-of-desktop-cards-with-native-support.283700/
https://dortania.github.io/GPU-Buyers-Guide/modern-gpus/nvidia-gpu.html#kepler-series-gtx-6xx-7xx
For the purposes, please see the OP. That lists the major roles and expectations for each VM in the project.


----------



## TopHatProductions115 (Nov 11, 2020)

Just removed YaCy from the project, in favour of researching YaCy Grid. Here's to hoping I can get it working in a shared environment...


----------



## TopHatProductions115 (Nov 28, 2020)

I currently have a PCIe WiFi NIC coming in the mail. I also have a pair of Ethernet NICs sitting in inventory. The server already has a SolarFlare SFN5322F sitting in it. What if I threw FRRouting onto a Linux VM, and passed through the mentioned NICs to it? Sounds like a virtual managed switch in the making.

I could have the Linux VM use the wireless NIC to connect to the house WiFi on one network (192.168.1.0), and have it sit at an arbitrary address (perhaps 192.168.1.2). Then have the wired NICs be used for an internally-managed network (10.0.0.0). Set up the Linux VM as the default gateway (maybe 10.12.7.1), and have it handle DHCP and internal DNS. The last step would be to route all outbound traffic from clients on 10.0.0.0 through 10.12.7.1 => 192.168.1.2. All outbound traffic from 10.0.0.0 clients would then appear to come from 192.168.1.2, which sounds similar to NAT (many clients/private IPs behind one gateway/public IP). Set up forwarding rules and throw the Linux VM sitting at 192.168.1.2 into the DMZ (since port forwarding on the new ISP router is utter garbage for some reason).

That would kill off the need for a router/extender in my room, assuming that the only untested component in this equation works - the WiFi adapter I got from overseas. Also still need to work on this. The rack-mounting kit for my server is ~200 USD by itself - yikes...
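As a sketch, the NAT/forwarding part of that plan boils down to something like this on the Linux router VM. Interface names and subnets are the assumptions from above, not tested config:

```shell
# wlan0 = passed-through WiFi NIC on 192.168.1.0/24 (sitting at 192.168.1.2)
# eth0  = wired NIC on the internal 10.0.0.0/8 network (gateway 10.12.7.1)

# Enable IPv4 forwarding:
sysctl -w net.ipv4.ip_forward=1

# Masquerade outbound traffic so internal clients appear as 192.168.1.2:
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

DHCP/DNS and FRRouting itself would sit on top of this; the masquerade rule alone is what makes the whole 10.0.0.0 network look like one client to the ISP router.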

Questions that I asked recently in other places:



		https://www.reddit.com/r/HomeServer/comments/k2spuc



		https://www.reddit.com/r/HomeDataCenter/comments/k2t1uy


----------



## TopHatProductions115 (Dec 5, 2020)

Just purchased an HPE 641255-001 (PCIe ioDuo MLC I/O Accelerator) for the server. Not sure if it will work in ESXi. I guess there's only one way to find out!


On a lesser note, the VMware Communities mirror seems to be officially ded this time. Can't edit the OP or add replies. Not sure if I want to go through the trouble of getting it revived myself. If anyone here wants it brought back, feel free to say so...


----------



## TopHatProductions115 (Dec 13, 2020)

Okay, things got off to a rocky start with the new NICs...
The vCenter Server Appliance hasn't been treating me too nicely lately, either. It went from just taking 15-30 minutes to start up (consistently) to sometimes not starting at all - and needing a reboot. The HTML5 UI is also buggy in my case, and doesn't accept my credentials half the time. So I'm stuck using the FLEX UI instead - which requires Flash Player (another EOL technology) and Internet Explorer (a deprecated browser) to make it work. I'm not getting the benefits that I should have gotten from it, for one reason or another, which is unfortunate. It also can't seem to keep my Virtual Flash Resource Pool between server reboots, which is another small gripe - and I can't manage that if vCenter Server won't start up. I might have reason to switch to the Windows-embedded version instead, sadly. That one may be more reliable in my case...


----------



## TopHatProductions115 (Jan 2, 2021)

Today may be good for another round of server testing. Perhaps I can get the HPE 10GbE NIC working for once.

Still have to attend to this after all of the VMs are set up:








> nVIDIA Tesla K80 :: Questions (www.techpowerup.com)
>
> Hi! I don't post here often, so I'll try my best to adhere to expected forum conventions. If this is the wrong section/sub-forum for this thread, please let me know. I'll keep this short. I'm currently building a virtualisation server, using a pair of nVIDIA Tesla K10's (by modding them into...


----------



## TopHatProductions115 (Jan 3, 2021)

Just got the HP 10GbE NIC working! This means that I may be able to start my 10GbE transition in the next few months. Next will be my 4K60fps livestreaming transition, and getting the IO Accelerator working 


----------



## TopHatProductions115 (Jan 3, 2021)

It's been a slow weekend playing with the server. On Thursday, I couldn't get anything done because of New Year's (which I am fine with). On Friday, I slept in due to how late I stayed up, and then had surprise visitors. Didn't get any work done that day, since I was busy keeping the visitors' kids out of the room. On Saturday, I finally got to throw in the HP NC524SFP NIC (along with its memory module). Once they were attached to the SPI board, I fired up the server and checked to see if the 16TB drive cage and ~1TB Virtual Flash Resource Pool showed up in ESXi - which they did 

FYI, just about every time I add new hardware to the DL580 G7, I check for those two things - because they tend to act as immediate indicators for whether something is wrong, strangely enough. That's assuming no other problem indicators are present (and there rarely are). vCenter has thrown an occasional warning, but nothing of consequence from what I've seen thus far.

After that, I spent most of last night changing my AD and DNS settings, to prepare for adding my first devices to AD. That went on until close to midnight, and is still not quite done yet. Today, I replaced the SolarFlare SFN5322F with an HPE 641255-001 (PCIe ioDuo MLC I/O Accelerator) - a gutsy move, given how finicky the server can be about new hardware. At first, only 2 of the 4 SAS HDDs showed up in ESXi. After a reboot, and letting the server warm up for a bit, all storage devices and new components showed up. So far, so good!

However, due to how slow testing has been, I had to put off testing the Tesla K80's and DERAPID PCE-AX200T wireless NIC. If I can get the DERAPID PCE-AX200T working, the Linux VM is definitely going to run an FRRouting instance. Still need to figure out the vCenter startup time issue. At least I can start the 10GbE transition soon...


----------



## TopHatProductions115 (Jan 11, 2021)

Just attached the rail kit to the server, in preparation for the rack that's coming in the mail this week. Can't wait to take photos of the finished result...


----------



## TopHatProductions115 (Jan 16, 2021)




----------



## TopHatProductions115 (Jan 17, 2021)

Getting ready to kick ejabberd from Windows Server, due to reliability issues observed during initial testing. It'll probably go to the Arch VM instead...


		https://www.reddit.com/r/activedirectory/comments/kyxf73


----------



## TopHatProductions115 (Jan 18, 2021)

Also forgot to note in the previous reply, I need to upgrade the vCenter Appliance from 6.5 to 6.7u3, due to FLEX getting EOL'd. Fun times XD


----------



## TopHatProductions115 (Jan 27, 2021)

From what I can tell, I may have to start from scratch with both vCenter and AD. But, if I manage to pull it off, I would have a few spare CPU cores and a datastore to use for something else.

Also, found this:

https://www.1strategy.com/blog/2017/07/18/can-you-game-in-the-cloud-yes/
Time to see if I can find instructions for OS's outside of Windows...


----------



## TopHatProductions115 (Feb 1, 2021)

I would have held out for VCSA 6.5 indefinitely if the HTML5 UI had been able to manage Virtual Flash/Host Cache resource pools. As noted in past updates, the VCSA took anywhere from 20-45 minutes to initialise. And with the deprecation of the FLEX UI (reliant on Adobe Flash, unsupported in 2021), the now-neutered vCenter Server Appliance VM (6.5) had no practical place in this project. Without the option for an in-place upgrade to a newer version, I also don't have the ability to move to VCSA 6.7. It has been replaced, and will soon be decommissioned. vCenter has been moved to the Windows Server 2016 VM, for practicality reasons. The next step is to re-build the failed MS AD instance and promote a new domain controller. That will happen later this week. Hopefully, things will go a bit better this time around...


----------



## TopHatProductions115 (Feb 7, 2021)

Alright - everything is almost ready for Active Directory setup, attempt #2. Not only did I kick ejabberd over to Linux (due to issues when installed on Windows), but I also had to re-install multiple other applications. Demoting the AD DC appears to have been what caused it, so I had to start from scratch in some sense. I still need to make a new SQL db for hMailServer, unlike last time - but that should be relatively easy. Already installed vCenter Server, and it starts up way faster than the VCSA; Windows doesn't even take longer to boot from what I've seen. Also had what appears to have been an unexpected part failure - the Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable. Got that replaced, and can now see all of my SAS HDDs once again. The last step is to (re-)promote the DC and test client devices. This time, I'll set the intended domain from the start (instead of setting it to something else by accident and having to change it twice later).


----------



## TopHatProductions115 (Feb 12, 2021)

Backup Complete!


----------



## comtek (Feb 15, 2021)

TopHatProductions115 said:


> View attachment 184289


Must be super hot and noisy running that setup 24/7. You don't have battery backup?


----------



## TopHatProductions115 (Feb 15, 2021)

comtek said:


> Must be super hot and noisy running that setup 24/7. You don't have battery backup?


1) Not really - the server doesn't run 24/7 yet, and it doesn't heat up the room as much as I'd expect. But it is winter currently, so I just leave the window cracked.
2) Not yet. Still searching for affordable UPS's, since I will need at least 2-3 when everything's ready. Also need rack shelves to put the UPS's on.


----------



## Aht0s (Feb 15, 2021)

Really cool! 


TopHatProductions115 said:


> Alright - everything is almost ready for Active Directory setup, attempt #2. Not only did I kick ejabberd over to Linux (due to issues when installed on Windows), but I also had to re-install multiple other applications. Demoting the AD DC appears to have been what caused it, so I had to start from scratch in some sense. I still need to make a new SQL db for hMailServer, unlike last time - but that should be relatively easy. Already installed vCenter Server, and it starts up way faster than the VCSA; Windows doesn't even take longer to boot from what I've seen. Also had what appears to have been an unexpected part failure - the Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable. Got that replaced, and can now see all of my SAS HDDs once again. The last step is to (re-)promote the DC and test client devices. This time, I'll set the intended domain from the start (instead of setting it to something else by accident and having to change it twice later).


Do you mean you changed your domain name twice? If so, did it break any relation with your clients? Curious, as I am planning to rename my lab to a different domain as well, but have read that it's not wise to do so. I am about to build a new one instead, jumping from 2016 to 2019.


----------



## TopHatProductions115 (Feb 15, 2021)

Aht0s said:


> Really cool!
> 
> Do you mean you changed your domain name twice? If so, did it break any relation with your clients? Curious, as I am planning to rename my lab to a different domain as well, but have read that it's not wise to do so. I am about to build a new one instead, jumping from 2016 to 2019.


I renamed mine multiple times, but never got the chance to test it due to DNS issues. I'd suggest starting fresh, just to be safe. If using multiple DNS servers, be sure to have your NS records straight.
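To illustrate what "having your NS records straight" looks like: each server that's authoritative for the domain needs its own NS record, plus a matching A (glue) record. A hypothetical zone fragment (the names and addresses here are made up, not from this build):

```
; every authoritative DNS server gets an NS record and a matching A record
lab.example.com.        IN  NS  dns1.lab.example.com.
lab.example.com.        IN  NS  dns2.lab.example.com.
dns1.lab.example.com.   IN  A   192.168.10.5
dns2.lab.example.com.   IN  A   192.168.10.6
```

If one of the authoritative servers is missing from that list, clients that happen to query it can fail in exactly the intermittent way described here.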


----------



## Aht0s (Feb 15, 2021)

TopHatProductions115 said:


> I renamed mine multiple times, but never got the chance to test it due to DNS issues. I'd suggest starting fresh, just to be safe. If using multiple DNS servers, be sure to have your NS records straight.


Thank you!


----------



## TopHatProductions115 (Feb 16, 2021)

Okay, I've some project news.

I had to disable the vCenter Server for Windows instance to get the Active Directory instance installed. But I didn't think to change any networking settings on the vCenter Server (embedded - 6.7) instance before disabling it. With the help of a friend, I managed to fix my DNS and get the Active Directory instance working - it came down to some missing NS records. Once I cleaned up the DNS, I was actually able to get a client device joined to the AD. Now I have to figure out how to make Windows clients connect to the VPN before attempting LDAP sign-in, since the AD is VPN-locked. Once I figure that out, I'll be able to add any devices I want. I also have to see if I can bind vCenter Server for Windows to a single IP address while it's disabled. Otherwise, I'll have to resort to using the relatively cruddy VCSA again - and who knows how that will go in the long term. The last time I used it, it threw up more warnings and errors than before, which makes me question its overall longevity and performance. I'm almost tempted to go without vCenter and Virtual Flash because of the trouble. At least I can start focusing on the rest of the project more in the near future...

Without vCenter, how will I be able to add the Precision T7500, as an ESXi host, to my datacentre? For vMotion?

Also have to figure out whether (and if so, how) to destroy the Virtual Flash resource pool or not...


----------



## TopHatProductions115 (Feb 19, 2021)

Just bought four of these:

https://www.ebay.com/itm/HGST-8TB-S...itachi-Ultrastar-HUH728080AL4205/114668136438
HITACHI Ultrastar HUH728080AL4205 (HGST)

32TB upgrade, here I come...


----------



## TopHatProductions115 (Feb 21, 2021)

vCSA 6.5 is practically neutered without Adobe Flash, and the HTML5 UI was almost useless until at least 6.7u3. The settings I do have in the current install are mostly small ones, but they can only be reversed via the FLEX UI (Flash). vCenter also doesn't allow for in-place upgrades. So, it's time to kill the current vCSA and start from scratch. If I had known to look out for the death of Flash, I could have been ahead of this, but I got held up by other responsibilities. Today, I'm re-installing vCSA. It's going to be a long day...


----------



## TopHatProductions115 (Feb 27, 2021)

Well, I have more news. I managed to kill the old vCSA (6.5) instance and replace it with a newer (6.7) version. The newer version has a dark theme - nice. It's also pretty well organised, and it connected to my ESXi server with no issues. However, Virtual Flash is pretty much dead, so I will have to assign the SSDs to something else now. Perhaps I can start setting up the next VM...

On a side note, the current Reddit project mirror is ded again - because those expire every 6 months, regardless of activity. I think it'll stay ded this time. Not in the mood to make yet another one...


----------



## TopHatProductions115 (Feb 27, 2021)




----------



## TopHatProductions115 (Mar 7, 2021)

Currently installing Ms SQL Server 2019 for a test drive. Then migrating over to the 8TB SAS HDDs completely.

https://sqlserver-help.com/2014/05/...e-or-filegroup-the-file-must-be-decompressed/
Gonna have to redo the backups - had no way of imaging the 4TB HDD before swapping in the 8TB HDD. But enough changes have been made that the old backup is no longer valid.


----------



## TopHatProductions115 (Mar 12, 2021)

GPU Interest Checks:

https://www.techpowerup.com/forums/threads/nvidia-tesla-k80-questions.259865/
https://www.techpowerup.com/forums/threads/pertaining-to-the-firepro-s9300-x2.276696/
Not sure if I'll ever get my hands on the AMD card. That would be an interesting card to try out, once I figure out the K80's. The K80's are due for a VirtualGL experiment soon.

Also, just updated the OP(s) for each mirror. Please let me know if anything seems to be missing from one mirror or the other. I intend to work on the server later today, assuming that nothing interferes...


----------



## TopHatProductions115 (Mar 14, 2021)

Troubleshooting the hMailServer installation:

https://www.hmailserver.com/forum/viewtopic.php?f=7&t=36243&p=227488#p227488
Definitely not a fun time. Kinda wishing that SQL Compact Edition worked like it did last time (no idea how, though)...


----------



## TopHatProductions115 (Mar 14, 2021)

May divert my attention from hMailServer for a bit and skip right to the Linux VM if this doesn't get resolved in the next week or so. This has been dragging on for a while now, and I want to get the rest of the server ready. While hMailServer would be nice, I also have other matters to attend to. It appears that hMailServer's most recent release is 32-bit, so it may be having issues working with Ms SQL Express (the free edition of Ms SQL Server) because I used a 64-bit release of it. If this keeps up, I may take the mail server role and toss it to Linux as well. Can't even begin to think about touching Exchange Server...


----------



## TopHatProductions115 (Mar 19, 2021)

Had a momentary power outage today, which took most of my equipment offline again (for the umpteenth time). I've finally decided to just tough out the cost and buy a pair of UPS's this weekend. Time to see if I can get things straightened up around here. They'll have to sit on the floor since I haven't purchased proper rack shelves for them yet. They're both going to be Liebert GXT3 1350W units. No more playing with fire...


----------



## TopHatProductions115 (Mar 25, 2021)

Once the UPS's get here, I'll be able to get both the server and the workstation protected. Also found out that one of the DIMM slots on the T7500's motherboard went out, so I swapped a 4GB stick for a 16GB DIMM I had laying around. 


MariaDB works well with hMailServer so far, and now I'm trying to add a CA to the project, for future security considerations:

https://www.azure365pro.com/install-and-configure-certificate-authority-in-windows-server-2016/

Once the CA is ready, it'll be time to get crackin' on the Artix Linux VM.


----------



## TopHatProductions115 (Mar 27, 2021)

vCenter certificate replacement, with an MS AD CA:

https://kb.vmware.com/s/article/2112014
https://kb.vmware.com/s/article/2112277?lang=en_US
Also managed to actually get ejabberd working in one go, from what I can tell. Looks like this one could be here to stay. One less thing to save for later...


----------



## TopHatProductions115 (Mar 30, 2021)

Once the Linux VM is off the ground, the time for this will be near:

https://github.com/shanyungyang/esxi-unlocker
Mojave will be the first test Candidate, as planned. But first, Artix OpenRC...


----------



## TopHatProductions115 (Mar 31, 2021)

Firstly, I need to re-install my Artix OpenRC VM - got the partitions all wrong. Also need to get the WiFi adapter back in the server, for the Linux VM (router/NAT). I'll do those tasks sometime this week, after work.

Then I need to dust out and service my first UPS this weekend. It arrived this afternoon. When I plugged it in, it showed the following symptoms:


- beeps every 5-6 seconds
- Fault and AC Input indicators glow steadily
- Battery indicator blinks
- Bypass and Inverter indicators are off

A second, pristine UPS should be arriving in the next week or so. I'll use that one on the server when it arrives, and clean up the current one for the T7500.


----------



## TopHatProductions115 (Apr 2, 2021)

New partition setup for Artix OpenRC VM:

300GB SAS HDD
8MB, unformatted, [!mnt_point] (bios_grub)
512MB, FAT32, /boot;/boot/efi (esp)
8GB, linuxswap, [!mnt_point] (swap)
256GB, EXT4, / (root;system)
32GB, XFS or ZFS, /home (home)

8TB SAS HDD
/srv, still deciding on size and filesystem. Would like to use ZFS possibly
/var, still deciding on size and filesystem. Would like to use ZFS possibly


On a side note, seriously considering Docker, *podman*, or similar for containerisation, to keep things a bit more isolated and cleaner.


----------



## TopHatProductions115 (Apr 6, 2021)

Okay, finally got around to updating Technitium DNS. The newest installer, for v6, doesn't appear to allow selection of a different install location in the GUI. So I grabbed the portable installer and a copy of .NET v5 instead. Installed .NET v5 first. Then, made a .zip backup of the previous install (because reasons). Nuked everything in the DNS server folder except /config and the backup .zip. Finally, copied the new DNS server files over to the DNS server folder. Also had to register a new Windows service, since the old one does not work with the newer version. Not too difficult, if I say so myself - just tedious. And since I'll have to do the process by hand from here on, I may have to look into a way of automating it myself. May also need to see if the DNS server can have a self-signed (CA) certificate.
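Since the upgrade has to happen by hand each time, the manual steps above could be scripted. A rough sketch of the same procedure - the paths and function name are mine, not anything Technitium ships, and Python is used only because it runs on Windows as well:

```python
# Sketch of the manual upgrade procedure: back up the old install to a
# .zip, wipe everything except /config, then copy the new files in.
# (Re-registering the Windows service is left out - that step is manual.)
import shutil
import zipfile
from pathlib import Path

def upgrade_dns_server(install_dir, new_files_dir, backup_zip):
    install_dir = Path(install_dir)
    # 1) make a .zip backup of the previous install (because reasons)
    with zipfile.ZipFile(backup_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in install_dir.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(install_dir))
    # 2) nuke everything in the server folder except /config
    for entry in install_dir.iterdir():
        if entry.name == "config":
            continue
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
    # 3) copy the new server files over to the server folder
    for src in Path(new_files_dir).rglob("*"):
        dest = install_dir / src.relative_to(new_files_dir)
        if src.is_dir():
            dest.mkdir(parents=True, exist_ok=True)
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
```

Keep the backup .zip outside the install folder, or step 1 will try to archive it into itself on the next run.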

Also waiting on a second drive cage and mini-SAS SFF-8088 to SATA forward breakout cable to arrive, so I can put the 4TB SAS HDDs to use with the Linux VM. If the DL580 G7 can handle powering 2 drive cages at once, I'll give the 16TB drive cage to the Linux VM - used as either a RAID10 or RAID0 (OpenZFS pool) - and let nextcloud have free rein over it.

New partition setup for Artix OpenRC VM (GPT, BIOS), as of a few nights ago:

300GB SAS HDD
8MB, *unformatted*, [!mnt_point] (bios_grub)
512MB, *FAT32*, /boot (esp)
256GB, *EXT4*, / (root, system)
32GB, *EXT4*, /home (home - would like to convert to ZFS in the future)
8GB, *linuxswap*, [!mnt_point] (swap)

8TB SAS HDD
5TB, *EXT4*, /srv, (Would like to convert to ZFS in the future)
2TB, *EXT4*, /var, (Would like to convert to ZFS in the future)

Coming soon - either:

16TB (4x4TB), *ZFS RAID0*, /nextcloud
or...

8TB (4x4TB), *ZFS RAID10*, /nextcloud
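The trade-off between those two options is just the usual stripe-vs-mirror arithmetic; a quick sanity check (ignoring ZFS metadata overhead and TB/TiB differences):

```python
# Rough usable capacity for the two candidate pool layouts above:
# a plain stripe (RAID0-style) keeps all raw capacity, while striped
# mirrors (RAID10-style) cut it in half.
def usable_tb(disks, disk_tb, mirrored):
    raw = disks * disk_tb
    return raw / 2 if mirrored else raw

print(usable_tb(4, 4, mirrored=False))  # 4x4TB stripe -> 16.0
print(usable_tb(4, 4, mirrored=True))   # 4x4TB striped mirrors -> 8.0
```

The stripe doubles the space, but losing any one of the four disks loses the whole pool; the mirrored layout survives a single-disk failure.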

Then, this:

https://hub.docker.com/_/nextcloud
https://blog.fossasia.org/deploying-yacy-with-docker-on-different-cloud-platforms/

It's all coming together now...


----------



## TopHatProductions115 (Apr 14, 2021)

The test with the 2nd drive cage installed didn't go too well two nights ago. When connected, the drives in the 2nd cage did not appear in ESXi. In addition, only 3 of the HDDs from the original/first cage showed up, and one of those 3 only appeared intermittently. I think I may have encountered a power issue. While the activity indicators on both cages did light up, they weren't indicative of the true status of the drives. I also checked the ESXi kernel logs (Alt+F12) during runtime, and saw some interesting errors. I tried rebooting the server, to see if it needed some time to get acquainted with the new hardware, but two reboots did nothing. Everything appears to be working as expected after removing the 2nd drive cage. If it had been a bad data cable on the 2nd drive cage, I would expect the issue not to affect the drives from the 1st cage - but perhaps I've overlooked something. Now I'm stuck trying to figure out how to power the second drive cage, since internal power appears to be off the table. Perhaps an external SATA-only PSU or DC power supply?

On a different note, I also can't seem to get the WiFi NIC to show up in ESXi - which leaves me with 3 conclusions:

- the card needs drivers
- the card needs to be re-seated (for the 7th time)
- the card is DOA and needs to be replaced

The first one seems most likely, seeing that ESXi may need drivers for anything that wouldn't be found in a normal enterprise environment. The second one seems unlikely because of how many times I've already attempted that solution. The third one is the worst-case scenario, and the one that would incur the most up-front monetary cost. If I do have to install drivers for the wireless NIC in ESXi, a backup of the host config needs to be made first. Otherwise, I'll be in hot water if the installation fails. I'll be attempting to use a 3rd party Linux driver in ESXi, with no way to know in advance whether it'll work.

On a side note, the second drive cage was the only real way I was ever going to get to play with ZFS in Linux. That would have been a pool of four SAS HDDs that I could have experimented with, using ZFS's RAIDz options. Since that's not in the cards at the moment, the extra 16TB of SAS storage is back to sitting without a use.


----------



## TopHatProductions115 (Apr 22, 2021)

Just purchased a GXT3-2000RT120 without the batteries. Waiting for it to arrive in the mail. Then need to see if I can get it working in the next few weeks with some fresh batteries...


----------



## TopHatProductions115 (Apr 28, 2021)

The GXT3-2000RT120 arrived in the mail this afternoon, in what appeared to be pristine condition. I went on and purchased 4 batteries for it, and am now waiting for those to arrive next. This Friday, I will need to purchase 2 rack shelves. One will be for a new printer that was gifted to me recently (a Lexmark Prevail Pro 705), in addition to the drive cage that sits on the back of the DL580 G7. The other will be for the T7500 to sit on. The UPS will end up sitting on the floor for a while, until I can get the rack mount kit for it in a few months. From what I can tell, the new UPS will be more than capable of handling every device on the rack, which will save me space. Nice not needing to consider buying a second UPS. Current rack setup plan thus far:

Top sliding shelf (S1):
1-2x Kingwin MKS-435TL, 1x Lexmark Prevail Pro 705, router/AP (if applicable)

1x HPE ProLiant DL580 G7 (S2)
Mid sliding shelf (S3):
1x Kenwood 104AR

Lower sliding shelf (S4):
1x Dell Precision T7500

Bottom drawer(s) (S5):
Spare parts, tools, etc.

Bottom sliding shelf (S6):
1x Liebert GXT3-2000RT120

1-2 PDUs are planned for this setup as well. Just a matter of time. The Kenwood and Liebert will not have shelves until at least later this year, due to budget constraints. The rack drawers are in the same category as of now.

On a different note, getting open-vm-tools installed onto Artix+OpenRC is proving to be a fun little challenge. Almost tempted to write my own init script for it...


----------



## TopHatProductions115 (May 1, 2021)

I bought 2 of the 4 planned shelves yesterday, from here:

https://www.ebay.com/itm/383623082900
Now I'm waiting for them to arrive in the mail. Also have UPS batteries to install today.


----------



## TopHatProductions115 (May 8, 2021)

More coming soon, once I get up the energy to power on the beast tonight

https://linustechtips.com/status/296102/


----------



## TopHatProductions115 (May 9, 2021)

The one thing that always causes trouble is when I have to fiddle with that Mini-SAS SFF-8088 to SATA breakout cable. If I mess with it too much and accidentally damage it, that's another 15-20 USD down the drain. Not saying that it's inevitable, but I do treat the cable pretty badly at times. The last rack shelf installation may have damaged the previous cable a bit. I do have a spare cable this time, since I still can't connect the other drive cage. It just means I'm now out of spare breakout cables to trash - the next one has to come out of my paycheck. Happens about every 60 days with my luck XD Really have to look out for that...


----------



## TopHatProductions115 (May 9, 2021)

Tasks that I want to get done tonight, assuming nothing goes wrong:

- Installing the nVIDIA drivers for a GRID K520
- Installing a new terminal emulator (terminology)
- Installing a new file manager (nemo)
- Installing a browser (unGoogled Chromium)
- May add it to the AD domain I have running as well
- Installing docker for container management
- Adding nextcloud via docker

Time to see how helpless I really am XD


----------



## TopHatProductions115 (May 9, 2021)

Okay, I blew through most of the tasks set out for today. But a few major ones still remain:

- Installing the nVIDIA drivers (GRID K520)
  https://wiki.archlinux.org/title/NVIDIA#Installation
  https://forum.artixlinux.org/index.php/topic,1320.0.html
- Joining the Artix VM to the AD domain
  https://wiki.archlinux.org/title/Active_Directory_integration#Needed_Software
- Installing docker
- Installing nextcloud and YaCy Grid

Those all can take hours each on their own. Glad to get the other tasks out the way first, so I can have an easier time with those in a bit.


----------



## phill (May 18, 2021)

Looks like you're making a load of progress here with the build - can't wait to hear it's finally up and running  

Amazing detail as well, thank you for sharing


----------



## TopHatProductions115 (May 20, 2021)

Just paused Windows Updates for one of the server client devices until early June.

Also migrated to KeePassXC. Looks like I'm using the same app on all platforms from here on, for consistency purposes - even in the VMs.

KeePass is no more...


----------



## phill (May 21, 2021)

I've had very little experience with KeePass.. But are there any differences between the two or is it just name changes?


----------



## TopHatProductions115 (May 22, 2021)

phill said:


> I've had very little experience with KeePass.. But are there any differences between the two or is it just name changes?


I used to use KeePass on Windows, and KeePassXC on all other platforms (MacOS, Linux). However, due to a recent technical issue I encountered with KeePassHTTP, I ended up ditching it in favour of KeePassXC on all platforms instead. Back to where I intended things to be, I suppose - consistency across all environments. No need to change muscle memory up when I jump from a Windows laptop to a MacBook...


----------



## phill (May 22, 2021)

I know what you mean - so many different passwords to remember, and nowhere really trustworthy enough for me sometimes to put them...   If only we could do without them, I guess


----------



## TopHatProductions115 (May 22, 2021)

phill said:


> I know what you mean - so many different passwords to remember, and nowhere really trustworthy enough for me sometimes to put them...   If only we could do without them, I guess


What I want to do is have one password DB that's accessed by KeePassXC (on all clients), synced between multiple VMs and computers by SyncThing (also on all clients), over an encrypted VPN connection (self-hosted).


----------



## TopHatProductions115 (May 24, 2021)




----------



## TopHatProductions115 (Jun 5, 2021)

Just switched jobs, and am working on a Linux VM with a GRID K520 attached. This is gonna take some time. The Linux VM did not like my last attempt to install nVIDIA drivers via pacman...


----------



## TopHatProductions115 (Jul 15, 2021)

Okay, a lot has happened since I last posted here:

- "It has been a long month since the la…" - Linus Tech Tips
- "2021 Project Rack Plan Decided to do…" - Linus Tech Tips
- "For all server and storage enthusiast…" - Linus Tech Tips

Not much of a fun month or two XD Here's to ditching the previous dry spell…


----------



## TopHatProductions115 (Jul 16, 2021)

Currently trying to improve the (m)ass storage situation :3

https://www.techpowerup.com/forums/threads/storage-enclosure-suggestions.284606/


----------



## TopHatProductions115 (Jul 17, 2021)

Time to get help...

https://forum.artixlinux.org/index.php/topic,2856.new.html


----------



## TopHatProductions115 (Jul 29, 2021)

May consider doing this once I add the Linux VM to my AD domain:

https://blog.ndk.name/linux-ssh-authentication-against-active-directory-without-joining-the-domain/

Also hoping that the disk shelf from Project Rackcentre can eliminate the need for the drive cage(s) I've been relying on for so long...


----------



## TopHatProductions115 (Jul 29, 2021)

Just made a few part swaps, due to inventory changes.

- The Kingwin MKS-435TL's now belong to Project Personal Datacentre (2nd node)
- The DL580 G7 now uses a Dell EMC KTN-STL3 (as shown here) for direct-attached local storage
- The TPM chip for the DL580 G7 has been installed, and will be used for security purposes in the future
- The DL580 G7 will actually be keeping the GTX 1080, due to a lack of compatible power cables
- Project Personal Datacentre (2nd node) will have the Titan Xp for the foreseeable future, until I get the PCIe 8+8 cable for the DL580 G7
- The Arctic F9 PWM 92mm fans have been moved to Project Personal Datacentre (2nd node) as well

Getting ready to update parts listings to reflect this in a few...


----------



## TopHatProductions115 (Aug 2, 2021)

Re-made the Linux VM, so I can do it properly this time around. Will be implementing backups this week...


----------



## TopHatProductions115 (Aug 7, 2021)

Managed to remove the need for OTP-based 2FA clients like WinAuth, and am now looking into whether I can replace Ditto (and its Linux companion) with CopyQ. Time to start looking into more cross-platform applications and sync-friendly solutions in general.

Also just enabled shared clipboard and drag-n-drop for my VMs, for easier use through VMware Workstation Pro. The steps were easy enough, and now I can work a little quicker as a result.

The only thing left to do is set up proper backups for the Linux VM, so I can safely attempt the driver install for the GRID K520...


----------



## TopHatProductions115 (Aug 25, 2021)

The GRID K520 is a go. The Sunshine gamestreaming server is next. Then, Docker+Compose and Nextcloud...


----------



## TopHatProductions115 (Aug 26, 2021)

Sunshine gamestreaming server will take a bit to sort out:

https://forum.artixlinux.org/index.php/topic,2957.0.html
At least Docker+Compose is installed. Need to install and configure a Nextcloud instance soon...


----------



## TopHatProductions115 (Aug 31, 2021)

A second ESXi node will be coming in the 2021/2022 transition, hopefully...

https://linustechtips.com/status/302643/
Windows 11's requirements seem to have ditched 1st gen Threadripper, so no need to even consider a dedicated Windows machine in the future.


----------



## TopHatProductions115 (Sep 19, 2021)

Titan Z time!


----------



## TopHatProductions115 (Sep 19, 2021)

On a less exciting note, Docker time! 


Unable to start Portainer Instance - forums.docker.com

> Sorry if I didn't categorise this topic correctly. I'm in a bit of a bind… I'm trying to use Docker on Artix OpenRC, but am failing pretty hard. ➜ ~ docker-compose version docker-compose version 1.29.2, build unknown docker-py version: 5.0.2 CPython version: 3.9.7 OpenSSL version: OpenSSL...

----------



## TopHatProductions115 (Sep 21, 2021)




----------



## TopHatProductions115 (Sep 21, 2021)

Last note for the night, I've decided to replace the GRID K520 with the GTX Titan Z. Need this to work in Linux and macOS, while also being supported in Sunshine when the time comes. Can't currently do that with the GRID K520 for some reason. Need to work that out later, when the rest of the project has caught up...


----------



## TopHatProductions115 (Sep 24, 2021)

With all of the difficulty I've had getting realmd installed to Artix, I'm beginning to think that I simply shouldn't push any further with adding the VM to AD...

https://forum.artixlinux.org/index.php/topic,3066.new.html
I'll give it another 2 weeks before I make a decision.


----------



## TopHatProductions115 (Sep 25, 2021)

Background context: A few days ago, the GRID K520 was swapped out for the Titan Z. The Linux VM had half of the Titan Z passed through to it. However, no displays/dummy plugs were attached to it afterward.

An unexpected development for the Linux VM has occurred. While running through regular maintenance and installing updates, I decided to try running Sunshine once more (sudo sunshine). The logs kept mentioning permission denied for pulseaudio whenever I attempted to remote in via Moonlight. In the past, running Sunshine without sudo had never worked - the attempt would always error out. But this time, it ran without error.

When I attempted to remote in, I actually made it to the desktop - and it had audio passthrough. It was streaming the same video out that the VMware adapter would show. Somehow, I can stream a screen/display that isn't rendered by an nVIDIA card.

More Nextcloud/MariaDB troubleshooting tomorrow...


----------



## TopHatProductions115 (Sep 27, 2021)

Okay, that took a bit longer than expected. I had to tweak the DB for Nextcloud before I could attempt installation. But then, a mystery power outage struck. The UPS kicked in, giving me enough time for an emergency power-off procedure, but the Nextcloud install got interrupted. So, I had to go in and:

- remove the container(s)
- remove the container(s)' volumes and networks
- clear the directories I had mounted to the container(s)
- drop and recreate/reconfigure the DB
- re-attempt installation

It went something like this in MariaDB:


```
DROP DATABASE nextcloud;

CREATE DATABASE nextcloud;

GRANT ALL ON nextcloud.* to 'admin'@'remotehost' IDENTIFIED BY 'password' WITH GRANT OPTION;

ALTER DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

SET GLOBAL innodb_read_only_compressed = OFF;

FLUSH PRIVILEGES;
```
By the time I was done, hours had passed. Still have to finish configuring AD/LDAP integration tomorrow...
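For reference, the Nextcloud-plus-MariaDB pairing described above maps to a compose file shaped roughly like this. The image tags, credentials, and host paths are placeholders, not my actual config:

```yaml
# minimal sketch - credentials and paths are placeholders
version: "3"
services:
  db:
    image: mariadb:10.6
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: admin
      MYSQL_PASSWORD: password
    volumes:
      - ./db:/var/lib/mysql
  app:
    image: nextcloud
    depends_on:
      - db
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: admin
      MYSQL_PASSWORD: password
    volumes:
      - ./nextcloud:/var/www/html
```

The `MYSQL_*` variables on the app side just have to match what the db was created with - which is exactly what the GRANT statement above was re-establishing after the outage.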


----------



## TopHatProductions115 (Oct 3, 2021)

It would seem that there are still a few issues to address with the Docker setup on Artix OpenRC. I pretty much have to re-install docker, compose, and the openrc scripts after a full system upgrade at times:


```
# remove the packages, then reboot
sudo pacman -R docker-openrc docker-compose docker
shutdown -r 0

# after the reboot: re-install everything and restart the service
sudo pacman -S docker docker-openrc
sudo pacman -U docker-compose.tar.zst
sudo rc-service docker start
```

After that, everything else tends to be fine. Portainer and Redis run as if nothing has happened. However, Nextcloud is a different story. After a clean install and no other configuration changes, I get this whenever I attempt login:



> Internal Server Error
> The server was unable to complete your request.
> 
> If this happens again, please send the technical details below to the server administrator.
> ...



Also need to look into backup solutions for this setup:

https://stackoverflow.com/questions/22378777/how-to-take-container-snapshots-in-docker
Time to see if the Nextcloud community can help me figure this out:

https://help.nextcloud.com/t/unable-to-login-after-installation-and-restart/124586
And about that push for LDAP integration…


On a better note, I found a way to get Sunshine to autostart on login. Used the same solution for Syncthing and F@H as well. It’s specific to the DE that I’m using, sadly. So it won’t be useful to all Linux users following this. Only if you’re using Xfce, I think.
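If the DE-specific route ever breaks, one more portable fallback is the freedesktop XDG autostart directory, which most DEs (including Xfce, via its "Session and Startup" settings) honour. A hypothetical entry for Sunshine would look like:

```
# ~/.config/autostart/sunshine.desktop
[Desktop Entry]
Type=Application
Name=Sunshine
Exec=sunshine
X-GNOME-Autostart-enabled=true
```

The same shape would work for the Syncthing and F@H entries as well.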


----------



## TopHatProductions115 (Oct 4, 2021)

More news on the Docker front:

https://forums.docker.com/t/docker-daemon-failed-to-start/116007
Also considering dropping Redis from the Nextcloud compose file, due to a suggestion...


----------



## TopHatProductions115 (Oct 5, 2021)

Yeah, the Nextcloud instance needs more work. Docker now appears to be doing fine, surprisingly (L1T project mirror):

https://help.nextcloud.com/t/unable-to-login-after-installation-and-restart/124586


----------



## TopHatProductions115 (Oct 6, 2021)

The real test begins now:

Need to try spinning up Nextcloud, to see if that error still shows up after this...


----------



## TopHatProductions115 (Oct 6, 2021)

On a side note, Portainer made me pacman -Syu tonight. Couldn't access it until I caved and did the system upgrade. Was worried that something might break.


----------



## TopHatProductions115 (Oct 8, 2021)

If you set up your ESXi and vCenter with just IP addresses initially, and added domain names after the fact, this may be of interest to you:

https://docs.vmware.com/en/VMware-v...UID-F46DBE63-F04E-42A1-B940-63A8F5B86ACF.html
https://kb.vmware.com/s/article/2112283
For setting that, and redoing your server's certs.


----------



## TopHatProductions115 (Oct 8, 2021)




----------



## TopHatProductions115 (Oct 14, 2021)

https://www.reddit.com/r/vmware/comments/q86wdw

Light mode BURNS XD


----------



## TopHatProductions115 (Nov 11, 2021)

It has been a long month since the last update, and a lot has changed. Here's what has been completed thus far:

- activated EaseUS Todo Backup Server, for easier backups of Windows Server 2016
- created AD integration/query users for Nextcloud, ejabberd, and FreePBX
- initiated AD integration config for ejabberd
- updated, broke, and revived the Artix VM
- kicked F@H from the Artix VM, to re-add it as a container later on
- did initial planning for the move to ZFS (the entire Artix VM)
- purchased the MikroTik RB4011iGS+RM
- initiated Samba setup for the Artix VM

And now I'm preparing to move ejabberd to a Docker container. Gonna have to change the OP once the dust settles. Still more to announce, once things get under way...


----------



## TopHatProductions115 (Nov 11, 2021)

Just received a MikroTik RB4011iGS+RM in the mail, purchased a MikroTik CCR2004-1G-12S+2XS, and put in an offer for a MikroTik Audience RBD25GR-5HPac, to act as the wireless gateway to my serverside network. Also purchased 50x 12-24 rack screws+cage nuts and 50x 10-32 rack screws+cage nuts. That should be enough to mount most of my upcoming equipment...


----------



## repman244 (Nov 11, 2021)

Any particular reason why you chose a 580 and not a DL380?


----------



## TopHatProductions115 (Nov 11, 2021)

repman244 said:


> Any particular reason why you chose a 580 and not a DL380?



Needed the PCIe slots and space for multiple GPUs and other add-in cards. I don't remember the 380 being as spacious. The DL560 would be closer, but still not quite big enough.


----------



## repman244 (Nov 11, 2021)

TopHatProductions115 said:


> Needed the PCIe slots and space for multiple GPUs and other add-in cards. I don't remember the 380 being as spacious. The DL560 would be closer, but still not quite big enough.



Makes sense. Yes, the 380 does not have that much space. I guess an alternative would be an ML350p G8, but if you need the processing power it's not in the same class.


----------



## TopHatProductions115 (Nov 12, 2021)

Just joined the Artix OpenRC VM to the Windows Server AD, with Samba. We're one step closer to getting the Artix VM ready for production use.

Now I need an automated way to assign the following to existing AD objects, and new ones on-the-fly:

- GID (primary group ID)
- UID (user ID number)
- LSH (user login shell)
- UHD (user's *nix home directory)

These RFC 2307 attributes are going to be required for a single identity across the setup in the future, if I go with Samba. With this, I will be able to enhance the user experience further...
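On the consumption side, here's a sketch of the smb.conf idmap settings that make a Samba domain member honour those attributes. Realm/domain names and ID ranges below are placeholders, not my actual config:

```
[global]
    security = ads
    realm = EXAMPLE.LAN
    workgroup = EXAMPLE
    # default backend for BUILTIN and anything outside the domain
    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
    # read uidNumber/gidNumber/loginShell/unixHomeDirectory from AD (RFC 2307)
    idmap config EXAMPLE : backend = ad
    idmap config EXAMPLE : schema_mode = rfc2307
    idmap config EXAMPLE : range = 10000-999999
    winbind nss info = rfc2307
```

The automated assignment on the AD side is still the open question.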


----------



## ThaiTaffy (Nov 12, 2021)

TopHatProductions115 said:


> Just joined the Artix OpenRC VM to the Windows Server AD, with Samba. We're one step closer to getting the Artix VM ready for production use.
> 
> Now I need an automated way to assign the following to existing AD objects, and new ones on-the-fly:
> 
> ...


https://docs.cyberark.com/Product-D...counts|Classic Interface|Accounts Feed|_____5 
might be worth looking at.


----------



## TopHatProductions115 (Nov 13, 2021)

ThaiTaffy said:


> https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/ConfigureAutomaticProvisioning.htm?TocPath=End User|Privileged Accounts|Classic Interface|Accounts Feed|_____5
> might be worth looking at.


Need to find the downloads page for the CyberArk Vault and PSM, so I can test them out over the next few weeks.

EDIT: Need to request a demo and a quote.


----------



## ThaiTaffy (Nov 13, 2021)

Not sure what you're running, but https://www.jumpserver.org/index-en.html is an open-source alternative. Personally, as I only run a few users, I use basic PAM on my LAN and then do all my external access through ZeroTier.


----------



## TopHatProductions115 (Nov 13, 2021)

ThaiTaffy said:


> Not sure what you're running, but https://www.jumpserver.org/index-en.html is an open-source alternative. Personally, as I only run a few users, I use basic PAM on my LAN and then do all my external access through ZeroTier.


Currently trying to stay on-prem if possible for most of my infrastructure. Will that be something I can host myself, or will I be using an external cloud service with my AD instance?


----------



## TopHatProductions115 (Nov 13, 2021)

https://www.reddit.com/r/mikrotik/comments/qt5rij

More networking coming soon


----------



## TopHatProductions115 (Nov 18, 2021)




----------



## TopHatProductions115 (Nov 20, 2021)

I got AD/LDAP integration working in Nextcloud, and got NGINX Reverse Proxy Manager working (had to use its built-in DB). HTTPS and Asterisk coming next...
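For reference, the proxy manager boils down to a small compose service. This is a sketch (image tag, ports, and volume paths are assumptions); with no external DB service defined, it falls back to its built-in SQLite database:

```yaml
version: "3"
services:
  proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      - "81:81"     # admin web UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```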


----------



## ThaiTaffy (Nov 21, 2021)

HTTPS is normally pretty high on my to-do list, but I'm stupid, and once I move onto NGINX I tend to lock myself out with some setting or other. I've just moved over to a pfSense router and Home Assistant running on Proxmox for my 24/7 server; the NAS and anything else is on demand through WOL. Hopefully, with a more in-depth router, I'll have better luck.


----------



## TopHatProductions115 (Nov 23, 2021)

More networking equipment added to the rack last night. Waiting for one more piece to arrive. Then, Verizon needs to get my service activated next month.


----------



## Solaris17 (Nov 23, 2021)

If you can, go with 3CX; it was leaps ahead of Asterisk. I was happy to switch to it.


----------



## TopHatProductions115 (Nov 24, 2021)

A decent PDU never hurts...


----------



## TopHatProductions115 (Nov 28, 2021)

The disk shelf is now connected to the PDU as well. PDU now handles Networking and Storage. Only items directly connected to the UPS are servers. Titan Xp will go in the DL580 G7 once the rest of the VMs are ready for 24/7. Also, Threadripper + Titan V:


----------



## TopHatProductions115 (Nov 29, 2021)

Installing ESXi 6.7 on the Threadripper is proving to be a headache. I get nothing but a black screen whenever I attempt it. Will have to try again once I'm home...


----------



## TopHatProductions115 (Nov 30, 2021)

The new node has ESXi 6.7u3 installed. Connecting to vCenter tomorrow, when time permits. Had to swap out the SolarFlare 9021-r7 4a for a SolarFlare SFN5322F that I had sitting in the spare parts inventory. The previous NIC kept getting the server either stuck at POST code 92 (PCI init, iirc) or at SolarFlare Boot Manager screen (with no way to skip to OS). After enabling PCI passthrough on the Titan V and the swapped NIC, the new host had an issue rebooting on its own. Had to hit the Reset switch in order to get back into ESXi. Will have to look out for that if I ever add any new devices. But, adding new devices requires me to be in the same room as the server, so not that problematic. This is the first 24/7 ESXi host that I'll run.
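One knob I've seen suggested for passthrough reset weirdness on ESXi is the passthru.map override file, which controls the reset method applied to passed-through devices. Treat this as a sketch, not a fix I've verified; the entries below are examples (NVIDIA's PCI vendor ID is 10de, Solarflare's is 1924):

```
# /etc/vmware/passthru.map
# format: vendor-id  device-id  resetMethod  fptShareable
10de  ffff  d3d0  false
1924  ffff  d3d0  false
```

The host needs a reboot after editing it.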


----------



## TopHatProductions115 (Nov 30, 2021)

The new node has a name!


----------



## TopHatProductions115 (Dec 4, 2021)

The new Ethernet bridge is in, but I need some more SFP+ cables to hook it in properly...


----------



## TopHatProductions115 (Dec 6, 2021)

The plans have changed for the Linux VM. Nextcloud's built-in apps and services are making previous plans a bit redundant. ejabberd (and XMPP in general) will probably get jettisoned from the project entirely. The hunt for a decent PBX+SMS container solution still rages on, and interest in YouPHPTube has waned, due to a lack of potential userbase. Unless there are any people interested in starting a new video platform? YaCy Grid may end up being the last container deployed before moving on to the macOS VM.

On a side note, it looks as though Nextcloud has SMS apps that can remove the need for Google Voice/Hangouts as well. Only missing the VoIP/PBX functionality at this point.


----------



## TopHatProductions115 (Dec 14, 2021)

I've decided to start work on the macOS VM early, to see if I can make progress elsewhere while researching the YaCy Grid container:


https://www.reddit.com/r/hackintosh/comments/rfyy90


----------



## TopHatProductions115 (Dec 23, 2021)

The rack drawer is here!


----------



## TopHatProductions115 (Dec 24, 2021)

https://www.reddit.com/r/hackintosh/comments/rnnqyk

https://www.reddit.com/r/docker/comments/rnnzba

Getting further along...


----------



## ThaiTaffy (Dec 24, 2021)

TopHatProductions115 said:


> https://www.reddit.com/r/hackintosh/comments/rnnqyk
> 
> 
> ...


Check that your MariaDB isn't running over HTTP instead of HTTPS. Pretty much everything I run is on HTTPS, but some databases (I know InfluxDB for sure) will run HTTP. It took me hours to get my Grafana to work because of that doozie. Should be easy enough to confirm the host address, if it's working for other clients.


----------



## TopHatProductions115 (Dec 25, 2021)

ThaiTaffy said:


> Check that your MariaDB isn't running over HTTP instead of HTTPS. Pretty much everything I run is on HTTPS, but some databases (I know InfluxDB for sure) will run HTTP. It took me hours to get my Grafana to work because of that doozie. Should be easy enough to confirm the host address, if it's working for other clients.


Found this:

https://mariadb.com/kb/en/securing-connections-for-client-and-server/
Is this what you are referring to?
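For posterity, the server-side settings from that page boil down to a few my.cnf lines. A sketch (the cert paths are placeholders, and `require_secure_transport` needs a reasonably recent MariaDB):

```
[mariadb]
ssl_cert = /etc/mysql/certs/server-cert.pem
ssl_key  = /etc/mysql/certs/server-key.pem
ssl_ca   = /etc/mysql/certs/ca-cert.pem
# optionally reject any client that doesn't negotiate TLS
require_secure_transport = ON
```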


----------



## TopHatProductions115 (Dec 27, 2021)

Just bought:

- 8x HGST HUSMM8040ASS200 (or HUSMM8040ASS201) MLC 400GB SSDs


----------



## TopHatProductions115 (Dec 28, 2021)

Moving forward with the macOS VM:

https://www.insanelymac.com/forum/topic/350155-macos-mojave-vm-on-esxi-65u3/


----------



## TopHatProductions115 (Dec 30, 2021)




----------



## TopHatProductions115 (Dec 31, 2021)

I guess this counts as multitasking?

https://help.nextcloud.com/t/configuring-nextcloud-mail-with-default-folders/130244
Reddit link that keeps auto-embedding XD
https://www.insanelymac.com/forum/topic/350155-macos-mojave-vm-on-esxi-65u3/
The Titan V is also giving me trouble on ESXi 6.7, so it looks like Threadripper will have to wait. On a side note, I'm also trying to set up a KMS server, since Windows 10 Enterprise LTSC (Titan Xp) appears to require KMS, and won't activate via Microsoft's servers. Perhaps I need to log a Microsoft account into that VM sometime today...
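The client-side plumbing for KMS is just slmgr. A sketch (the host name and port are placeholders, and the KMS host itself needs a proper KMS host key installed via `slmgr /ipk` first):

```
:: on the Windows 10 Enterprise LTSC VM:
slmgr /skms kms.example.lan:1688
slmgr /ato
slmgr /dlv
```

Worth noting KMS has minimum activation-count thresholds, so a small lab may still refuse to activate.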


----------



## TopHatProductions115 (Jan 1, 2022)

Resolved the Nextcloud issue, the other issues still remain. Focusing on macOS VM and KMS for now...


----------



## TopHatProductions115 (Jan 1, 2022)

Time for yet another troubleshooting thread:

> ESXi 6.7 with nVIDIA Titan V (PCI Passthrough) (communities.vmware.com)
> I'm not sure if I'm posting this in the right subforum/location. I apologise in advance if this thread needs to be moved. It has to do with PCI Passthrough of an nVIDIA GPU. I'm currently working with a Threadripper computer that has the following specs: CSE :: Rosewill RSV-L4500U (4U...


----------



## TopHatProductions115 (Jan 2, 2022)

More things I'm doing on the side, to streamline domain UX:

A lot of the benefits won't work until Windows 10 is activated, sadly. So, it's all just prep work.

https://docs.microsoft.com/en-us/an...om-microsoft-accounts-to-domain-accounts.html


----------



## TopHatProductions115 (Jan 5, 2022)

So, there's good news and bad news. Good news is, the Titan Z works on the macOS VM and I got to see it in action with Remotix. The bad news is, I can't change the display resolution. Also can't format or use the raw disks that I passed to the VM. Disk Utility and diskutil appear to throw the same error(s):


```
➜  ~ diskutil list                         
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *297.9 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:                 Apple_APFS Container disk1         297.7 GB   disk0s2

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +297.7 GB   disk1
                                 Physical Store disk0s2
   1:                APFS Volume Macintosh HD            45.8 GB    disk1s1
   2:                APFS Volume Preboot                 23.3 MB    disk1s2
   3:                APFS Volume Recovery                507.6 MB   disk1s3
   4:                APFS Volume VM                      20.5 KB    disk1s4

/dev/disk2 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   *4.0 TB     disk2

/dev/disk3 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                                                   *7.9 TB     disk3

➜  ~ sudo diskutil unmountdisk disk2
Password:
Unmount of all volumes on disk2 was successful
➜  ~ sudo gpt destroy disk2
gpt destroy: disk2: error: device doesn't contain a GPT
➜  ~ sudo gpt create -f disk2
➜  ~ sudo diskutil partitiondisk disk2 1 gpt apfs "SmallDisk" R
Started partitioning on disk2
Unmounting disk
Creating the partition map
Error: 5: Input/output error
➜  ~
```


----------



## TopHatProductions115 (Jan 5, 2022)




----------



## TopHatProductions115 (Jan 5, 2022)

I decided to give Paragon's software (https://www.paragon-software.com/hdm-mac/) a trial tonight. First, I had it initialise the HDDs as GPT. Then I created HFS+ volumes (since Paragon's software couldn't make APFS volumes). Then I had Disk Utility convert the HFS+ volumes to APFS. Now the raw disks appear to work! Coming back tomorrow to start installing PleX...
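For my own notes, the CLI shape of that workaround would look roughly like this, using disk2 from my earlier output. A sketch only; verify the exact verbs against `diskutil apfs` on your build before trusting it:

```
# lay down GPT + a journaled HFS+ volume, then convert it in place to APFS
diskutil partitionDisk disk2 GPT JHFS+ "SmallDisk" R
diskutil apfs convert disk2s2
```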


----------



## TopHatProductions115 (Jan 5, 2022)




----------



## TopHatProductions115 (Jan 6, 2022)

Considering virtual audio devices for both Linux and macOS, since the GPU's audio device didn't work well with PCI Passthrough. Still need to install DaVinci Resolve on macOS and move the previous PleX Media Library over. Then, I'll focus on xBrowserSync and NGINX Reverse Proxy Manager. YouPHPTube will undergo a final validation/testing phase after this. If the MariaDB issue can't be resolved, I'll just rely on LBRY and other existing alternatives, and shift focus to YaCy Grid as the final Docker container instance. Still need to set up the Windows 10 Enterprise VM (Titan Xp) for daily use. Also still troubleshooting the Titan V issue...

https://linustechtips.com/status/313723/


----------



## TopHatProductions115 (Jan 11, 2022)

I've decided to merge the Win10 and Remote Dev VMs. I've worked in this environment before, and it hasn't been an issue for me in the past. Saves time and resources in my case. Also, the macOS VM appears to be nearly ready for prime time...


----------



## TopHatProductions115 (Jan 12, 2022)

Getting ready to test Windows 10 Enterprise VM with this enabled:

https://kb.vmware.com/s/article/1033435


----------



## TopHatProductions115 (Jan 13, 2022)

Finished initial setup for xBrowserSync last night. Now looking to attack the last container - YaCy...

*YouPHPTube is being delayed until the rest of the server project is finished, to avoid unnecessary delays.


----------



## TopHatProductions115 (Jan 16, 2022)

Welp, here are the major changes for the project thus far:

- YouPHPTube is probably getting jettisoned, in favour of using LBRY instead
- YaCy Grid is going to be a long-term experiment, and has to be built from source
- xBrowserSync is officially part of the project now - and it's got more on the way!
- PleX has Movie and Music streaming, but doesn't have external LDAP integration
    - added Trakt.tv plugin (scrobbler) for additional functionality
- Would like to add eBooks management to Nextcloud, but may need a new container instead
- Azure Active Directory was added, to enable possible MFA in the future

The majority of the server project is complete - looking into 24/7 testing soon...


----------



## TopHatProductions115 (Jan 17, 2022)

Why couldn't drive cloning be easy?


https://www.reddit.com/r/HomeDataCenter/comments/s6idjv


----------



## TopHatProductions115 (Jan 20, 2022)

The task list has been updated:

https://linustechtips.com/status/314520/


----------



## TopHatProductions115 (Jan 23, 2022)

Okay, you guys may get a laugh from this. I was remoted into the Windows 10 Enterprise VM (equipped with a Titan Xp). I installed 3DMark, thinking I was gonna do some benchmark runs today. Instead, the entire ESXi host rebooted. All of the VMs went offline, and I'm waiting to see if the server comes back from this in one piece. Hoping this also doesn't rule out just gaming in general. If the power draw from the Titan Xp is too much, I may have to consider other options...


----------



## TopHatProductions115 (Jan 24, 2022)

Hmm...

> Questions About SanDisk Fusion IoScale 3.2TB PCIe SSD (F11-002-3T20-CS-0001) (forums.servethehome.com)
> As the title suggests, I have a few questions about the Fusion IoScale 3.2TB SSD: Is this device supported in ESXi 6.5u3 and/or 6.7u3? Will this device work without installing additional drivers? If not, where should I go to download the required drivers? What level of performance should I...


----------



## TopHatProductions115 (Jan 26, 2022)

The job is never finished:

https://linustechtips.com/status/314805/
I march onward...


----------



## TopHatProductions115 (Jan 27, 2022)

Imma stop here for now...

https://linustechtips.com/status/314861/
That's a lot :|


----------



## TopHatProductions115 (Jan 29, 2022)

What I've managed to get done thus far:

- installed a 2nd PCIe SSD
- installed a USB adapter card
- plugged in a 2nd PSU
- bought 128GB of RAM
- decommissioned Threadripper

Off to a rough week. Still looking for a good PCIe enclosure...


----------



## TopHatProductions115 (Jan 30, 2022)

First benchmarks!

> My first successful benchmark run - a… (linustechtips.com)
> My first successful benchmark run - and the server didn't reboot during the run!


----------



## TopHatProductions115 (Feb 10, 2022)

Just purchased a Magma EB7-X8G2-RAS-F (7-slot PCIe 2.0 expansion enclosure), with what appear to be two x8 ports on its expansion interface. I will most likely need to acquire the following next:

- x16 interface card
- x16 host adapter
- x16 PCIe cable

This is going to be a long one...


----------



## TopHatProductions115 (Feb 10, 2022)

The PCIe x16 Host adapter and PCIe x16 cable arrived today! Pictures in a few...

EDIT: There's 128GB more RAM on the way as well. And guess what's going in the enclosure?...


----------



## TopHatProductions115 (Feb 11, 2022)

It's finally starting to warm up where I'm at. But that also means I can't run F@H anymore, for thermal reasons. Until next autumn, the folding will be paused.


----------



## TopHatProductions115 (Feb 12, 2022)

Okay, things have changed. From what I've learned recently, the enclosure I have is only officially compatible with an x8 interface card. The hardware for x16 is way more expensive, and I don't know if I wanna buy it, only to find out the enclosure itself was only wired for x8. So, I've decided to go x8 for the time being. Here's what I have so far:

- Magma EB7-X8G2-RAS-F
- Magma PEHIFX8G2
- Molex iPass 74546-0801
- OSS-PCIe-HIB25-x8

Other parts, that will work with an x16 enclosure:

- Molex iPass 46K3726
- OSS-PCIe-HIB25-x16

Still need to find the P/N's for the PSU cables, since the Titan V uses 8-pin and 6-pin power connectors, iirc...


----------



## TopHatProductions115 (Feb 13, 2022)

The spec sheet for the server has changed, in anticipation of the PCIe enclosure that arrived recently. Still waiting for a few more components to arrive in the mail, but this is the way forward from here. Some parts have been moved from the DL580 G7 to the enclosure, to free up space and reduce power draw. Certain parts that don't receive regular use don't necessarily need to be in the DL580 G7; optional parts will live in the enclosure instead. The server has received a RAM upgrade as well, from 128GB to 256GB. The SAS HDD-to-SSD cloning operation will occur after the migration from ESXi 6.5 to 6.7. If you have any questions, feel free to ask!


----------



## dogwitch (Feb 16, 2022)

TopHatProductions115 said:


> The spec sheet for the server has changed, in anticipation of the PCIe enclosure that arrived recently. Still waiting for a few more components to arrive in the mail, but this is the way forward from here. Some parts have been moved from the DL580 G7 to the enclosure, to free up space and reduce power draw. Certain parts that don't receive regular use don't necessarily need to be in the DL580 G7; optional parts will live in the enclosure instead. The server has received a RAM upgrade as well, from 128GB to 256GB. The SAS HDD-to-SSD cloning operation will occur after the migration from ESXi 6.5 to 6.7. If you have any questions, feel free to ask!


what power draw? and what was it before?


----------



## TopHatProductions115 (Feb 16, 2022)

dogwitch said:


> what power draw? and what was it before?


The PCIe SSDs were the primary concern, but I doubt the resulting change will be that big. The server idled anywhere between 500 and 600W without F@H running. While folding, power sat at 650-750W (medium folding power). Will have to measure the metrics again later today.

Also had to make some major changes, due to a Linux update - Folding@Home is off the table for the foreseeable future:

> The last pacman -Syu kinda blew out t… (linustechtips.com)
> The last pacman -Syu kinda blew out the legacy drivers for the Titan Z, something to do with GCC and plugins. Only issue is, I don't have a backup for before the update, and it's the only thing that stopped working. In general, this VM has been meh handling GPU support. So, I'm tempted to just pu...


----------



## dogwitch (Feb 17, 2022)

TopHatProductions115 said:


> The PCIe SSDs were the primary concern, but I doubt the resulting change will be that big. The server idled anywhere between 500 and 600W without F@H running. While folding, power sat at 650-750W (medium folding power). Will have to measure the metrics again later today.
> 
> Also had to make some major changes, due to a Linux update - Folding@Home is off the table for the foreseeable future:
> 
> ...


Damn, my 32-core (in profile spec) idles at 250 to 300. Folding, it's north of 800.


----------



## TopHatProductions115 (Feb 17, 2022)

dogwitch said:


> Damn, my 32-core (in profile spec) idles at 250 to 300. Folding, it's north of 800.


I almost had a Threadripper as well, but that dream ended when ESXi didn't play well on it. I ended up cancelling that part and sticking to my current server.


----------



## dogwitch (Feb 17, 2022)

TopHatProductions115 said:


> I almost had a Threadripper as well, but that dream ended when ESXi didn't play well on it. Ended up cancelling that part and sticking to my current server


Ah, yeah, I'm using more basic VM software. I have to relearn it all, due to the older hardware I had. I spent way too much, and ASUS screwed me over on it...


----------



## TopHatProductions115 (Feb 17, 2022)

I've been forced to hold off on the OpenStreetMaps backend (routing) container, due to its insane memory usage, which appears to be what crashed my once-stable Linux VM. I'd need to move beyond 32GB of RAM for that one VM, which would be pretty crazy. The ESXi host only has 256GB of RAM, with all slots filled (4GB sticks). To move beyond that would cost me a fortune, buying 8GB and 16GB sticks off the used market. The current market does not lend itself to that errand too easily. I'll focus on just YaCy Grid for the time being.


----------



## TopHatProductions115 (Feb 27, 2022)

The PCIe enclosure is being removed from the project. I was unable to get it working, and the OEM/ODM won't communicate to assist with troubleshooting. There's no way to justify keeping it in the rack at this point.


----------



## TopHatProductions115 (Mar 6, 2022)

The current ToDo list:


```
**Current ToDo's:**
- Windows Server 2016:
        - UNIX/POSIX attributes in AD
                - <https://github.com/wruppelx/win2016setuid>
- Artix OpenRC:
        - Docker container: YaCy Grid
                - <https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/>
                - <https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml>
                - initiate web crawl
- Windows 10 Enterprise:
        - Gaming VM troubleshooting (<https://www.reddit.com/r/VFIO/>)

**Upcoming ToDo's:**
- Server/Networking:
        - migrate from ESXi 6.5 to 6.7 **

**Long-term ToDo's:**
- Server/Networking:
        - clone HDDs to SAS SSDs
                - Acronis True Image
        - VDI host when?
                - pushed to 2023, due to performance requirements
        - DL580 Gen8/9 planning...
```


----------



## TopHatProductions115 (Mar 7, 2022)

I've finally managed to set up wireless Time Machine backups for the MacBook. Next will be the EliteBook, if I can figure out how to do so. In addition to the other tasks I have in front of me...

https://512pixels.net/2018/08/how-to-set-up-time-machine-server/
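The linked guide uses macOS's built-in sharing; if serving from a Linux box instead, Samba can advertise a Time Machine target via vfs_fruit. A sketch (share name, path, user, and size cap are placeholders):

```
[global]
    min protocol = SMB2
    vfs objects = catia fruit streams_xattr
    fruit:model = MacSamba

[timemachine]
    path = /srv/timemachine
    valid users = backupuser
    read only = no
    fruit:time machine = yes
    fruit:time machine max size = 1T
```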


----------



## TopHatProductions115 (Mar 9, 2022)

```
Current ToDo's:
 - Windows Server 2016:
        - UNIX/POSIX attributes in AD
                - https://github.com/wruppelx/win2016setuid
 - Windows 10 Enterprise:
        - Gaming VM troubleshooting (https://www.reddit.com/r/VFIO/)

Upcoming ToDo's:
 - Server/Networking:
        - purchase/activate EaseUS ToDo Backup Center
        - purchase/activate OnlyOffice server license
        - migrate from ESXi 6.5 to 6.7 **

Long-term ToDo's:
 - Server/Networking:
        - clone HDDs to SAS SSDs
                - Acronis True Image
        - get a GitHub point-of-contact
        - VDI host when?
                - pushed to 2023, due to performance requirements
        - DL580 Gen8/9 planning...
 - Artix OpenRC:
        - Docker container: YaCy Grid
                - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
                - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
                - initiate web crawl
```


----------



## Solaris17 (Mar 9, 2022)

TopHatProductions115 said:


> - purchase/activate EaseUS ToDo Backup Center



Love these products in general, have had a lot of success with their recovery soft too.


----------



## TopHatProductions115 (Mar 11, 2022)

Purchasing EaseUS ToDo Backup Center next week, so I can actually back up all the Windows machines on my domain. That's a priority for this project. The other licenses can wait. I have dodgy Windows Updates to prepare for...


----------



## TopHatProductions115 (Mar 12, 2022)

Best part about snow days and days below 40°F? I get free AC for my server. Not that most people would suggest it, but I'm not allowed to run AC in that room when the server's on, sadly. So, I take what I can get. Better than overheating the poor thing and frying something...


----------



## TopHatProductions115 (Mar 20, 2022)

New List:

https://linustechtips.com/status/317796/


----------



## TopHatProductions115 (Mar 22, 2022)

In my quest to deck out Windows Server with every customisation I can throw at it, I’m going GPO crazy!

Windows Defender GPO is next...


----------



## TopHatProductions115 (Mar 23, 2022)

Aaand ran into an error. Can’t win:

> Unable to List Datastores with ESXCLI (communities.vmware.com)
> I powered on an ESXi host this morning, and one of the VMs didn't boot successfully. After checking the web UI, I found that one of my datastores had gone missing. I checked the front of the server, and saw that one of my disks were red, instead of green. It may have come a bit loose. I took the...


----------



## ThaiTaffy (Mar 23, 2022)

TopHatProductions115 said:


> Best part about snow days and days below 40 F? I get free AC for my server  Not that most people would suggest it, but I'm not allowed to run AC in that room when the server's on, sadly. So, I take what I can get. May be better than overheating the poor thing and frying something...


It's currently 40°C+ here, and my poor server is cooking. I'm in the process of building a server rack in the center of the house, near a large mass of concrete, hoping it works as a temperature regulator.

I've just switched up my server some as well: moved from pfSense to OPNsense for my router (almost the same, but I use the OPNsense plugins far more regularly, and the licences seemed to be going down a dark path on pfSense).
I'm also in the process of getting rid of my Omada SDN controller, with the recent TP-Link news, and the fact that if someone's selling something of mine, I want a cut. So I'm moving over to OpenWrt, but having to do testing for a developer, as my router isn't currently on the supported hardware list (should be a blast, I'm sure).

Anyway, keep up the good work, and good luck with your backups.


----------



## TopHatProductions115 (Mar 23, 2022)

Rebooted the server and took another look at the screen for some clues, got this:

Re-enabled the logical drive and let ESXi go at it.


----------



## dogwitch (Mar 24, 2022)

TopHatProductions115 said:


> Rebooted the server and took another look at the screen for some clues, got this:
> 
> View attachment 241017
> 
> Re-enabled the logical drive and let ESXi go at it.


Let us know what happens... this is giving me flashbacks to the RAID failure I had after a bad update back in Feb this year...


----------



## TopHatProductions115 (Mar 24, 2022)

Good news, it looks like the issue may have been resolved. For full details, go here:

> Re: Unable to List Datastores with ESXCLI (communities.vmware.com)
> Hello. From the information sent we have: The recognized filesystems are all mounted, i.e. you have access to them. There are no snapshots on the ESXi host. I see an IBM XIV external storage of which you have several LUNs assigned to the ESXi host. You have several HP disks connected and...


----------



## dogwitch (Mar 24, 2022)

TopHatProductions115 said:


> Good news, it looks like the issue may have been resolved. For full details, go here:
> 
> 
> 
> ...


Good news, no data lost. Sadly for me, it was too late on the update (the one I referenced). I learned a lesson, and am doing more with backups, like you're doing.


----------



## TopHatProductions115 (Mar 26, 2022)

```
Current ToDo's:
 - Windows 10 Enterprise:
        - VM gaming troubleshooting
            - <https://www.reddit.com/r/VFIO/>

Upcoming ToDo's:
 - Artix OpenRC:
    - Docker container: Nextcloud
        - add redis caching??
 - Server/Networking:
    - purchase OnlyOffice server license
 - macOS Mojave:
    - Get a MacPorts point-of-contact
    - Homebrew => MacPorts
    - upgrade to Big Sur 11.2.3

Long-term ToDo's:
 - Server/Networking:
    - migrate from ESXi 6.5 to 6.7
    - clone HDDs to SAS SSDs
    - VDI host when?
        - pushed to 2024/2025, due to performance requirements
        - DL580 Gen8/9 planning?!
    - Get a GitHub point-of-contact
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - <https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/>
        - <https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml>
        - initiate web crawl
```


----------



## claes (Mar 26, 2022)

Why the move from homebrew to macports?


----------



## TopHatProductions115 (Mar 26, 2022)

claes said:


> Why the move from homebrew to macports?


Ever since late last year, I've had issues with Homebrew that cost me time on the rest of the server project. It hit my MacBook first, so I ended up holding off on certain tasks when I finally got the macOS VM running. The first issue was with package updates. For some reason, Homebrew installed packages without checking macOS version compatibility. The result was that many of my commonly-used apps were suddenly unusable. Since the MacBook is currently stuck on Mojave, and can't move to Big Sur yet, I needed to roll back app/package versions. And since Homebrew doesn't have a native way to do so, I ended up having to do it by hand. I had to dig around GitHub issue postings for a week to figure that out. It was not a pleasant experience. I used to use MacPorts before Homebrew, and only picked up Homebrew because MacPorts didn't have a package I needed years ago.


----------



## TopHatProductions115 (Mar 27, 2022)

Updated after last night’s tasks, more changes pending…

```
Current ToDo's:
 - macOS Mojave:
    - install Xcode 11.3.1 (MacPorts)
 - Artix OpenRC:
    - Docker container: Nextcloud
        - add redis caching??

Upcoming ToDo's:
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
 - Server/Networking:
    - purchase OnlyOffice server license
 - Windows 10 Enterprise:
    - purchase Adobe Acrobat Pro 2022

Long-term ToDo's:
 - Server/Networking:
    - migrate from ESXi 6.5 to 6.7
    - clone HDDs to SAS SSDs
    - VDI host when?
        - pushed to 2024/2025, due to performance requirements
        - DL580 Gen8/9 planning?!
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - <https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/>
        - <https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml>
        - initiate web crawl
```


----------



## TopHatProductions115 (Mar 28, 2022)

I managed to move to MacPorts, and got Xcode installed. However, python39 is unavailable for Mojave from what I'm seeing. That affects 2 packages that I use regularly. The fix for that will have to wait until I update the VM to Big Sur.


----------



## TopHatProductions115 (Apr 1, 2022)

Updated Plans:

```
Current ToDo's:
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
 - Artix OpenRC:
    - Docker container: Nextcloud
        - purchase OnlyOffice server license
        - add redis caching??
 - Windows 10 Enterprise:
    - purchase Adobe Acrobat Pro 2022
    - purchase DaVinci Resolve license

Upcoming ToDo's:
 - Server/Networking:
    - migrate from ESXi 6.5 to 6.7
    - AMD GPU shopping (Linux/macOS)
        - https://www.reddit.com/r/realAMD/comments/tt2hq4/in_search_of_a_gpu/

Long-term ToDo's:
 - Server/Networking:
    - clone HDDs to SAS SSDs
    - VDI host when (DL580 Gen8/9 planning)?
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
```


----------



## TopHatProductions115 (Apr 2, 2022)

The Titan Z is getting long in the tooth and no longer receives driver updates. Want macOS and Linux to share a GPU again? New day, new problems to solve:


https://www.reddit.com/r/realAMD/comments/tt2hq4


----------



## TopHatProductions115 (Apr 4, 2022)

Just moved to OnlyOffice Document Server EE (Enterprise Edition). Redis cache is next. If the Radeon Pro v320 can't be split between multiple VMs, macOS gets dibs. I can't see myself messing with GPU stuff in Linux again. Getting that to work with nVIDIA drivers was a bit of a pain. By the time I ever try again with another multi-die GPU, it'll hopefully be RDNA-based and on a server running something newer than Intel Westmere. The show must go on, and one VM can't absorb all of my time when other tasks await...


----------



## TopHatProductions115 (Apr 5, 2022)

Future planning begins now:

https://linustechtips.com/status/318598/


----------



## TopHatProductions115 (Apr 5, 2022)

Added a Redis container to Docker, and am testing it with Nextcloud. The list is changing, slowly:


```
Current ToDo's:
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
 - Windows 10 Enterprise:
    - purchase Adobe Acrobat Pro 2022
    - purchase DaVinci Resolve license

Upcoming ToDo's:
 - Server/Networking:
    - migrate from ESXi 6.5 to 6.7

Long-term ToDo's:
 - Server/Networking:
    - convert the VMs (MBR => GPT, BIOS => UEFI)
    - clone HDDs to SAS SSDs
    - VDI host when (DL580 Gen8/9 planning)?
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - <https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/>
        - <https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml>
        - initiate web crawl
```
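For anyone following along, the Redis-plus-Nextcloud wiring amounts to one extra service. Here's a minimal docker-compose sketch, assuming the official images - the service names, tags, and port mapping are placeholders, not my actual stack:

```yaml
# docker-compose.yml sketch - names/tags/ports are illustrative assumptions.
version: "3"
services:
  redis:
    image: redis:alpine
    restart: unless-stopped
  nextcloud:
    image: nextcloud:22
    restart: unless-stopped
    depends_on:
      - redis
    environment:
      # The official image uses this to write memcache.locking and
      # memcache.distributed entries into config.php automatically.
      REDIS_HOST: redis
    ports:
      - "8080:80"
```

With that, Nextcloud uses Redis for file locking and distributed caching without hand-editing config.php.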


----------



## TopHatProductions115 (Apr 6, 2022)

AMD Radeon Pro v320/v340 Questions (forum.level1techs.com)


----------



## dogwitch (Apr 9, 2022)

something you never mention(unless i missed it)
is what type of switch are you using for this whole set up?


----------



## TopHatProductions115 (Apr 11, 2022)

dogwitch said:


> something you never mention(unless i missed it)
> is what type of switch are you using for this whole set up?



I mentioned that info on a different site:

https://linustechtips.com/status/285142/


----------



## TopHatProductions115 (Apr 11, 2022)

Okay, things have definitely taken a different turn than expected. As I learned in the 2021/2022 transition, certain cards that require Above 4G MMIO (Above 4G decoding) are off-limits for me as long as I'm using the DL580 G7. That was part of why I couldn't use the Tesla K80's in late 2021. Well, that same situation happened with the Radeon Pro v320, and I have no idea if the same would apply if I had gotten the v340 instead. I got the help of a friend to look for firmware updates, to see if the feature had possibly been introduced in later firmware versions. No such luck. Removing PCIe SSD's also didn't help. Sad part is, it looks as though the DL980 G7 didn't have this limitation. As a result, I will have to push updating the macOS VM to Big Sur further out - until I can get a DL580 Gen8/9 in-house.

I also had difficulty updating software for the secondary DNS (software by Technitium), so I'm going to have to contact them to figure out how to proceed. The handy installer stopped working for me a while back, so I've had to perform all updates by hand since mid-2021 iirc.

Hopefully, it won't be too long before I can get back on track with finishing the rest of the server project, because I miss actually getting tasks done...


```
Current ToDo's:
 - Windows Server 2016:
    - Contact Technitium (unable to update DNS software)
 - Server/Networking:
    - convert the VMs (MBR => GPT, BIOS => UEFI)
    - clone SAS HDDs to SAS SSDs
    - migrate from ESXi 6.5 to 6.7

Upcoming ToDo's:
 - Windows 10 Enterprise:
    - purchase Adobe Acrobat Pro 2022

Long-term ToDo's:
 - Server/Networking:
    - VDI host when (DL580 Gen8/9 planning)?
    - replace Titan Z with Radeon Pro v320/v340
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode

Unconfirmed ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
```

oof...


----------



## dogwitch (Apr 11, 2022)

TopHatProductions115 said:


> I mentioned that info on a different site:
> 
> https://linustechtips.com/status/285142/


thank you and book mark all three of them for future reading etc.


----------



## TopHatProductions115 (Apr 12, 2022)

It's getting close to that time again - where I have to look into cloning drives for the server. And we all know how it went last time...


----------



## TopHatProductions115 (Apr 16, 2022)

Updated ToDo List, once more (since I just gave myself more work - funny how that works):


```
Current ToDo's:
 - Windows Server 2016:
    - FreeSWITCH (vPBX) configuration
        - https://freeswitch.org/confluence/display/FREESWITCH/XML+Switch+Configuration
        - https://freeswitch.org/confluence/display/FREESWITCH/Directory
        - https://freeswitch.org/confluence/display/FREESWITCH/mod_ldap
        - https://freeswitch.org/confluence/display/FREESWITCH/mod_voicemail
        - https://freeswitch.org/confluence/display/FREESWITCH/mod_sms

Upcoming ToDo's:
 - Server/Networking:
    - convert the VMs (MBR => GPT, BIOS => UEFI)
    - clone SAS HDDs to SAS SSDs (Storage vMotion?)
    - migrate from ESXi 6.5 to 6.7

Long-term ToDo's:
 - Server/Networking:
    - VDI host when (DL580 Gen8/9 planning)?
    - replace Titan Z with Radeon Pro v320/v340
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe)?

Unconfirmed ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
```

I might never be finished. On the other hand, the original plan did include a VoIP/PBX. I guess this is where it happens...


----------



## TopHatProductions115 (Apr 17, 2022)

Just dropped FreeSWITCH in favor of izPBX, and actually got izPBX working this time. It was pretty tedious to set up. Now to see if it can survive a container/VM reboot - last time, that didn't go so well...


----------



## TopHatProductions115 (Apr 21, 2022)

https://www.reddit.com/r/freepbx/comments/u8rdaw


----------



## TopHatProductions115 (Apr 23, 2022)

FreePBX has been jettisoned from the project, permanently:

https://community.freepbx.org/t/unable-to-make-receive-calls/82823

Getting ready to close out the SIP trunks in a few hours...


----------



## TopHatProductions115 (Apr 23, 2022)

New ToDo List, adjusted for final removal of telecommunications from the project:

```
Current ToDo's:
 - Server/Networking:
    - convert the VMs (MBR => GPT, BIOS => UEFI)
    - clone SAS HDDs to SAS SSDs (Storage vMotion?)
    - migrate from ESXi 6.5 to 6.7

Upcoming ToDo's:
 - Server/Networking:
    - VDI host when (DL580 Gen8/9 planning)?
    - replace Titan Z with Radeon Pro v320/v340 *
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe)?

Long-term ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
```


----------



## TopHatProductions115 (Apr 29, 2022)

Currently looking to see if I can get Cisco CUCM. It might not be likely, but this is my last attempt at managing a phone system on-prem. Otherwise, I won't be looking into it again until the next version of this project...


----------



## TopHatProductions115 (May 2, 2022)

Reorganised ToDo's to reflect current priorities and project direction:


```
Current ToDo's:
 - Server/Networking:
    - convert Windows VMs (MBR => GPT, BIOS => UEFI)
    - migrate from vSphere 6.5 to 6.7 (ESXi)
    - VDI host when (DL580 Gen8/9 planning) ?

Upcoming ToDo's:
 - Server/Networking:
    - purchase HPE ProLiant DL580 Gen8
    - replace Titan Z with Radeon Pro v340
    - Move VMs to new ESXi host (Storage vMotion)
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe) ?

Long-term ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
 - Install Arch (OpenRC+ZFS on UEFI) from scratch
    - move all Docker containers to new Arch host
```

Yep - planning on possibly moving from Artix to pure Arch, to see if I can bake in the ZFS support I've been wanting this whole time. I haven't found anything (yet) that allows for easy conversion from other filesystems to ZFS, so it might be easier to go with ZFS from the beginning in any case. Perhaps my Timeshift backups will be enhanced by this as well...


----------



## TopHatProductions115 (May 6, 2022)

Last ToDo List of the Week, too tired...

```
Current ToDo's:
 - Cisco CUCM
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl

Upcoming ToDo's:
 - Server/Networking:
    - convert Windows VMs (MBR => GPT, BIOS => UEFI)
    - migrate from vSphere 6.5 to 6.7 (ESXi)

Long-term ToDo's:
 - Server/Networking:
    - VDI host when (DL580 Gen8/9 planning)
    - purchase HPE ProLiant DL580 Gen8
    - replace Titan Z with Radeon Pro v340
    - Move VMs to new ESXi host (Storage vMotion)
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe) ?
 - Artix OpenRC Reborn
    - Reinstall on UEFI, w/ OpenZFS from the start
    - New partition scheme (see below)
```

Partition scheme:

```
400GB SAS SSD
    - _ESP   512MB, FAT32, /efi         (esp)
    - root   320GB,  EXT4, /            (root,system)
    - home    64GB,  EXT4, /home        (home)
    - swap    16GB,  swap, [!mnt_point] (swap)
8TB SAS HDD
    - services 2TB,   ZFS, /srv         (srv)
    - variable 5TB,   ZFS, /var         (var)
8TB SAS HDD
    - backup   8TB,   ZFS, [!mnt_point] [!flag]
```
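One note on the SSD layout above - the planned partitions add up to slightly more than the drive's nominal 400GB, so root will likely need a gigabyte or so shaved off in practice. A quick arithmetic check (sizes taken straight from the table):

```shell
#!/bin/sh
# Sanity-check the planned 400GB SSD layout:
#   ESP 0.5 GB + root 320 GB + home 64 GB + swap 16 GB = 400.5 GB,
# which slightly oversubscribes a nominal 400GB drive.
total=$(awk 'BEGIN { print 0.5 + 320 + 64 + 16 }')
echo "planned SSD total: ${total} GB"
```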


ZFS on root partition is off-limits until this is resolved:

https://forum.level1techs.com/t/how...er-filesystem/184344/8?u=tophatproductions115

On a side note:

https://forums.servethehome.com/index.php?threads/linux-vs-unix-which-do-you-prefer-and-why.36333/
https://community.hpe.com/t5/ProLia...bove-4G-Decode-Memory-Mapped-I-O/td-p/7165573

Still wondering if I should test out a BSD VM one day, once I move to a newer host...


----------



## TopHatProductions115 (May 16, 2022)

Last ToDo List of the Week, too tired _*EDITED*_...

```
Current ToDo's:
 - Cisco CUCM demo/pricing
 - Windows Server 2016:
    - convert (MBR => GPT, BIOS => UEFI)
    - Upgrade from 2016 to 2019 (friggin' update times)
 - Artix OpenRC:
    - Docker container: Tor node(s)/relay(s)
    - Docker container: Discord bridge (matterbridge)
        - https://nextcloud.com/blog/bridging-chat-services-in-talk/
 - Windows 10 Enterprise:
    - convert (MBR => GPT, BIOS => UEFI)
 - Server/Networking:
    - migrate from vSphere 6.5 to 6.7 (ESXi)

Upcoming ToDo's:
 - Server/Networking:
    - purchase HPE ProLiant DL580 Gen8
    - VDI host when (DL580 Gen8/9 planning)
    - replace Titan Z with Radeon Pro v340
    - Move VMs to new ESXi host (Storage vMotion)
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe) ?

Long-term ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
 - Install Arch (OpenRC+ZFS on UEFI) from scratch
    - move all Docker containers to new Arch host
    - or convert existing VM ?!
    - New partition scheme (see below)
```

Partition/Volume arrangement:

```
400GB SAS SSD
    - _ESP   512MB, FAT32, /efi         (esp)
    - root   320GB,  EXT4, /            (root,system)
    - home    64GB,  EXT4, /home        (home)
    - swap    16GB,  swap, [!mnt_point] (swap)
8TB SAS HDD
    - services 2TB,   ZFS, /srv         (srv)
    - variable 5TB,   ZFS, /var         (var)
8TB SAS HDD
    - backup   8TB,   ZFS, [!mnt_point] [!flag]
```


ZFS on root partition is off-limits until this is resolved:

https://forum.level1techs.com/t/how...er-filesystem/184344/8?u=tophatproductions115

On a side note:

https://forums.servethehome.com/index.php?threads/linux-vs-unix-which-do-you-prefer-and-why.36333/
https://community.hpe.com/t5/ProLia...bove-4G-Decode-Memory-Mapped-I-O/td-p/7165573

Still wondering if I should test out a BSD VM one day, once I move to a newer host. Finances are currently very tight, and I still need to purchase a few more EaseUS licenses. The CUCM idea may have to be put off for a while as I figure out the rest of the objectives here. Installing Windows 10 Enterprise on the Threadripper has proven more difficult than originally expected. I'm also considering switching cell service providers in the next 30 days. Life comes at ya fast...


----------



## TopHatProductions115 (May 30, 2022)

2022 is proving to be a tougher year than the previous one, when it comes to getting major tasks done. I may end up focusing more on the Docker host for a while, since that’s where I’ll be able to make the most progress without breaking the bank. Gotta take a slower pace, to determine a few alternative routes for some of these tasks…


```
Current ToDo's:
 - Cisco CUCM demo/pricing (on hold, due to finances)
 - Artix OpenRC:
    - Docker container: Discord bridge (matterbridge)
        - https://nextcloud.com/blog/bridging-chat-services-in-talk/
    - Docker container: YaCy Grid
        - https://blog.fossasia.org/creating-a-dockerfile-for-yacy-grid-mcp/
        - https://github.com/yacy/yacy_grid_mcp/blob/master/docker/all-in-one/docker-compose.yml
        - initiate web crawl
    - Docker container: Tor node(s)/relay(s)
 - Windows 10 Enterprise:
    - convert (MBR => GPT, BIOS => UEFI)

Upcoming ToDo's:
 - Windows Server 2016:
    - convert (MBR => GPT, BIOS => UEFI) w/ AOMEI license
    - Upgrade from 2016 to 2019 (friggin' update times)
 - Server/Networking:
    - migrate from vSphere 6.5 to 6.7 (ESXi)

Long-term ToDo's:
 - Server/Networking:
    - purchase HPE ProLiant DL580 Gen8
    - VDI host when (DL580 Gen8/9 planning)
    - replace Titan Z with Radeon Pro v320/v340
    - Move VMs to new ESXi host (Storage vMotion)
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
    - sNTP client configuration (maybe) ?
 - Install Arch (OpenRC+ZFS on UEFI) from scratch
    - move all Docker containers to new Arch host
    - or convert existing VM ?!
```

Finances are still tight, and converting Windows Server from MBR to GPT will cost money, since Microsoft left that conversion tool out of Windows Server - an AOMEI software license will need to be purchased as a result. I can't purchase the previously planned EaseUS Backup licenses, due to other expenses that came up recently. I'm at least two paychecks (a full month) behind on these acquisitions, and I'm still trying to navigate talks with Cisco pertaining to CUCM. I'm gonna try to install Windows 10 Enterprise on the Threadripper one last time, sometime in June. Here's to hoping things lighten up later, maybe in the autumn...


----------



## TopHatProductions115 (Jun 12, 2022)

Boy, things have gotten pretty weird lately. About halfway through May, I learned that I'll need more licenses for EaseUS (to manage Windows backups). I also had to delay a demo meeting with Cisco, pertaining to CUCM/WebEx. I wanted to add Tor nodes and Matterbridge to the project (as Docker containers), only for the potential risks of the former, and the Discord-to-Nextcloud-chat complications of the latter, to dissuade me, given the limited value I'd get in return. Days later (05/15), Windows 10 Enterprise refused to install on the Threadripper. Slightly after that (05/19), I found out that Microsoft left their free MBR-to-GPT conversion tool out of Windows Server 2016 - necessitating the purchase of another software license (AOMEI).

On 05/21 (midnight), I moved to Nextcloud 22.2.8, in preparation for the move to version 23. That same day, the air quality was so bad that I couldn't run the server later on. It stayed off until the 26th, iirc. During that time, I re-realised that I'll need a rackmount AC at some point in the future (I had already looked into it before). Between May 27 and 29, I was updating Portainer and purchasing my current domain from Freenom outright. I also considered hosting an Invidious instance.

During the May/June transition, I had to move my mobile devices from StraightTalk to T-Mobile, because the former decided to drop support for the Asus ZenFone 6. They also decided that their VoLTE implementation didn't need to support said device. And they had the nerve to waste my time (days of it) with poor customer service as I tried, and failed, to transfer my phone number to T-Mobile (to go easy on people's contact lists). Needless to say, Google Voice is the only reason that didn't become a complete mess. I stopped giving out my real phone number a while ago, because telephone companies don't have to play nice. At least T-Mobile has better customer service, from what I've seen.

On June 2nd, I was supposed to look into group policy to reduce Windows 10 telemetry - but got sidetracked by a potential File Explorer bug! On June 4th, I did more research on the requirements for Matterbridge and YaCy Grid. Matterbridge was going to require a ton of work for something I'd probably get little-to-no use out of. YaCy Grid is not ready for prime time, and the current config examples only show off Elasticsearch. Not to say they haven't been working on it, but I'd have to build most of it by hand, which would devour time needed for other tasks already planned. The Tor node got scrapped as well, due to the potential risks of hosting such an instance. The only thing that survived all of that was the desire to get vanilla YaCy running.

On June 7th, I finally managed to give away the Threadripper, freeing up rackspace and the Titan V. On the 10th, I was looking into Docker for Windows containers (no, not that one - the other one, with less cringe), for hosting multiple isolated instances of MariaDB. Today, the update to Chromium v102 rained down on my setup with SSL pains: frantically scrambling to download/install certificates, while being forced to switch some instances back to HTTP, because self-signed certificates are everywhere and sometimes tough to replace. And establishing a proper, self-hosted CA that won't drain your funds is tough.

Between all of these, I was battling tight finances (a college loan and a certificate loan), and stupidly poor air quality - temperatures, humidity, and allergens kept me from turning on the server (software and driver updates have to happen sometime). This weekend, I was originally supposed to be 1) configuring group policy to restrict Microsoft telemetry, and 2) setting up a YaCy instance. It's 4am. I still have a ToDo list to update!

/endrant


----------



## TopHatProductions115 (Jun 13, 2022)

The ToDo List for the next 24 months:


```
Current ToDo's:
 - Artix OpenRC:
    - Docker container: YaCy (non-Grid)
        - https://hub.docker.com/r/yacy/yacy_search_server
 - Windows Server 2016:
    - purchase AOMEI Partition Assistant Server license
        - https://www.diskpart.com/partition-manager-server-edition.html
    - purchase (2) more EaseUS Backup licenses for Windows client PCs

Upcoming ToDo's:
 - Windows 10 Enterprise:
    - convert (MBR => GPT, BIOS => UEFI)
 - Windows Server 2016:
    - convert (MBR => GPT, BIOS => UEFI) with AOMEI
    - Upgrade from 2016 to 2019 (friggin' update times)
 - Server/Networking:
    - migrate from vSphere 6.5 to 6.7 (ESXi)

Long-term ToDo's:
 - Server/Networking:
    - purchase HPE ProLiant DL580 Gen9
    - purchase AMD Radeon Pro v340
    - replace Titan Z with Radeon Pro v320/v340
    - Move VMs to new vSphere host (Storage vMotion)
    - VDI host when (DL580 Gen9 planning)
 - macOS Mojave:
    - upgrade to Big Sur 11.2.3
    - update MacPorts and Xcode
 - Artix OpenRC:
    - Docker container: OpenStreetMaps
    - Docker container: OSRM Backend
    - Docker container: izPBX (FreePBX), or
 - FreePBX Distro VM!!
    - https://www.freepbx.org/downloads/
 - Install Arch (OpenRC+ZFS on UEFI) from scratch
    - move all Docker containers to new Arch host
    - or convert existing VM ?!
```

Just got new info about macOS Ventura, thanks to an amazing Reddit user who had the stones to install it in a VM themselves:



https://www.reddit.com/r/macOSVMs/comments/vag02o

https://www.nicksherlock.com/2022/06/installing-macos-13-ventura-developer-beta-on-proxmox-7-2/

AVX2 is the new hurdle, so Haswell is the minimum. The ProLiant Gen8 is out; I'm going for the Gen9 instead - which means I move to DDR4 earlier than anticipated. The platform cost is higher, since I get to reuse less of my current hardware. On the plus side, the move to Windows Server 2019 should make managing updates for that VM much easier.

I'm also considering removing Technitium, since I've mostly been relying on Active Directory DNS for the longest time. It was great when I first started out, since I didn't have any other DNS source on my local network. For anyone out there not using AD or similar, though, Technitium is beautiful!

Finances are still tight, and I've had to change some objectives. CUCM is probably out of reach at this point, and I still wanna grab a small block of IPv4 addresses - that will have to wait until after I get the new Gen9 server in-house. Also, I need to buy more movies to throw onto Plex. Movie Night won't make itself, ya know!
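Since AVX2 is now the gating CPU feature, here's a quick sanity check to run on any candidate host before buying. This is a Linux sketch (on a macOS host it would be `sysctl -a | grep -i avx2` instead):

```shell
#!/bin/sh
# Check whether this host's CPU advertises AVX2 (Haswell or newer),
# which a macOS Ventura guest requires. Linux exposes CPU feature
# flags in /proc/cpuinfo.
if grep -q -m1 'avx2' /proc/cpuinfo 2>/dev/null; then
    avx2=yes
    echo "AVX2: supported"
else
    avx2=no
    echo "AVX2: not supported (pre-Haswell CPU, e.g. the Westmere E7s in a DL580 G7)"
fi
```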


P.S. Chrome v102 blew out all of my self-signed SSL certificates. I'll be remaking those for the next few weeks, since I decided to be my own CA...
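Rebuilding the certs as a proper local CA can be sketched with plain openssl - every name, filename, and lifetime below is an illustrative placeholder, not my actual setup. Note that Chrome also insists on a subjectAltName, which is what the extfile step provides:

```shell
#!/bin/sh
# Minimal local-CA sketch: one self-signed root, one server cert signed by it.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# 1) Create the CA key and a self-signed root certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -sha256 -days 825 \
    -subj "/CN=Homelab Root CA" -out ca.crt

# 2) Create a server key and CSR for an internal host (placeholder name)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=nextcloud.lan" -out server.csr

# 3) Sign the CSR with the CA, adding the SAN Chrome requires
printf 'subjectAltName=DNS:nextcloud.lan\n' > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -sha256 -days 398 -extfile san.ext -out server.crt

# 4) Verify the chain, as a browser would after importing ca.crt
openssl verify -CAfile ca.crt server.crt
```

Importing ca.crt into the OS/browser trust store once then covers every cert the CA signs, instead of re-trusting each self-signed cert individually.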


----------



## TopHatProductions115 (Jun 15, 2022)

Time to troubleshoot OnlyOffice+Nextcloud. The breakage happened after moving everything to HTTPS.


----------



## TopHatProductions115 (Jun 16, 2022)

Here we go...






Okay, so draft of plans for the next phase of Project Personal Datacentre (can't account for AVX-512 yet): HPE ProLiant DL580 Gen9, VMware ESXi 6.7u3 Enterprise Plus, 4x Intel Xeon E7-8890v4 (24c/48t each; 96c/192t total), 384GB RAM... (linustechtips.com)


----------



## phill (Jun 16, 2022)

Are you getting closer to finishing the project, or are you hitting every known bump in the road getting there?!

Keep those updates coming!!


----------



## TopHatProductions115 (Jun 30, 2022)

So far, things have been moving at a sporadic pace, depending heavily on my own capabilities. With that said, I guess that 5 years isn't too long for a tech project XD

Spent 2018 experimenting (and breaking things) on a smaller host. 2019 was a major planning year, and funds were almost non-existent. It was also when I finally decided on the form factor - 4U rackmount. 2020 was the first year I could actually start buying most of the hardware. 2021 was when the pace picked up, and I started spinning up more VMs iirc. Now, I'm in 2022, planning the next host.

Time really does fly...


----------



## TopHatProductions115 (Jul 5, 2022)

I wanna replace the MikroTik RB4011iGS+RM and Audience RBD25GR-5HPac with an RB4011iGS+5HacQ2HnD-IN. Simpler setup, and I get to move a WAP somewhere else.


----------



## TopHatProductions115 (Jul 9, 2022)

Recent links:

https://forums.servethehome.com/index.php?threads/looking-at-sas-enclosure-options.36930/
https://linustechtips.com/status/322808/


----------



## phill (Jul 10, 2022)

TopHatProductions115 said:


> So far, things have been moving at a sporadic pace, depending heavily on my own capabilities. With that said, I guess that 5 years isn't too long for a tech project XD
> 
> Time really does fly...


If I can get left alone for 5 minutes, that's generally a good day, but nothing is ever simple or easy, that's for sure!! I mean, I've been doing some networking recently - damn, some things I just don't get!! I need to update my project thread as well....


----------



## TopHatProductions115 (Jul 20, 2022)

Just purchased 7x HGST HUH728080AL4200's - 4 more to go. They're ~80-100 USD a pop. Also waiting to see how and when I'll get the other Dell EMC KTN-STL3, which is 150-300 USD. My next target(s) will be the Xeon E7-8890v4's, which are going for ~250 USD/unit. Once I pull that off, it'll be time to shop for a ProLiant DL580 Gen9 that comes with a Smart Array P830i Controller. Drive trays/caddies will be a challenge, since they tend to be stupidly expensive on the HPE side of things. RAM can always be purchased later, once prices for DDR4 ECC drop (as DDR5 takes over). On a side note, I've also managed to get OnlyOffice EE running with an external MariaDB instance. Working on doing the same for NGINX Proxy Manager, so I can move all database instances to a separate Docker host when the time comes! If anyone thinks I should leave all Docker containers on the same host, let me know. One less VM I suppose...
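For the move-the-databases-out idea, a standalone MariaDB service can be sketched like this - the image tag, password, and port mapping are placeholder assumptions, not my real config:

```yaml
# docker-compose.yml sketch for a shared, external MariaDB instance.
# All identifiers are illustrative placeholders.
version: "3"
services:
  mariadb:
    image: mariadb:10.6
    restart: unless-stopped
    environment:
      MARIADB_ROOT_PASSWORD: change-me
    volumes:
      - db_data:/var/lib/mysql   # persist data across container rebuilds
    ports:
      - "3306:3306"              # reachable from other containers/VMs
volumes:
  db_data:
```

Each app (OnlyOffice, NGINX Proxy Manager, etc.) then gets its own database and user on that one instance, which keeps the per-app containers stateless and makes a future move to a dedicated DB host mostly a connection-string change.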


----------



## TopHatProductions115 (Jul 20, 2022)

A new topic appears:

https://community.searchlab.eu/t/pertaining-to-how-yacy-crawls-websites/1090


----------



## TopHatProductions115 (Jul 23, 2022)

I've purchased the rest of the HGST HUH728080AL4200's. Did some more research, and it looks like all standard models of the DL580 Gen9 came with the Smart Array P830i Controller. If such is the case, then I only need to concern myself with getting more cache modules - which should be cheap. I already have a bunch of potentially compatible drive caddies from my run-in/mishap with a certain ProLiant tower server that I won't mention here. I've already made an offer for 8 memory cartridges (HP 802277-001). Only need to focus on CPUs from here on. Then, to grab the server itself...


----------



## TopHatProductions115 (Jul 26, 2022)

I've grabbed 4 memory cartridges. The Xeons are the only things left to grab before I go for the server itself...


----------



## TopHatProductions115 (Jul 29, 2022)

I managed to get an SPI board for cheap. I guess I'll keep it as a spare, in case anything happens to the one that comes with the DL580 Gen9. The Xeons are next. I'm okay with delaying the purchase of the server itself as long as I have all of the other parts I need - I can go for the cheapest one available once DDR5 becomes more common, which should make it easier to get one with more ECC RAM at a lower price. HDD photos incoming, maybe, if I can work up the energy to take them.


----------



## TopHatProductions115 (Aug 1, 2022)

Got a NetApp DS4243; the Xeons are next. I'll hold off on getting the server until 2023 if necessary, to let prices drop.


----------



## TopHatProductions115 (Aug 3, 2022)

Grabbed 4 of the 656364-B21's, and am now left with only the Xeons to grab (again). I'm also doing a bit of asking around before I bring the server in-house:



https://www.reddit.com/r/HomeDataCenter/comments/wegsxj

https://www.reddit.com/r/homelab/comments/weh2xi

As mentioned before, I can wait until next year to get the server itself. As long as I have all of the other parts, I can wait for prices to drop on the Gen9 chassis.


----------



## claes (Aug 3, 2022)

TopHatProductions115 said:


> Just purchased 7x HGST HUH728080AL4200's - 4 more to go. They're ~80-100 USD a pop.


Gotta ask, where’d you grab these for such a great price?!


----------



## TopHatProductions115 (Aug 3, 2022)

claes said:


> Gotta ask, where’d you grab these for such a great price?!


eBay, with a ton of Best Offers and patience. Many sellers won't even respond to an offer unless it's less than 15% off the asking price, so you'll wanna split your required quantity amongst multiple sellers. If you need 16 items in total and have 3 possible sources, start with offers of around 5 units per seller. Eliminate the sellers who ask for the highest price or won't budge, and increase the unit count with each offer to the remaining sellers.


----------



## phill (Aug 4, 2022)

I try my luck with any sale I'm after, simply because if you don't ask, you never know.

The HGSTs I bought recently were about £65 each for 8TB, and I honestly couldn't find them cheaper. I managed to get some cash off them as well, which was great, and I'm very happy with the drives. I need to get 8 installed in my home server, as that seems to have taken a bit of a dive... More fun to deal with!
I know my mate in the US sells HDDs, and he's a great bloke. He might be able to help if you need any details; he's a member here, so I can tag him if need be.


----------



## TopHatProductions115 (Aug 5, 2022)

I grabbed the Xeons and the RAM today. Now just waiting to see if I can get a decent offer on the server/chassis itself. That should help to keep the price on it low.


----------



## TopHatProductions115 (Aug 7, 2022)

Just applied this on my Windows Server VM:

https://helpdeskgeek.com/windows-10/how-to-disable-windows-10-telemetry/
Now when I switch to my domain account in 2022/2023, I'll have one less thing to worry about...
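For reference, the Group Policy route in guides like that one generally boils down to a single policy value. A hedged `.reg` sketch (verify the exact path against the linked guide before importing; on non-Enterprise/Server SKUs the lowest setting is not fully honored):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\DataCollection]
"AllowTelemetry"=dword:00000000
```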


----------



## TopHatProductions115 (Aug 7, 2022)

Decisions, decisions...






I have a few small decisions to make, pertaining to the next version of the server project: Should I implement VDI (VMware Horizon)? Should I attempt to implement Android app streaming (Android-x86)? Should I merge the two Linux VMs, or leave them as they are? The first 2 both require creation ...

(link preview: linustechtips.com)


----------



## TopHatProductions115 (Aug 12, 2022)

I've got everything but the server and its rail kit at this point. Bumped RAM up to 384GB. I'll wait until 2023 if necessary, to get a good price on the remaining pieces. More pre-planning underway - stay tuned...


----------



## TopHatProductions115 (Aug 21, 2022)

Currently looking into switching from Remotix (bought out by Acronis) to RustDesk. RustDesk isn't as fast or performant as the former, but it is free and open source, and I won't have to worry about changes in licensing. The only issue is that it's not working on my current tablet - the app glitches out on it, so I have to go back to managing everything on just the PC. Also considering using MikroTik's Dude instead of LibreNMS.

Getting the DL580 Gen9 looks like it will most likely have to wait until late 2022/early 2023 at the earliest. Most sellers want at least 700-800 USD for a barebones config, which is kinda tough to pull off at the moment. Also wishing I could have gotten the Radeon Pro v340, but those are going for 1.5k USD currently.

My current objective is to prep the containers and VMs for when it's finally time to pull the trigger on getting the Gen9 in-house. Moving Nextcloud's DB to Docker is going to be nerve-wracking for me, since if anything goes wrong, I may have to rebuild that instance from scratch. Not a fun idea. OpenStreetMap + OSRM (Docker stack) and FreePBX (VM) tasks are next...
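For context on the OSRM prep: the `osrm/osrm-backend` images are typically driven by a short preprocess-then-serve sequence like the one in the project's README. The extract filename and paths below are examples:

```shell
# Preprocess an OSM extract, then serve it (MLD pipeline).
docker run -t -v "${PWD}:/data" osrm/osrm-backend \
    osrm-extract -p /opt/car.lua /data/region-latest.osm.pbf
docker run -t -v "${PWD}:/data" osrm/osrm-backend \
    osrm-partition /data/region-latest.osrm
docker run -t -v "${PWD}:/data" osrm/osrm-backend \
    osrm-customize /data/region-latest.osrm
docker run -t -i -p 5000:5000 -v "${PWD}:/data" osrm/osrm-backend \
    osrm-routed --algorithm mld /data/region-latest.osrm
```

The `osrm-extract` step is the RAM-hungry one; planet-scale extracts need far more memory than regional ones.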


----------



## TopHatProductions115 (Aug 28, 2022)

https://www.reddit.com/r/docker/comments/wzn65q


----------



## TopHatProductions115 (Sep 1, 2022)

The memory requirements for the Linux VM appear to have more than doubled. I may have to give it upwards of 80GB of RAM to get OSRM past the extraction stage...


----------



## TopHatProductions115 (Sep 1, 2022)

Pertaining to the next phase...

The Linux VM, without YaCy, will need at least 80-96GB of RAM. I think the dedicated DB host will have to go, seeing that OSRM alone will account for about 85% of that RAM. YaCy will need some for web crawls and indexing as well. Those two are very heavy. I'm also very close to just yolo'ing the Bliss OS VM and going for it either way. At least I may end up with some resources to spare as a result.


----------



## TopHatProductions115 (Sep 5, 2022)

Just finished troubleshooting an unexpected Artix issue. Still need to config FreePBX…


----------



## TopHatProductions115 (Sep 11, 2022)

Onto the next task(s):

https://help.nextcloud.com/t/configuring-custom-tile-server-in-maps/145394


https://www.reddit.com/r/HomeDataCenter/comments/xb7g3n


----------



## TopHatProductions115 (Sep 11, 2022)

August has been a long month, and it's been a long time coming.

Back in early-August, I ended up delaying the purchase of the DL580 Gen9. This threw a wrench into my previous plans since I had the Gen9 planned with over 300GB of RAM. I was depending on the larger pool of RAM being available for things like OSRM, YaCy, Android VM, etc. The first thing I did was update Windows Server 2016 to 2019. I finally got dark theme, but my previous license had to be replaced. Still a little peeved about that unexpected cost. Kicked out Technitium DNS since it was redundant to AD DNS at this point, and added OCCWeb to Nextcloud for easier updates in the future (running commands after initial update).

About halfway through August, I decided to put OSRM on the table. After a bunch of reading, interpreting, getting help, and guessing, I finished figuring that thing out late last week. Didn't even find this page until I was near the end. I'd say kill the page being used on Docker Hub for OSRM and replace it with that instead. It would have saved me a ton of time last month! By the way, I ran into this little issue in the middle of troubleshooting OSRM. No biggie - I finally finished getting OSRM to work at the end of August.

I was supposed to get started with FreePBX back in August, so that I could work primarily on YaCy this month and into October. That's not going to happen. At least I managed to shove RustDesk in there somehow. I also had to re-build xBrowserSync, since I accidentally broke that container while trying to change its IP address. Also had time to set up GPO for local admin account(s). So falling behind wasn't a complete bust.

Still need to look into using custom tile servers with Nextcloud Maps (unknown) and PhoneTrack (yes), finish setting up FreePBX, switch from Google Voice to VoIP.ms (port my number over), start working on YaCy Grid, convert the Windows Server and Windows 10 VMs to GPT/UEFI, update Nextcloud, and buy a bigger SSD for my current laptop. Wait - almost forgot that I need more EaseUS backup licenses for my setup. This all needs to happen before mid-2023, preferably.

I'm also considering doing a clean install of Artix OpenRC, due to a small issue I've been having since 2021. On the bright side, I kinda want a new desk and rack!

Something tells me that I'm more than a month behind XD


----------



## TopHatProductions115 (Sep 11, 2022)

btdubs, vSphere 6 is now considered old - welcome to vSphere 8! But the Gen9 is only scheduled to run vSphere 6.7.

What is sleep?!


----------



## TopHatProductions115 (Sep 14, 2022)

Finally managed to get incoming calls working (albeit with meh audio quality) on FreePBX. Tested using MicroSIP for softphone. Still need to get outbound calls working. Used this tutorial to get everything configured properly. Once FreePBX is working as intended, it'll be time for YaCy Grid...


----------



## TopHatProductions115 (Sep 15, 2022)

New threads:



https://www.reddit.com/r/artixlinux/comments/xf74y0

https://www.reddit.com/r/artixlinux/comments/xf7n77

https://www.reddit.com/r/freepbx/comments/xfe690


----------



## TopHatProductions115 (Sep 22, 2022)

Summary of recent changes thus far:

- Finally found an easy DDNS solution.
- Still troubleshooting that issue with FreePBX.
- Converted the Windows Server, Windows 10, and Artix VMs to UEFI.
- Troubleshooting a potential permissions issue in Elasticsearch (YaCy Grid).

Still have to convert/reinstall FreePBX to GPT/UEFI this weekend. After that, I can start working on the Bliss OS VM...
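An "easy DDNS solution" is usually just a cron'd API call against the DNS provider. A minimal Python sketch in the style of Cloudflare's DNS record update endpoint (the zone ID, record ID, hostname, and token below are placeholders, and `build_update` is a hypothetical helper; the actual HTTP call is left commented out):

```python
# Hypothetical DDNS-style updater sketch: assemble a Cloudflare-style
# DNS record update request for the host's current public IP.

def build_update(zone_id: str, record_id: str, hostname: str, ip: str):
    """Return the (url, payload) pair for an A-record update."""
    url = (f"https://api.cloudflare.com/client/v4/zones/"
           f"{zone_id}/dns_records/{record_id}")
    payload = {"type": "A", "name": hostname, "content": ip, "ttl": 120}
    return url, payload

url, payload = build_update("ZONE_ID", "RECORD_ID",
                            "home.example.com", "203.0.113.7")
print(url)
print(payload)
# Sending it would look something like:
# requests.put(url, json=payload,
#              headers={"Authorization": "Bearer API_TOKEN"})
```

Run the same script from cron (or a systemd timer) whenever the WAN IP changes and the record stays current.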


----------



## TopHatProductions115 (Sep 27, 2022)

I was supposed to troubleshoot the outbound calling issue on FreePBX this weekend, but ended up going out of town to a place where WiFi and cell reception were meh. I enjoyed myself and got to see a movie. When I got back (last night), I stayed up way past midnight to backup>reinstall>restore FreePBX on UEFI. Did not feel too hot at work today, but that's one less task left. Once outbound calling works, I need to port my Google Voice number over and work on configuring SMS. Then I'll be working on the YaCy Grid container. I'm considering putting Sunshine onto all GPU-equipped VMs in the near future. It'd be a nice alternative to RustDesk, until the latter finally introduces GPU acceleration. Still need to plan out the Bliss OS (Android) VM, and that could use a GPU (Rx 6700?). Should I move the G7 to ESXi 6.7, and have the Gen9 running vSphere 7?

I may have forgotten something(s) at this point, but gotta keep moving...


----------



## TopHatProductions115 (Sep 28, 2022)

Finally resolved the outbound calling issue. Now I'm focusing on an issue with background noise during calls. Once I get SMS working, I'll make the decision on whether to port my Google Voice number over to VoIP.ms. After that, I'll be working on the YaCy Grid container. While I would like to have Sunshine on all GPU-equipped VMs, I'm not sure how practical it'd be to implement (esp. seeing that I already have RustDesk). The Bliss OS (Android x86) VM will be coming later this year, and will be using a GPU (Rx 6700 XT). Once all VMs are ready, I'll move from ESXi 6.5u3 to 6.7u3. Still need to purchase EaseUS Backup Server licenses for my remaining devices (that have no current backups). Still haven't figured out how VDI will happen on the Gen9...

On a side note, I now wonder if the Linux version of Sunshine can be built to run on Android...


----------



## TopHatProductions115 (Oct 5, 2022)

Another day, another FreePBX issue to troubleshoot. This time, trying to configure SMS/MMS.


----------



## TopHatProductions115 (Oct 10, 2022)

Should I kick out the Radeon Pro v320, in favour of the Pro Duo (Fiji) instead? Keep in mind, the v320 is supposed to replace the GTX Titan Z (a dual-GPU card). Dual-GPU cards can potentially be used in 2 separate VMs simultaneously, without the need for SR-IOV or GRID. The only issue would be video output(s).
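On the dual-GPU point: each die on such a card shows up as its own PCIe function, so each can be handed to a different VM with plain passthrough. A hedged ESXi `.vmx` sketch (the device address is a placeholder; real addresses come from the host's PCI device list, and the exact key names should be checked against the build's documentation):

```
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:84:00.0"
```

A second VM would reference the other die's address (e.g. the `.1` function or the neighboring device) in its own config.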


----------



## TopHatProductions115 (Oct 14, 2022)

Getting ready to make a major decision/change for the next phase...

https://linustechtips.com/status/326417/


----------



## TopHatProductions115 (Oct 16, 2022)

The Radeon Pro Duo arrived in the mail today. Still have to install and test it. That will happen either tonight or tomorrow...


----------



## TopHatProductions115 (Nov 10, 2022)

I've been on a mission today, ever since the flashed FirePro S9300 x2 arrived in the mail:

https://www.insanelymac.com/forum/topic/354735-radeon-r9-nanofury-x-support-question/
https://www.techpowerup.com/forums/threads/pertaining-to-the-firepro-s9300-x2.276696/#post-4878657
https://www.techpowerup.com/forums/threads/amd-radeon-pro-v320-v340.293530/#post-4878664
The Radeon Pro Duo is ded. Long live the FirePro S9300 x2!

On a side note, I've preemptively replaced the HPE PCIe ioDuo MLC 1.28TB I/O Accelerator (641255-001) and SanDisk Fusion ioScale MLC 3.2TB Accelerator (F11-002-3T20-CS-0001). I may bring them back if the Gen9 has room for them...


----------



## TopHatProductions115 (Nov 10, 2022)

BlissOS didn't go over too well last night. Time for some troubleshooting...


----------



## TopHatProductions115 (Nov 29, 2022)

From what I've done on my end, it looks as though the FirePro S9300 x2 behaves well in a macOS guest (at least Mojave) on vSphere. From what I've watched online, the FirePro S9300 x2 should also behave when its two GPUs are split between multiple guests under Linux KVM (demonstrated with Windows 11 guests; may also apply to Windows 10). Pretty sure this card runs just fine in a Linux guest as well. In all of the tests/scenarios that I've mentioned, the FirePro was flashed to act as either a Radeon R9 Fury or Nano (the consumer variants) - though the Radeon Pro Duo also existed. I'm thinking that BlissOS could just be an outlier in this case, and a rabbit hole too deep for me to go down for this project.

As such, unless a software update for BlissOS fixes this oddity before 2023, I'm kicking it from the project for the next year or two. I'll be focusing on LibreNMS as the last major task for this phase of the server project, until I move to the Gen9 server. When I move to the Gen9 server, I'll possibly want more of the FirePro S9300 x2, oddly enough. While it's an old card, it also fills in a gap - the need for multiple GPUs in a single PCIe slot, for a (relatively) affordable price. Its space efficiency and monetary benefits are tough to ignore when SR-IOV and GRID are currently either too expensive for me to implement or locked behind secret handshakes and the need to be a cloud provider.


----------



## TopHatProductions115 (Dec 8, 2022)

I've installed LibreNMS, but haven't learned how to get device auto-detection working yet. Installed Cronicle and used it to resolve a scheduled task(s) issue with Nextcloud. Now working on enabling Nextcloud's notify_push and learning more about LibreNMS. BlissOS is gone from the project, and I'm closing in on the last major tasks of this phase of the server project. The next phase requires the Gen9, and I can't hop onto that just yet. Also wanting to get a 2nd FirePro S9300 X2 and a Titan RTX...


----------



## TopHatProductions115 (Dec 15, 2022)

In the wake of still not having figured out LibreNMS's device/host auto-detect, I've gone ahead and added many of my commonly-accessed app/service IPv4 addresses by hand. Those include:

- OOB management appliances
- multi-node/cluster management instances
- individual virtual machines
- hypervisor hosts
- the default gateway for the network bridge

I'm also running a simple/quick nmap scan, to look for any obvious hosts that I missed. For the time being, I've avoided adding:

- Docker containers
- the switches that comprise the network bridge

All of my Docker containers are on one VM; if I ever want to analyse traffic for an individual container, I can still add its hostname later. As for the network switches, all traffic going through them either originates from the default gateway or the DL580 itself (either the hypervisor host or one of the individual VMs). If the time ever comes, I can add the switches later as well.
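A quick nmap-style sweep can also be approximated in a few lines of Python, just to enumerate and probe candidate addresses. The subnet and port below are examples, and `sweep` is a hypothetical helper, not a replacement for nmap's richer discovery:

```python
# Crude stand-in for a quick `nmap` sweep: enumerate a small subnet
# and note which hosts accept a TCP connection on a given port.
import ipaddress
import socket

def sweep(cidr: str, port: int = 22, timeout: float = 0.2):
    """Return hosts in `cidr` that accept a TCP connection on `port`."""
    live = []
    for host in ipaddress.ip_network(cidr).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(host), port)) == 0:
                live.append(str(host))
    return live

# .hosts() skips the network and broadcast addresses; a /29 yields
# six usable host addresses to probe:
print([str(h) for h in ipaddress.ip_network("192.168.1.0/29").hosts()])
```

Anything that shows up live but isn't already in LibreNMS is a candidate to add by hand.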

I also took some time to review the DNS records on Cloudflare, and should be a little closer to having proper DMARC/DKIM/SPF. Not perfect by any means, before anyone gets ideas. It's tough to get this crap done right.
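For anyone following along, "proper" here means TXT records roughly shaped like these (the domain, policy, and mailbox are placeholders; the DKIM selector and public key come from the mail host, so no DKIM example is shown):

```
example.com.         TXT  "v=spf1 mx -all"
_dmarc.example.com.  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```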

Getting Nextcloud's notify_push to work is proving to be very tough. I was hoping to have that and Spreed/Talk HPB running by the end of the year, but I've come to the conclusion that it probably won't happen.

Started looking into ARM servers, just to see what's available on the used market. The answer is, nothing affordable - at least in my area. Was wondering if I could maybe play around with ESXi on ARM64, maybe have an AOSP VM or four? Yeah, that's out the window.

Still waiting to move to the Gen9 in the future...


----------



## TopHatProductions115 (Dec 19, 2022)

Converted the CentOS Stream VM to Rocky Linux:

https://docs.rockylinux.org/guides/migrate2rocky/
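Per that guide, the conversion is essentially one script run from the rocky-tools repo. A sketch (check the current docs before running this on anything you care about, and take a backup first):

```shell
curl -O https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh
chmod +x migrate2rocky.sh
sudo ./migrate2rocky.sh -r
```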


----------



## TopHatProductions115 (Dec 26, 2022)

I purchased a 2nd FirePro S9300 X2. Can't wait to see if I can fit it in the DL580 Gen9...


----------



## Toothless (Dec 26, 2022)

TopHatProductions115 said:


> I purchased a 2nd FirePro S9300 X2. Can't wait to see if I can fit it in the DL580 Gen9...


Pics pls. Dual GPUS are so pretty.


----------



## TopHatProductions115 (Dec 26, 2022)

Toothless said:


> Pics pls. Dual GPUS are so pretty.



It hasn't arrived, but it'll hopefully look like the last one I purchased:

https://linustechtips.com/status/327388/

Also waiting to upgrade to this when I move to the Gen9:

https://linustechtips.com/status/328631/


----------



## TopHatProductions115 (Sunday at 7:14 AM)

Just ran into this:

https://forums.macrumors.com/threads/inateck-ku5211-r-usb-3-2-anyone.2275931/
Went on and grabbed the Sonnet card, in case this comes up in macOS Big Sur.


----------

