System Name | Dual Socket HP z820 Workstation |
---|---|
Processor | Twin Intel Xeon E5-2673 v2 OEM processors (that's a total of 16C/32T) |
Motherboard | HP Dual Socket Motherboard |
Cooling | Stock HP liquid cooling |
Memory | 64GB Registered ECC memory kit (octal channel memory on this rig) |
Video Card(s) | MSI RX 5700 XT Gaming X 8GB |
Storage | 2 x 512GB SSD in raid 0 |
Display(s) | Acer 23" 75Hz Gaming monitors 1080P x2 |
Case | Brushed Aluminium |
Audio Device(s) | Integrated (5.1) |
Power Supply | HP 1125W Stock PSU |
Mouse | gaming mouse |
Keyboard | Dell |
Software | Windows 10 Pro |
Hello fellow computer enthusiasts!
The server at the heart of this little project is a Dell PowerEdge C4130, with internal provisions for up to 4 full-sized PCIe GPUs located just behind the front grill. Consequently, it has a very deep 1U design / form factor. I am eyeing the Nvidia Tesla K80 as my GPU of choice, because it's actually two GPUs housed on one board, each with a dedicated 12GB of GDDR5 memory. I have a source for brand-new-in-box (new old stock) K80s that run about $300 a pop, so that is likely the route I will be taking. This project is done purely out of my passion for computers, and I'm taking this time to learn as much as possible about using GPUs for general-purpose computing. I intend to use this server for GPGPU-related tasks, but I have nothing definitive planned in terms of its actual use case. That's partly why I'm here --- I'd like to know what you guys think I could do with something like this, given the hardware configurations I've put forward for both the Dell C4130 and the Dell PowerEdge R620 that I will be running alongside it in a cluster of sorts.
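Since the K80's two GK210 dies each enumerate as a separate CUDA device, the basic programming model is just "loop over devices." Here's a minimal CUDA sketch of my own (purely illustrative, not tested on a K80) that splits a vector add across every device the runtime reports:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 24;                          // ~16.7M floats total
    float *a = (float *)malloc(N * sizeof(float));
    float *b = (float *)malloc(N * sizeof(float));
    float *c = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int devs = 0;
    cudaGetDeviceCount(&devs);                      // a single K80 reports 2 devices
    printf("CUDA devices found: %d\n", devs);
    int chunk = N / devs;

    for (int d = 0; d < devs; ++d) {
        cudaSetDevice(d);                           // switch the active GPU
        int off = d * chunk;
        int n = (d == devs - 1) ? N - off : chunk;  // last device takes the remainder
        float *da, *db, *dc;
        cudaMalloc(&da, n * sizeof(float));
        cudaMalloc(&db, n * sizeof(float));
        cudaMalloc(&dc, n * sizeof(float));
        cudaMemcpy(da, a + off, n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(db, b + off, n * sizeof(float), cudaMemcpyHostToDevice);
        vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
        cudaMemcpy(c + off, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(da); cudaFree(db); cudaFree(dc);
    }
    printf("c[0] = %.1f (expect 3.0)\n", c[0]);
    free(a); free(b); free(c);
    return 0;
}
```

A real workload would use cudaMemcpyAsync with one stream per device so both dies run concurrently; the plain loop above processes them one after the other, but it shows the device-per-die model.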
K80 Background (credits to TPU):
The Tesla K80 was a professional graphics card by NVIDIA, launched in November 2014. Built on the 28 nm process, and based on the GK210 graphics processor, in its GK210-885-A1 variant, the card supports DirectX 12. The GK210 graphics processor is a large chip with a die area of 561 mm² and 7,100 million transistors. The Tesla K80 combines two graphics processors to increase performance. It features 2496 shading units, 208 texture mapping units, and 48 ROPs per GPU. NVIDIA has paired 24 GB of GDDR5 memory with the Tesla K80, connected using a 384-bit memory interface per GPU (each GPU manages 12,288 MB). Each GPU operates at a base frequency of 562 MHz, which can be boosted up to 824 MHz, and the memory runs at 1253 MHz.
Being a dual-slot card, the NVIDIA Tesla K80 draws power from 1x 8-pin power connector, with power draw rated at 300 W maximum. This device has no display connectivity, as it is not designed to have monitors connected to it. Tesla K80 is connected to the rest of the system using a PCI-Express 3.0 x16 interface. The card measures 267 mm in length, and features a dual-slot cooling solution.
Additional Specs on the Nvidia Tesla K80 GPU:
- 4992 NVIDIA CUDA cores with a dual-GPU design (two GK210 chips on the same board)
- Up to 2.91 teraflops double-precision performance with NVIDIA GPU Boost
- Up to 8.73 teraflops single-precision performance with NVIDIA GPU Boost
- 24 GB of GDDR5 memory
- 480 GB/s aggregate memory bandwidth
- ECC protection for increased reliability
- Server-optimised to deliver the best throughput in the data center

Per-GPU figures (each GK210 die, unless noted):

Spec | Value |
---|---|
Memory Bus | 384 bit |
Bandwidth | 240.6 GB/s |
Bus Interface | PCIe 3.0 x16 (whole card) |
Base Clock | 562 MHz |
Boost Clock | 824 MHz |
Memory Clock | 1253 MHz (5 Gbps effective) |
Transistors | 7,100 million |
Pixel Rate | 42.85 GPixel/s |
Texture Rate | 171.4 GTexel/s |
FP32 (float) performance | 4.113 TFLOPS |
FP64 (double) performance | 1,371 GFLOPS (1:3) |
Shading Units | 2496 |
TMUs | 208 |
ROPs | 48 |
TDP | 300 W (whole card) |
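The aggregate and per-die numbers reconcile once you account for the dual-GPU layout: 2496 shading units × 2 FLOPs per clock × 824 MHz ≈ 4.11 TFLOPS FP32 per die, and 384 bit ÷ 8 × 5 Gbps effective = 240 GB/s per die, i.e. 480 GB/s for the card. NVIDIA's 8.73 TFLOPS aggregate figure assumes both dies at the card's top boost bin of 875 MHz (4992 × 2 × 0.875 GHz ≈ 8.74 TFLOPS). If you want to sanity-check what the runtime actually reports, here's a small query sketch of mine (illustrative; 192 cores per SM is the standard Kepler value):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int devs = 0;
    cudaGetDeviceCount(&devs);
    for (int d = 0; d < devs; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        // Kepler GK210 (compute capability 3.7) has 192 CUDA cores per SM;
        // each K80 die exposes 13 SMs = 2496 cores.
        const int coresPerSM = 192;
        double ghz = p.clockRate / 1e6;   // clockRate is reported in kHz
        double tflops = p.multiProcessorCount * coresPerSM * 2.0 * ghz / 1e3;
        printf("Device %d: %s | %d SMs | %.0f MHz | ~%.2f TFLOPS FP32 peak\n",
               d, p.name, p.multiProcessorCount, ghz * 1e3, tflops);
    }
    return 0;
}
```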
So this server has provisions for up to two 1.8" uSATA (micro-SATA) SSDs for your OS partition, etc. Thing is, the OEM Dell SSDs sometimes still go for upwards of $400, even in used condition. According to Dell documentation, these drives use a uSATA interface and have a 1.8" form factor. Try as I might, I'm having a difficult time getting clarity on exactly what I can use as a substitute for the overpriced OEM SSDs, because I don't want to pay through the nose for OEM drives if I can help it. I can get an adapter that converts from the 1.8" interface to a standard 2.5" one, but I'm still not sure whether that will work. There also seems to be some confusion about exactly which interface Dell is using here: some people are telling me it's uSATA, and others are telling me it's mSATA. I have attached pictures of the actual 1.8" SSD drive bays (in the C4130 server) and of the specific SATA connection on the server board. Ideally, I want to convert from the 1.8" connector to the standard 2.5" SATA interface so I can run a standard SSD. This will obviously require an adapter and then an extension, since the larger 2.5" drives will not fit in the 1.8" drive bays. So it will be a little messy, but I can't think of any other way to do this. Any suggestions are appreciated on how I can get around this little interface issue in the most painless way possible.
As I said earlier, I will also be using a Dell PowerEdge R620 that I have lying around here. (As you can see in some of the pictures, both servers will need a good cleaning before going back into service; sorry they are pretty dusty right now.) Some of the parts and upgrade plans are listed below.