Wednesday, June 27th 2018

GIGABYTE Expands AMD EPYC Family with New Density Optimized Server

GIGABYTE continues our active development of new AMD EPYC platforms with the release of the 2U 4-node H261-Z60, the first AMD EPYC variant of our Density Optimized Server Series. The H261-Z60 combines four individual hot-pluggable sliding node trays into a single 2U chassis. The node trays slide in and out easily from the rear of the unit.

EPYC Performance
Each node supports dual AMD EPYC 7000 series processors, with up to 32 cores, 64 threads and 8 channels of memory per CPU. Each node can therefore feature up to 64 cores and 128 threads of compute power. Memory-wise, each socket utilizes EPYC's 8 channels of memory with 1 x DIMM per channel / 8 x DIMMs per socket, for a total of 16 x DIMMs per node (over 2TB of memory supported per node).
This compute density can enable data center footprints to be reduced by up to 50% compared with standard 1U dual-socket servers. GIGABYTE has also recently demonstrated that our server design is well optimized for AMD EPYC by achieving some of the top SPEC CPU 2017 benchmark scores for AMD EPYC single-socket and dual-socket systems.

The R151-Z30 achieved the highest SPEC CPU 2017 benchmark result for a single-socket AMD Naples platform versus other vendors as of May 2018

The R181-Z91 achieved the second-highest SPEC CPU 2017 benchmark result for a dual-socket AMD Naples platform versus other vendors as of May 2018
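
For readers who want to sanity-check the figures above, here is a minimal back-of-the-envelope sketch. The 128GB DIMM size is an assumption for illustration only; actual supported module sizes depend on GIGABYTE's memory qualification list.

# Quick sanity check of the per-node and per-chassis figures quoted above.
# The 128 GB DIMM size is an assumption; supported module sizes depend on the QVL.
NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2
CORES_PER_CPU = 32          # top-end EPYC 7000-series part
THREADS_PER_CORE = 2
DIMMS_PER_SOCKET = 8        # 8 channels x 1 DIMM per channel
DIMM_SIZE_GB = 128          # assumed module capacity

cores_per_node = SOCKETS_PER_NODE * CORES_PER_CPU                        # 64
threads_per_node = cores_per_node * THREADS_PER_CORE                     # 128
memory_per_node_gb = SOCKETS_PER_NODE * DIMMS_PER_SOCKET * DIMM_SIZE_GB  # 2048

print(f"Per node:    {cores_per_node} cores / {threads_per_node} threads, "
      f"{memory_per_node_gb} GB RAM")
print(f"Per chassis: {cores_per_node * NODES_PER_CHASSIS} cores / "
      f"{threads_per_node * NODES_PER_CHASSIS} threads, "
      f"{memory_per_node_gb * NODES_PER_CHASSIS} GB RAM in 2U")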

Ultra-Fast Storage Support
In the front of the unit are 24 x 2.5" hot-swappable drive bays, providing 6 x SATA/SAS HDD or SSD drives per node. In addition, each node features dual M.2 ports (PCIe Gen3 x4) to support ultra-fast, ultra-dense NVMe flash storage devices. Dual M.2 support is double the capacity of competing products on the market.
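
As a rough point of reference for what "ultra-fast" means here, the line-rate math for a single PCIe Gen3 x4 M.2 port works out as follows (theoretical link bandwidth only, not a drive benchmark):

# Theoretical bandwidth of one PCIe Gen3 x4 M.2 port (line rate only;
# protocol overhead and real drive performance are ignored).
RAW_GT_PER_S = 8            # PCIe 3.0 signalling rate per lane
ENCODING = 128 / 130        # 128b/130b line encoding
LANES = 4

usable_gbps_per_lane = RAW_GT_PER_S * ENCODING      # ~7.88 Gb/s per lane
port_gb_per_s = usable_gbps_per_lane * LANES / 8    # ~3.94 GB/s per x4 port

print(f"~{port_gb_per_s:.2f} GB/s per M.2 port, "
      f"~{2 * port_gb_per_s:.2f} GB/s across both ports per node")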

Best-In-Class Expansion Flexibility
Dual 1GbE LAN ports are integrated into each node as a standard networking option. In addition, each node features 2 x half-length low-profile PCIe Gen3 x16 slots and 1 x OCP Gen3 x16 mezzanine slot for adding expansion options such as high-speed networking or RAID storage cards. GIGABYTE delivers best-in-class expansion slot options for this form factor.

Easy & Efficient Multi-Node Management
The H261-Z60 features a system-wide Aspeed CMC (Central Management Controller) and LAN module switch, connecting internally to the Aspeed BMCs integrated on each node. As a result, only one MLAN connection is required to manage all four nodes, which means less ToR (Top of Rack) cabling and fewer ports required on your top-of-rack switch (only one port instead of four for remote management of all nodes).

Ring Topology Feature for Multi-Server Management
Going a step further, the H261-Z60 also features the ability to create a "ring" connection for management of all servers in a rack. Only two switch connections are needed, while the servers are connected to each other in a chain. The ring will not be broken even if one server in the chain is shut down. This further reduces cabling and switch port usage for even greater cost savings and management efficiency.

Optional Ring Topology Kit must be added
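
To make the cabling savings concrete, here is a hedged back-of-the-envelope count for a hypothetical rack of 16 H261-Z60 chassis; the rack size is an assumption, while the per-chassis figures follow the article above.

# Management switch ports needed for a hypothetical rack of H261-Z60 chassis.
# Rack size (16 chassis) is an assumption; per-chassis figures per the article.
CHASSIS_PER_RACK = 16
NODES_PER_CHASSIS = 4

per_node_bmc = CHASSIS_PER_RACK * NODES_PER_CHASSIS   # one cable per node BMC: 64
cmc_mlan = CHASSIS_PER_RACK                           # one MLAN per chassis CMC: 16
ring_kit = 2                                          # chassis daisy-chained, two switch uplinks

print(f"Per-node BMC cabling:  {per_node_bmc} switch ports")
print(f"CMC, one MLAN/chassis: {cmc_mlan} switch ports")
print(f"Ring topology kit:     {ring_kit} switch ports")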

Efficient Power & Cooling
GIGABYTE's H261-Z60 is designed not only for greater compute density but also with better power and cost efficiency in mind. The system architecture features shared cooling and power for the nodes, with a dual fan wall of 8 (4 x 2) easy-swap fans and 2 x 2200W redundant PSUs. In addition, the nodes connect directly to the system backplane with GIGABYTE's Direct Board Connection Technology, resulting in less cabling and improved airflow for better cooling efficiency.

GIGABYTE's unrivalled expertise and experience in system design leverages and optimizes AMD EPYC's benefits to offer our customers a product precisely targeted at their needs: maximized compute resources in a limited footprint, with excellent expansion choices, management functionality and power & cooling efficiency.

Please visit b2b.gigabyte.com for more information on our complete product range.

11 Comments on GIGABYTE Expands AMD EPYC Family with New Density Optimized Server

#1
AnarchoPrimitiv
That's some serious density with respect to x86 cores... so 256 cores/512 threads per 2U enclosure? Crazy, and 16TB of RAM per 2U enclosure? Isn't Epyc 2 going to have 64-core processors? So then it'd be 512 cores per 2U?
#2
CheapMeat
Wow, that is indeed EPIC. I wonder if there's a quad-socket board out there for EPYC? I know it goes a bit against the grain of the marketing push against Intel. But I'm curious as to why two boards instead of one board for density. I'm guessing cost and complexity?
#3
Frick
Fishfaced Nincompoop
I had no idea Gigabyte made server stuff as well. How big are they?
#4
HTC
Stupid question: how do they cool the CPUs in these?
#5
GreiverBlade
HTC: Stupid question: how do they cool the CPUs in these?
well already answered, right?

"Efficient Power & Cooling
GIGABYTE's H261-Z60 is designed for not only greater compute density but also with better power and cost efficiency in mind. The system architecture features shared cooling and power for the nodes, with a dual fan wall of 8 (4 x 2) easy swap fans and 2 x 2200W redundant PSUs. In addition, the nodes connect directly to the system backplane with GIGABYTE's Direct Board Connection Technology, resulting in less cabling and improved airflow for better cooling efficiency. "

and with a standard chunk-of-metal heatsink, as they always did in low-U server racks
#6
TheinsanegamerN
HTC: Stupid question: how do they cool the CPUs in these?
Similar to BTX cooling. Fans on the front of the machine draw in cool air and push it through the machine; you can cool quite high TDPs in that manner. Look up Dell OptiPlex 620 SFF desktops: they were only about 3 inches tall but could cool 140-watt Pentium D ovens with a single 80mm fan. The processors in the server have simple copper heatsinks that reach almost to the top of the chassis, forcing the cold air to flow over them before exiting the server.

These will also typically be put into server rooms, and server rooms are designed specifically for cooling. The front of the server faces the "cold row," where powerful air conditioners keep the temperature around 60F and pump tons of air in, creating positive air pressure into the server. Ours at work have 3 AC outlets per 5 rows of servers, and each of the 3 units pumps out over 16,000 CFM. The forced cold air keeps heat from building up. Heat is exhausted into the other side of the server rack, where the AC system pulls its air from to keep air moving. Some low-power servers can run without their fans without overheating due to the sheer amount of cold air being pumped in (found that out the hard way when a server's fan controller blew up over the weekend and we couldn't get parts until the following Wednesday; it ran a little warmer, but never got over 60C. Not bad for dual 6-core Xeons and 24 HDDs).
Frick: I had no idea Gigabyte made server stuff as well. How big are they?
Not very big. They don't go for the same market as Dell/HP/Lenovo. GIGABYTE servers are more common in smaller businesses where there may be 2 or 3 servers for an entire company.
#8
uberknob1
HTC: Stupid question: how do they cool the CPUs in these?
Ever heard a server fan? Like a jet engine taking off. Servers are usually stored out of the way from users and people in general. With their high CPU and core counts they usually require a shedload of air being moved to cool them, which generates a lot of noise, but that's an afterthought compared to a high-spec PC, where performance and noise might both be paramount. Not to say that they probably also have air con being pumped into the room where they sit. Performance and reliability are king; noise is an afterthought when it comes to a server environment.
#9
IceShroom
CheapMeat: Wow, that is indeed EPIC. I wonder if there's a quad-socket board out there for EPYC? I know it goes a bit against the grain of the marketing push against Intel. But I'm curious as to why two boards instead of one board for density. I'm guessing cost and complexity?
Current-generation Zen-based EPYC CPUs support 1 & 2 sockets per board. That's why there are two separate boards.
#11
Imsochobo
CheapMeat: Wow, that is indeed EPIC. I wonder if there's a quad-socket board out there for EPYC? I know it goes a bit against the grain of the marketing push against Intel. But I'm curious as to why two boards instead of one board for density. I'm guessing cost and complexity?
4P servers are not offered by AMD because the niche is so niche that it doesn't make sense.
I doubt even Intel finds it worth doing, but they currently still do.

If software licensing moved to per-core instead of per-CPU, that would change.