Friday, October 22nd 2021

GIGABYTE Z690 AERO D Combines Function with Absolute Form

GIGABYTE's AERO line of motherboards and notebooks targets creators who like to game. The company is ready with a premium motherboard based on the Intel Z690 chipset, the Z690 AERO D. This has to be the prettiest-looking motherboard we've come across in a long time, and it appears to have the chops to match its beauty. The Socket LGA1700 motherboard uses large ridged aluminium heatsinks over the chipset, the M.2 NVMe slots, and a portion of the rear I/O shroud, while aluminium fin-stack heatsinks fed by heat-pipes cool the CPU VRM. You get two PCI-Express 5.0 x16 slots (x8/x8 with both populated). As an AERO series product, the board is expected to be loaded with connectivity relevant to content creators, although the box is missing a Thunderbolt logo. We expect at least 20 Gbps USB 3.2 Gen 2x2 ports, 10 GbE networking, and Wi-Fi 6E.
Source: HW_Reveal (Twitter)

40 Comments on GIGABYTE Z690 AERO D Combines Function with Absolute Form

#26
tabascosauz
NanochipHopefully it has Thunderbolt implemented, like its Vision and Designare predecessors.
AERO is just a continuation of Vision. Most likely the AERO D and G boards continue to be distinguished based on the D model having TB (TB4) and a POST code while the G does not.
#27
Mussels
Freshwater Moderator
TigerfoxAgain, not a single x1 slot! Whatever happened between Z390 and now? Where have all those slots gone? Where am I supposed to put my sound card, additional 2.5GbE/5GbE NICs, TV card, etc.?
Well, you have three slots, so a GPU and two add-in cards - x1 cards still work in the larger slots.

USB 3.2 at 5/10/20 Gbps can certainly handle most types of add-on devices.
#28
micropage7
sometimes i feel old school motherboard heatsinks were more interesting
#29
Valantar
asdkj1740intel's rules: on z590 the CPU's x16 lanes can be split across different slots, b560 can't.
so when we talk about an eligible chipset like z590, there is no further limitation on how the x16 can be split across slots by mobo vendors.
actually, the way gigabyte and msi upgraded the same m.2 slot on their z490 boards from gen3 (via the chipset) to gen4 (via the CPU) when an 11th gen CPU is installed is almost identical to what gigabyte did with the bottom x4 pcie slot on the z390 designare.

we don't discuss 8+4+4 on b560 because intel has it completely banned there - not even 8+8, no matter how many mux/demux chips are used.
btw, "thanks" to intel, the first x8 is still locked to x8 and can't be x4+x4 like amd allows on b550/x570 (4+4+4+4), so only three devices can be recognized on the first pcie x16 slot even though it carries x16 lanes.

8+4+4 is actually quite common these days on z490, z590, b550 and x570 because m.2 is involved - many z490 & z590 boards wire an m.2 slot to CPU lanes.
the strix e / f / a are all the same: they all support 8+4+4, with the last x4 going to an m.2 slot.
the strix e is unique in that it also supports 8+8 across two pcie slots, by having a total of six "ICs" responsible for that; the strix f and strix a only have four.
this type of IC is quite expensive, otherwise we should have seen them used more on entry-level b550 models to compensate for the mere 10 lanes the b550 chipset provides - and of course plx's prices are mad.
I know Bx60 boards can't do bifurcation (that's why they never had SLI certifications after all), I was talking about allowed/possible ways of bifurcating the x16 slot. Back in the day, bifurcating down to x4 wasn't possible - I've seen that in practice on many ITX boards with people using bifurcation risers to add extra m.2 and the like. But it seems that has changed, thankfully (I would guess Intel has moved from a 2x8 controller to a 4x4 controller for the x16 slot, though that's just a guess).

As for B550 boards, in my experience people have been furious about the few boards that are audacious enough to split the PEG port into x8 + m.2 slots - from what you're saying it seems attitudes have changed. Or it might just be the whole "if AMD does something it's bad, if Intel does the same it's great" phenomenon rearing its head once again. Either way, IMO this is a good thing, but again, it doesn't relate to the core of what we were discussing here: whether it's somehow bad to "restrict" your GPU to x8 for the sake of some other (likely low-bandwidth AIC).
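For anyone trying to keep track, here's a rough sketch of the splits being thrown around in this thread - these mirror the claims above, not official chipset documentation, so treat it as illustrative only:

```python
# Rough sketch of the CPU x16 bifurcation options being discussed above.
# These mirror the claims in this thread, not official Intel/AMD datasheets.
cpu_x16_splits = {
    "Intel B560": ["x16"],  # no bifurcation at all
    "Intel Z490/Z590": ["x16", "x8+x8", "x8+x4+x4"],
    "Intel X299 (reportedly)": ["x16", "x8+x8", "x8+x4+x4", "x4+x4+x4+x4"],
    "AMD B550/X570": ["x16", "x8+x8", "x8+x4+x4", "x4+x4+x4+x4"],
}

def max_devices(split: str) -> int:
    """How many separate devices a given split can feed from the slot's 16 lanes."""
    return len(split.split("+"))

for platform, splits in cpu_x16_splits.items():
    finest = max(splits, key=max_devices)
    print(f"{platform}: finest split {finest} -> up to {max_devices(finest)} devices")
```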
asdkj1740the pi3dbs16412, a popular one on x570, b550 and z490, is called a "multiplexer/demultiplexer switch" by diodes/pericom.
since they handle two pcie lanes each, we generally call them pcie switches... (switch / quick switch / pcie switch / mux demux).
Yes, that is indeed a
Valantarmultiplexer/mux
I don't know who you're talking about when you say "we", but in PC hardware circles "PCIe switch" generally means a PLX switch, with muxes being called muxes specifically to differentiate these two very different categories of ICs, which have dramatically different functions. Yes, they both technically switch circuits in some way, but that's about as much as they have in common. If you're talking about a mux, call it a mux, as "switch" is not an accurate term for what it does in any frequently used meaning of the word. As for your claim that
asdkj1740this type of IC is quite expensive
No. The one you mention is €2.50 if you as a consumer are buying one from Mouser, but with volume pricing a full 3500-piece reel brings that down to €1.34/piece. And that's from a distributor - motherboard makers will be buying these directly from the manufacturer in even higher quantities, which means significantly lower prices. Definitely less than $1 apiece. Now, that isn't insignificant - ten of them would then mean a $10-ish BOM increase, which is definitely noticeable - but it's not expensive. Compared to actual PCIe switches it's nothing - a 4-lane, 4-port (i.e. up to 1-to-3 layout) PCIe 2.0 switch is €22.86 (with volume pricing bringing that down to €20 at 500 or more), with higher lane/port counts and data rates easily exceeding €100 apiece.
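Just to put rough numbers on that, a back-of-envelope using the Mouser figures above - the sub-$1 direct-from-manufacturer price is my assumption, not a quote:

```python
# Back-of-envelope BOM impact, based on the distributor prices quoted above.
# The direct-from-manufacturer price is an assumption ("definitely less than $1").
mux_reel_price_eur = 1.34    # per piece, full 3500-piece reel at Mouser
mux_oem_price_eur = 1.00     # assumed volume price straight from the manufacturer
plx_like_switch_eur = 22.86  # 4-lane, 4-port PCIe 2.0 switch, single unit

muxes_on_board = 10
print(f"{muxes_on_board} muxes at reel pricing: ~{muxes_on_board * mux_reel_price_eur:.2f} EUR added to the BOM")
print(f"{muxes_on_board} muxes at OEM guess:    ~{muxes_on_board * mux_oem_price_eur:.2f} EUR added to the BOM")
print(f"A single small PCIe 2.0 switch:  ~{plx_like_switch_eur:.2f} EUR")
```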
#30
Sabishii Hito
TheLostSwedeNot many people were laid off in Taiwan, since the pandemic has not really affected Taiwan.
One benefit of being a nation that understands what's going on in the PRC and being able to act on it early.
Not to mention not being a nation of idiots that think personal rights are more important than public health and think their uneducated opinions are equally valid to those of experts in the field of medicine.
#31
TheLostSwede
News Editor
micropage7sometimes i feel old school motherboard heatsinks were more interesting
Ahhh... I miss AOpen. Great people, good products and some interesting ideas back in the day.

MSI still had the funkiest heatsinks.

#32
Valantar
Sabishii HitoNot to mention not being a nation of idiots that think personal rights are more important than public health and think their uneducated opinions are equally valid to those of experts in the field of medicine.
Hey, now, the inalienable right to an early and unpredictable death is sacred to Americans, whether that be by accidental gunfire from a toddler or a virus for which an effective vaccine exists. Don't look down on other cultures just because their ways seem quaint and uncivilized to you - that's rude.
#33
asdkj1740
ValantarI know Bx60 boards can't do bifurcation (that's why they never had SLI certifications after all), I was talking about allowed/possible ways of bifurcating the x16 slot. Back in the day, bifurcating down to x4 wasn't possible - I've seen that in practice on many ITX boards with people using bifurcation risers to add extra m.2 and the like. But it seems that has changed, thankfully (I would guess Intel has moved from a 2x8 controller to a 4x4 controller for the x16 slot, though that's just a guess).

As for B550 boards, in my experience people have been furious about the few boards that are audacious enough to split the PEG port into x8 + m.2 slots - from what you're saying it seems attitudes have changed. Or it might just be the whole "if AMD does something it's bad, if Intel does the same it's great" phenomenon rearing its head once again. Either way, IMO this is a good thing, but again, it doesn't relate to the core of what we were discussing here: whether it's somehow bad to "restrict" your GPU to x8 for the sake of some other (likely low-bandwidth AIC).

Yes, that is indeed a

I don't know who you're talking about when you say "we", but in PC hardware circles "PCIe switch" generally means a PLX switch, with muxes being called muxes specifically to differentiate these two very different categories of ICs, which have dramatically different functions. Yes, they both technically switch circuits in some way, but that's about as much as they have in common. If you're talking about a mux, call it a mux, as "switch" is not an accurate term for what it does in any frequently used meaning of the word. As for your claim that

No. The one you mention is €2.50 if you as a consumer are buying one from Mouser, but with volume pricing a full 3500-piece reel brings that down to €1.34/piece. And that's from a distributor - motherboard makers will be buying these directly from the manufacturer in even higher quantities, which means significantly lower prices. Definitely less than $1 apiece. Now, that isn't insignificant - ten of them would then mean a $10-ish BOM increase, which is definitely noticeable - but it's not expensive. Compared to actual PCIe switches it's nothing - a 4-lane, 4-port (i.e. up to 1-to-3 layout) PCIe 2.0 switch is €22.86 (with volume pricing bringing that down to €20 at 500 or more), with higher lane/port counts and data rates easily exceeding €100 apiece.
www.intel.com/content/dam/www/public/us/en/documents/product-briefs/z75-z77-express-chipset-brief.pdf
starting from ivy bridge (on z77), intel expanded this to x8+x4+x4. dude, that was almost ten years ago...
the itx pcie x16 slot situation is strange indeed. i tried to plug in my wifi pcie card and the itx mobo couldn't even recognize it.
but no, it's still 8+4+4 even now on z-series chipsets; i heard x299 can support 4+4+4+4 on the first pcie x16 slot.
what i meant is it's not really "thanks to intel": amd only needs a bios-level settings tweak to implement 4+4+4+4, while even on an eligible intel chipset mobo vendors have to change the hardware design to support it.

go check the b550, x570, z490, z590 and upcoming z690 product pages from the big four mobo vendors - they all call these muxes "switches".
asmedia calls the asm1480 a pcie switch.
it's just a name lol. nowadays when ppl say pcie switch, i guess 90% of ppl won't think of plx at all.

there was an asus r&d engineer who said that even though vishay drmos may look dirt cheap online / very close to discrete MOSFETs, to mobo manufacturers discrete mosfets are in fact still way cheaper than drmos.
so we'll never know the prices of these ICs at the mass quantities mobo vendors actually purchase.
fun fact: asus used lots of asm1480 gen3 muxes on its z490 boards, while gigabyte and msi used gen4 muxes across almost their entire z490 lineups.
also, if muxes were that cheap for them to use, we should have seen the z590 strix e/f/a support 8+8 too.
#34
Why_Me
Sabishii HitoNot to mention not being a nation of idiots that think personal rights are more important than public health and think their uneducated opinions are equally valid to those of experts in the field of medicine.
Please tell me you weren't sending money to China in order to help them develop a deadly virus.
#35
Nanochip
tabascosauzAERO is just a continuation of Vision. Most likely the AERO D and G boards continue to be distinguished based on the D model having TB (TB4) and a POST code while the G does not.
That is my expectation too. But I guess we’ll know for sure in about 2 weeks or less.
#36
Valantar
asdkj1740www.intel.com/content/dam/www/public/us/en/documents/product-briefs/z75-z77-express-chipset-brief.pdf
starting from ivy bridge (on z77), intel expanded this to x8+x4+x4. dude, that was almost ten years ago...
the itx pcie x16 slot situation is strange indeed. i tried to plug in my wifi pcie card and the itx mobo couldn't even recognize it.
but no, it's still 8+4+4 even now on z-series chipsets; i heard x299 can support 4+4+4+4 on the first pcie x16 slot.
what i meant is it's not really "thanks to intel": amd only needs a bios-level settings tweak to implement 4+4+4+4, while even on an eligible intel chipset mobo vendors have to change the hardware design to support it.
That's definitely interesting - back when I was really looking into bifurcation (mostly around 1st gen Ryzen/Kaby Lake) there was a lot of discussion about whether this was possible at all - and on most boards it simply wasn't, but there was no documentation found showing whether this was a hardware or BIOS limitation. I guess this confirms that it was just down to poor BIOS implementations from OEMs rather than a hardware issue, which makes it all the more frustrating.
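If anyone wants to verify what their board actually negotiated after flipping a bifurcation setting, a minimal sketch like this works on Linux - it just reads the standard sysfs PCIe link attributes, nothing board- or vendor-specific:

```python
# Minimal sketch: print the negotiated link width/speed of every PCIe device (Linux only).
# Handy as a sanity check after toggling bifurcation in the BIOS.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        width = (dev / "current_link_width").read_text().strip()
        speed = (dev / "current_link_speed").read_text().strip()
    except OSError:
        continue  # bridges/functions without an exposed link
    print(f"{dev.name}: x{width} @ {speed}")
```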
asdkj1740go check the b550, x570, z490, z590 and upcoming z690 product pages from the big four mobo vendors - they all call these muxes "switches".
asmedia calls the asm1480 a pcie switch.
it's just a name lol. nowadays when ppl say pcie switch, i guess 90% of ppl won't think of plx at all.
Honestly, I have no idea what you're referring to here. After scanning the product pages for a not insignificant number of Asus and ASRock boards (sorry, but after 10+ boards I can't be bothered to look at any other OEMs) there isn't a single explicit mention of PCIe switching of any kind, whether mux or PLX, only the generic "4.0x16/4.0x8+4.0x8" designations. I'm also not finding that many boards with triple-split 4.0 setups (including high end/flagship Z590 boards) - most of them seem to have the third slot at PCIe 3.0 speeds only, which means it's connected to the chipset. There are zero mentions of the mux hardware used on product pages, spec sheets, or in manuals. If you have other examples to show, please do, but from what I've seen here it sounds like you're confusing something else (things discussed in reviews?) with product pages, and if so, the wording used is on whoever wrote that, not the OEM. And reviewers and forum users are well known to often use misleading terminology to describe features.

You're right that PLX switches are rare to the point where they're almost forgotten, as the death of multi-GPU means there's no reason to add that $100+ BOM cost to motherboards any longer. But that's still what "PCIe switch" generally means - in part because muxes are generally not discussed at all. This might be changing with motherboards having more complex PCIe layouts, but this still seems quite rare.
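To illustrate the distinction I'm making, here's a toy model - purely conceptual, not tied to any real driver, API or part number:

```python
# Toy model of the two very different "switch" categories discussed here.
# A lane mux statically routes its lanes to exactly ONE destination (picked at
# boot/by BIOS strap), while a PLX-style packet switch keeps every downstream
# port live and shares upstream bandwidth between them. Purely illustrative.

class LaneMux:
    """ASM1480-class 2:1 mux: the lanes go to one destination or the other."""
    def __init__(self, path_a: str, path_b: str):
        self.paths = (path_a, path_b)
        self.selected = 0  # set once at boot, not per-packet

    def active_path(self) -> str:
        return self.paths[self.selected]

class PacketSwitch:
    """PLX-style switch: one upstream link fans out to several live downstream ports."""
    def __init__(self, downstream_ports: list[str]):
        self.downstream_ports = downstream_ports

    def active_paths(self) -> list[str]:
        return self.downstream_ports  # all endpoints enumerate simultaneously

mux = LaneMux("PCIe x16 slot (upper 8 lanes)", "second x16 slot")
plx = PacketSwitch(["GPU 1", "GPU 2", "10 GbE NIC"])
print("Mux currently feeds:", mux.active_path())
print("Switch feeds, all at once:", plx.active_paths())
```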
asdkj1740there was an asus r&d engineer who said that even though vishay drmos may look dirt cheap online / very close to discrete MOSFETs, to mobo manufacturers discrete mosfets are in fact still way cheaper than drmos.
so we'll never know the prices of these ICs at the mass quantities mobo vendors actually purchase.
fun fact: asus used lots of asm1480 gen3 muxes on its z490 boards, while gigabyte and msi used gen4 muxes across almost their entire z490 lineups.
also, if muxes were that cheap for them to use, we should have seen the z590 strix e/f/a support 8+8 too.
If anything, the use of gen4 muxes on (mostly) gen3 boards shows that cost is not much of an issue with these, which kind of undermines what you were saying earlier. Comparing muxes and PLX switches to discrete MOSFETs and DrMOS is quite misleading though - the latter pairing is an evolution of sorts, with the more advanced component replacing already required components, while the former are two entirely different types of components (neither of which is necessary for the board to function), where one can theoretically replace the other but doesn't share the same functionality and isn't a development of the same principles.

As for pricing: 60A smart power stages cost €1.37-1.70/piece for a 3000-piece reel on Mouser (no, these aren't necessarily the same parts used on motherboards, but they're the same type of product). At those price levels it doesn't take much for one of these to be cheaper than two discrete MOSFETs - and again, OEMs buy huge volumes directly from manufacturers, and pay less than this. These are, once again, not particularly expensive components - but of course, if you add 16 of them to a board, that does bump up the BOM cost quite a bit. But you're still making a very skewed comparison here.
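Napkin math on that, using only the reel prices quoted above (no discrete-MOSFET figure is quoted here, so I'm not inventing one for comparison):

```python
# Cost of a 16-stage VRM at the quoted 3000-piece reel prices for 60 A smart power stages.
sps_low_eur, sps_high_eur = 1.37, 1.70
stage_count = 16
print(f"{stage_count} power stages: ~{stage_count * sps_low_eur:.2f}-{stage_count * sps_high_eur:.2f} EUR on the BOM")
```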
#37
asdkj1740


not here to argue anything, just trying to show you what i've seen on product pages so far.
i understand you insist on differentiating them, and i do understand they are not the same type of IC, but honestly those are the terms widely used by mobo vendors and reviewers these days.

btw, for the "split" design it's rare to see the third x4 go to a pcie slot (strix a z590); some boards route it to an m.2 slot instead.
asrock even has a double split where the last x8 goes to a lan IC to provide what they call gaming lan.
some entry-level msi b560 mobos do have a gen4 mux, used to give the same m.2 slot that's prewired to the CPU a gen3 SSD fallback from the chipset, and msi also used the same mux to provide sata m.2 support on some low-end models.
otherwise you only really see these muxes used from mid-range to high-end models.

for the pricing thing, what i'm trying to say is don't use online pricing to estimate what mobo manufacturing actually costs.
i also said that if those ICs were that cheap, we should have seen them used more on low-end models.
of course, what that guy (r&d) said could be untrue - just an excuse for them to keep using poor vrm setups.
one thing's for sure: plx is damn expensive.
#38
Valantar
asdkj1740

not here to argue anything, just trying to show you what i've seen on product pages so far.
i understand you insist on differentiating them, and i do understand they are not the same type of IC, but honestly those are the terms widely used by mobo vendors and reviewers these days.

btw, for the "split" design it's rare to see the third x4 go to a pcie slot (strix a z590); some boards route it to an m.2 slot instead.
asrock even has a double split where the last x8 goes to a lan IC to provide what they call gaming lan.
some entry-level msi b560 mobos do have a gen4 mux, used to give the same m.2 slot that's prewired to the CPU a gen3 SSD fallback from the chipset, and msi also used the same mux to provide sata m.2 support on some low-end models.
otherwise you only really see these muxes used from mid-range to high-end models.

for the pricing thing, what i'm trying to say is don't use online pricing to estimate what mobo manufacturing actually costs.
i also said that if those ICs were that cheap, we should have seen them used more on low-end models.
of course, what that guy (r&d) said could be untrue - just an excuse for them to keep using poor vrm setups.
one thing's for sure: plx is damn expensive.
Hm, interesting. Is that an MSI board? Uh, yes, just spotted it in the image, lol. Can't tell what the first one is. Still, I should have looked closer first instead of going through something like 30 product pages to try and find that :p Seems like they've abandoned that marketing language since Z490 though - I can't find a single mention of "PCIe switches" or muxes or anything to that effect (nor any similar illustrations) on any of their Z590 or Z690 boards. To me that reads a lot like someone in PR trying to find a consumer-friendly word for a mux, asking an engineer "but what does it do?", and then latching onto the word "switch" from an explanation along the lines of "it switches between two signal paths". Though I might of course be overthinking this.

Still, that is the only time I've ever seen or heard of a mux being spoken of as a PCIe switch, and it seems to have been short-lived on their part as well. A good thing IMO - using terminology already established for something to describe another function entirely is pretty bad practice. I get that "switching" can mean many, many different things, and that a mux is something the average user is much more likely to encounter than a PLX chip, but either way, co-opting existing terminology like that is just generally a bad idea. All it does is make it more difficult to communicate clearly.

It's possible I wasn't clear before, but I don't think I saw a single board where there was any note of the PCIe x16 slot sharing bandwidth with anything but the second PCIe x16 slot. There might have been some that I missed, but most of the bandwidth-sharing diagrams I saw were pretty clear in their markings (with those two slots marked with "1" or "A" and the m.2 and SATA that share bandwidth with "2"/"B" and the like). I'm sure there are still some, especially among the higher end boards with crazy amounts of connectivity (likely none of them want to splurge on PLX switches for their m.2 slots).

Edit: oh, nvm, I found a mention of this on the Gigabyte Z590 Aorus Xtreme (and, after checking, the Z690 Aorus Xtreme as well), so it seems Gigabyte is still doing this. I'm glad to see MSI has stopped, though. (Kind of beside the point, but I find advertising the existence of muxes for bifurcation between PCIex16_1 and _2 to be utterly weird given that this has been a standard feature on higher end boards for ... what, a decade? But I guess that's what happens when PR departments really need to sell their high end boards.)
#39
asdkj1740
ValantarHm, interesting. Is that an MSI board? Uh, yes, just spotted it in the image, lol. Can't tell what the first one is. Still, I should have looked closer first instead of going through something like 30 product pages to try and find that :p Seems like they've abandoned that marketing language since Z490 though - I can't find a single mention of "PCIe switches" or muxes or anything to that effect (nor any similar illustrations) on any of their Z590 or Z690 boards. To me that reads a lot like someone in PR trying to find a consumer-friendly word for a mux, asking an engineer "but what does it do?", and then latching onto the word "switch" from an explanation along the lines of "it switches between two signal paths". Though I might of course be overthinking this.

Still, that is the only time I've ever seen or heard of a mux being spoken of as a PCIe switch, and it seems to have been short-lived on their part as well. A good thing IMO - using terminology already established for something to describe another function entirely is pretty bad practice. I get that "switching" can mean many, many different things, and that a mux is something the average user is much more likely to encounter than a PLX chip, but either way, co-opting existing terminology like that is just generally a bad idea. All it does is make it more difficult to communicate clearly.

It's possible I wasn't clear before, but I don't think I saw a single board where there was any note of the PCIe x16 slot sharing bandwidth with anything but the second PCIe x16 slot. There might have been some that I missed, but most of the bandwidth-sharing diagrams I saw were pretty clear in their markings (with those two slots marked with "1" or "A" and the m.2 and SATA that share bandwidth with "2"/"B" and the like). I'm sure there are still some, especially among the higher end boards with crazy amounts of connectivity (likely none of them want to splurge on PLX switches for their m.2 slots).

Edit: oh, nvm, I found a mention of this on the Gigabyte Z590 Aorus Xtreme (and, after checking, the Z690 Aorus Xtreme as well), so it seems Gigabyte is still doing this. I'm glad to see MSI has stopped, though. (Kind of beside the point, but I find advertising the existence of muxes for bifurcation between PCIex16_1 and _2 to be utterly weird given that this has been a standard feature on higher end boards for ... what, a decade? But I guess that's what happens when PR departments really need to sell their high end boards.)
edited: the pics above: asrock, gigabyte, msi.

btw, msi has just explained that this time the 12th gen CPU's x16 pcie 5.0 lanes can only be split x8+x8. NO x8+x4+x4. the good old times are back lol.
it means supporting a single gen5 x4 SSD on z690 would then "require" / "consume" x8 of the pcie 5.0 lanes.
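for a sense of scale, the per-lane math (standard PCIe 5.0 figures, nothing board-specific):

```python
# Rough PCIe 5.0 throughput: 32 GT/s per lane with 128b/130b encoding,
# i.e. about 3.94 GB/s of raw link bandwidth per lane before protocol overhead.
RAW_GT_PER_LANE = 32.0
ENCODING = 128 / 130

gb_per_s_per_lane = RAW_GT_PER_LANE * ENCODING / 8  # GB/s per lane

for lanes in (4, 8, 16):
    print(f"Gen5 x{lanes}: ~{lanes * gb_per_s_per_lane:.1f} GB/s")
# A Gen5 x4 SSD tops out around ~15.8 GB/s, so the other four lanes of that x8 go unused.
```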
#40
Valantar
asdkj1740asrock, msi
gigabyte.

btw, msi has just explained that this time the 12th gen CPU's x16 pcie 5.0 lanes can only be split x8+x8. NO x8+x4+x4. the good old times are back lol.
If I were to guess, it would have required server-grade PCBs ($$$) and a ridiculous number of retimers to trifurcate(?) PCIe 5.0 (x8 for the second slot plus another x4 for the m.2, possibly x8 due to longer traces), and they didn't want to push things that far. Given that nobody will actually meaningfully utilize any type of PCIe 5.0 connectivity for quite a few years (no, QD32 benchmarks do not count), I completely agree with that decision. Let the bandwidth fetishists fight over whichever $1000+ boards might include this (or just use some $200+ m.2 5.0 riser card). The rest of us can hope that someone still makes a sub-$200 motherboard worth buying.