Tuesday, March 8th 2016

Dune Case is the Mac Pro Lookalike That Everyone Can Have

Dune Case is a premium small form-factor PC case that should instantly remind you of the latest Apple Mac Pro, but packs whatever hardware (and software) you wish it had, within its 260 mm x 215 mm x 215 mm dimensions. Its designer is raising $130,000 on Kickstarter to bring it to shelves, and has already raised half that amount, with 8 days of fundraising to go. You can back the project from this page. The aluminium case comes in rose gold and black.

Unlike the Mac Pro, which features a cylindrical design with key components arranged along three sides of a triangular structure and cooling machinery taking up its center, the Dune Case takes a different approach. It's split into three compartments running vertically along the bore of the cylinder: the center holds a mini-ITX motherboard, one side has room for a full-height graphics card under 185 mm long (connected to the motherboard through a PCIe riser), and the third compartment holds two 2.5-inch drives. A custom rear panel plugs into your motherboard, and an SFX power supply (not included) fuels your build. The main air channel is a 140 mm fan that exhausts warm air from the top.

68 Comments on Dune Case is the Mac Pro Lookalike That Everyone Can Have

#51
chlamchowder
cdawall: Lowering from boost clocks isn't throttling. I am talking about throttling; flat-line 300 MHz 2D clocks is throttling, stepping out of boost is not. All more dust means is it will step out of boost faster.
That I disagree on. When you buy a chip with a turbo or boost speed of 3.8 GHz, you are paying for a chip binned to run properly at that frequency. If the chip isn't running at the frequency it's guaranteed to reach and be stable at because of thermal restrictions, I'd call that throttling.

Modern chips throttle more elegantly and gradually than older chips, which just took a massive performance hit by dropping to the lowest multiplier or inserting idle cycles.
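To illustrate the contrast, here's a toy sketch in Python. The function names, clock floors, and thresholds are all made up for illustration, not any vendor's actual algorithm:

```python
# Toy contrast of two throttling styles (illustrative numbers only,
# not any real chip's behavior).

def legacy_throttle(freq_ghz: float, over_temp: bool) -> float:
    """Old-style: on over-temperature, slam straight to a floor clock
    (or duty-cycle the clock), taking a large, abrupt performance hit."""
    return 0.3 if over_temp else freq_ghz  # e.g. a flat 300 MHz

def modern_throttle(freq_ghz: float, temp_c: float,
                    t_limit: float = 100.0, step_ghz: float = 0.1) -> float:
    """Modern-style: shed one small frequency bin per control tick while
    over the limit, so performance degrades gradually instead."""
    if temp_c >= t_limit:
        return max(freq_ghz - step_ghz, 0.8)  # never below an 800 MHz floor
    return freq_ghz

print(legacy_throttle(3.8, over_temp=True))          # 0.3
print(round(modern_throttle(3.8, temp_c=101.0), 2))  # 3.7
```

The second approach is what current chips do: repeated small steps, re-evaluated every control tick, rather than one cliff.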
cdawall: I have 7950s; the upper card runs in the 90s, the lower in the 80s, and they have been for years. ... 80 C is NORMAL for a GPU.
80 C is fine. 84-85 C is also fine. But I'd personally prefer my video cards and processors not to be in the 90s. Besides being a high temperature, it leaves little headroom before they start reducing clocks.
#52
cdawall
where the hell are my stars
chlamchowder: That I disagree on. When you buy a chip with a turbo or boost speed of 3.8 GHz, you are paying for a chip binned to run properly at that frequency. If the chip isn't running at the frequency it's guaranteed to reach and be stable at because of thermal restrictions, I'd call that throttling.
You would be wrong according to Intel, AMD, and Nvidia; what you think and what the real world says are two different things. You are paying for a CPU that can run at 3.2 GHz and peak at 3.8 GHz under specific conditions.
#53
chlamchowder
cdawall: You would be wrong according to Intel, AMD, and Nvidia; what you think and what the real world says are two different things. You are paying for a CPU that can run at 3.2 GHz and peak at 3.8 GHz under specific conditions.
Right. According to Intel, AMD, and Nvidia, that highest speed is not guaranteed and can only be reached under specific conditions. But they're only labeling the chips that way so they can specify a lower TDP in the spec sheet. Then, OEMs can sell the chips in devices where the cooling can't keep up (ultrabooks, tablets), and not look bad doing it.

Think about what Intel has to do to sell a chip that runs "up to 3.8 GHz". They have to make sure that chip is rock solid at 3.8 GHz at stock voltage. Otherwise people are going to hit problems when the chip turbos to that speed, much like what happens if you overclock too far. I get that throttling has a bad connotation and that Intel/AMD/Nvidia would prefer to avoid it. But in the end, if the chip clocks down due to thermal constraints, you've paid for a chip binned to hit 3.8 GHz but not getting that speed.
#54
cdawall
chlamchowder: Right. According to Intel, AMD, and Nvidia, that highest speed is not guaranteed and can only be reached under specific conditions. But they're only labeling the chips that way so they can specify a lower TDP in the spec sheet. Then, OEMs can sell the chips in devices where the cooling can't keep up (ultrabooks, tablets), and not look bad doing it.

Think about what Intel has to do to sell a chip that runs "up to 3.8 GHz". They have to make sure that chip is rock solid at 3.8 GHz at stock voltage. Otherwise people are going to hit problems when the chip turbos to that speed, much like what happens if you overclock too far. I get that throttling has a bad connotation and that Intel/AMD/Nvidia would prefer to avoid it. But in the end, if the chip clocks down due to thermal constraints, you've paid for a chip binned to hit 3.8 GHz but not getting that speed.
The VID goes up in turbo mode, clown.
#55
chlamchowder
cdawall: The VID goes up in turbo mode, clown.
I think we both understand that VID scales with clock frequency. Before you resort to name-calling after making a misguided inference (that I'm implying the stock voltage stays at a fixed value regardless of frequency), please ask me to clarify. Perhaps it was my mistake not to make it abundantly clear that stock voltage is variable; I assumed that was well understood, because chips have been doing that for years. But I would still prefer that the discussion remain civil. If you're confused, please ask. I don't bite :)

To reiterate my previous point and clarify, the manufacturer could pick a crazy high stock voltage for turbo (AMD does that sometimes) so more chips fit into the "can stay stable at 3.8 GHz" bin. That doesn't matter at all. I still don't want to pay for a chip that is binned to run at 3.8 GHz and not get that speed due to thermal constraints.

To clarify further, take the Silicon Lottery site, for example. If you paid for a chip they tested to run at 4.5 GHz, and then ended up running it at 3.5 GHz because of thermal constraints, is that a good use of the extra money?
#56
PP Mguire
Dude, turbo/GPU boost is not throttling. It's upping clocks under heavy load to gain better performance for a short period of time. That's why it's called BOOST. Thermal throttling is when the software can't keep the high temps reasonable and clocks drop BELOW stock. Boost/turbo is not absolute.
#57
chlamchowder
PP Mguire: Dude, turbo/GPU boost is not throttling. It's upping clocks under heavy load to gain better performance for a short period of time. That's why it's called BOOST. Thermal throttling is when the software can't keep the high temps reasonable and clocks drop BELOW stock. Boost/turbo is not absolute.
Alright, if we don't want to call it throttling, then call it "downclocking due to thermal constraints". Perhaps that's more accurate and consistent with Intel/AMD/Nvidia's marketing materials. Either way, performance is being lost. And it's not like overclocking, which is never guaranteed. If a processor is specified to boost to 3.8 GHz, there's no guarantee you'll hit 4.0 GHz. But the chip is guaranteed to hit 3.8 GHz.

I prefer a system in which the chip in question doesn't downclock due to thermal constraints. Is that fair?
#58
PP Mguire
Except it's not downclocking due to thermal constraints. They aren't guaranteed to operate at turbo or boost frequencies 24/7. That's why it's called turbo/boost, like I said in the post above. In the case of an Intel processor, it will only boost for as long as needed then go back down. For Nvidia GPUs, they boost for as long as they can or are needed. They aren't meant to run at that top frequency indefinitely.
#59
cdawall
My Cobra can do 160 mph. It cannot do 160 mph for its entire life. Boost works the same way.

As for the VID, I don't think you realize how high the 9370/9590 go...
#60
PP Mguire
cdawall: My Cobra can do 160 mph. It cannot do 160 mph for its entire life. Boost works the same way.

As for the VID, I don't think you realize how high the 9370/9590 go...
You uh, drive that thing in "Mexico"?
#61
cdawall
PP Mguire: You uh, drive that thing in "Mexico"?
I have tickets that say US, unluckily. It'll go faster, but I only have so much open road.
#62
chlamchowder
PP Mguire: it will only boost for as long as needed then go back down
Again we are getting confused. There's a difference between dropping clocks and voltages when the chip is idle (when there's no workload to slow down), and dropping clocks/voltages when the user is loading the chip and waiting for it. What I'm talking about here is reducing clocks and voltages to prevent temperatures from exceeding a certain threshold while under load, reducing performance. Let's leave downclocking during idle out of the discussion.
cdawall: I have tickets that say US, unluckily. It'll go faster, but I only have so much open road.
The car comparison is poor here for too many reasons to list. For starters, sustaining maximum frequency is much more common for GPUs and CPUs. People run video games for hours every day, which puts the GPU at full load. Code compilation, rendering, image processing, simulations, some games, and even some badly designed web pages will put the CPU under sustained load. Almost nobody will hit their car's maximum rated speed. Every computer user will hit their CPU's maximum rated speed. And a lot of "power" users run workloads that will keep the CPU at that speed if surrounding components don't power limit or thermally limit the CPU. I've seen Intel CPUs draw way over their rated TDP at turbo, so I don't think the power limit (if any) is an issue. Usually I see clocks drop when the CPU hits some thermal threshold (upper 80s for a Surface Pro 3, 99 C for my older HP laptop).

The VID used has absolutely zero relevance to the discussion. Can we agree on that? I don't care if AMD shoves 3 V through the CPU to make it stable at that speed, as long as it lasts. What matters is that the processor is binned to be stable at that frequency, and be stable there for a really long time even under max load. If it's not, it should be RMA-ed. And if Intel/AMD/Nvidia put out a chip that became unstable at boost frequencies after being pushed there for sustained workloads, there'd be a ton of RMAs.

But yes, I get your point - Intel, AMD, and Nvidia do not guarantee turbo/boost clocks will be sustained (so those chips can be used by OEMs in designs unable to cool them if the chip was continuously run at the speed it was binned to be stable at). My personal gripe is just with paying for a chip guaranteed to reach a certain frequency, and then not using it there. I understand you guys are fine with that. I guess we should agree to disagree here.
#63
cdawall
The car comparison is fair: a car can do xxx speed and cannot do it forever. CPUs are the same way. Even in a server environment with no noise constraints and good cooling, there is no CPU or GPU that will stay in boost the entire time. The only thing these CPUs are guaranteed to do is hit those speeds, typically across a limited number of cores for a limited amount of time. Hence the name boost, and not standard clock.
#64
PP Mguire
chlamchowder: Again we are getting confused. There's a difference between dropping clocks and voltages when the chip is idle (when there's no workload to slow down), and dropping clocks/voltages when the user is loading the chip and waiting for it. What I'm talking about here is reducing clocks and voltages to prevent temperatures from exceeding a certain threshold while under load, reducing performance. Let's leave downclocking during idle out of the discussion.

The car comparison is poor here for too many reasons to list. For starters, sustaining maximum frequency is much more common for GPUs and CPUs. People run video games for hours every day, which puts the GPU at full load. Code compilation, rendering, image processing, simulations, some games, and even some badly designed web pages will put the CPU under sustained load. Almost nobody will hit their car's maximum rated speed. Every computer user will hit their CPU's maximum rated speed. And a lot of "power" users run workloads that will keep the CPU at that speed if surrounding components don't power limit or thermally limit the CPU. I've seen Intel CPUs draw way over their rated TDP at turbo, so I don't think the power limit (if any) is an issue. Usually I see clocks drop when the CPU hits some thermal threshold (upper 80s for a Surface Pro 3, 99 C for my older HP laptop).

The VID used has absolutely zero relevance to the discussion. Can we agree on that? I don't care if AMD shoves 3 V through the CPU to make it stable at that speed, as long as it lasts. What matters is that the processor is binned to be stable at that frequency, and be stable there for a really long time even under max load. If it's not, it should be RMA-ed. And if Intel/AMD/Nvidia put out a chip that became unstable at boost frequencies after being pushed there for sustained workloads, there'd be a ton of RMAs.

But yes, I get your point - Intel, AMD, and Nvidia do not guarantee turbo/boost clocks will be sustained (so those chips can be used by OEMs in designs unable to cool them if the chip was continuously run at the speed it was binned to be stable at). My personal gripe is just with paying for a chip guaranteed to reach a certain frequency, and then not using it there. I understand you guys are fine with that. I guess we should agree to disagree here.
Your understanding of what's what is wrong, and after two pages or so of constant explaining you still don't want to understand the difference; you keep reiterating the same stuff back without figuring out that you're complaining about the wrong thing. So yeah, we'll agree to disagree, because frankly I don't care if you don't want to properly understand the difference between boost/turbo and thermal throttling, and the fact that the chips are doing exactly what they were designed to do.
#65
alucasa
Hmm, it's quite interesting how someone mixes up thermal throttling and turbo. No offense meant, I just find his view fairly interesting although wrong.
#66
chlamchowder
alucasa: Hmm, it's quite interesting how someone mixes up thermal throttling and turbo. No offense meant, I just find his view fairly interesting although wrong.
The difference is literally what the marketing department says about it. That's why I wanted to use the term "downclocking due to thermal constraints" instead so there's no conflict with marketing materials. Let's take the term "throttling" out of the discussion because it's controversial. The crux of the issue is whether you're happy with the advertised performance baseline, or whether you want to get what the chip was binned for. I prefer the latter because binning is mainly what drives chip cost, and I feel like I'm not getting what I paid for when clocks are reduced for whatever reason. There are people who move even further and want to get the maximum the chip is capable of by overclocking.

Turbo works like this: Intel tests a processor and it passes for running at 3.8 GHz. However, it pulls 56 W when fully loaded at that speed. OEMs say, "Our designs can't cool a 56 W chip. We can deal with 47 W, though." Intel responds, "No problem, we'll put a 2.8 GHz sticker on it because we know it won't draw over 47 W at that speed. We'll still let it hit 3.8 GHz under load, drawing 56 W when it does, so a higher 'boost' clock can be advertised and bursty workloads benefit. But if the workload is sustained, it'll just hit 99 C, reduce clocks to keep itself from going past that, and probably sustain somewhere near 2.8 GHz, provided your cooler deals with 47 W. That's fine, because we never guaranteed 3.8 GHz."

That's what I observed on an HP laptop I owned. When all four cores were in use, it consistently drew 55-57 W according to Intel Power Gadget, until it hit 99 C and clocks started dropping. The chip was an i7-4900MQ.
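That scenario (a cooling budget smaller than the boost-clock power draw) can be sketched as a toy control loop in Python. Every constant and the power model here are illustrative assumptions, not figures from any datasheet or Intel's actual algorithm:

```python
# Toy thermal-throttling loop: the chip starts at its boost clock, heats
# past the limit because it produces more heat than the cooler removes,
# then sheds clock bins until heat output and cooling roughly balance.
# All numbers are illustrative, not from any real datasheet.

def simulate(boost_ghz=3.8, base_ghz=2.8, boost_watts=56.0,
             cooler_watts=47.0, t_limit=99.0, t_ambient=25.0, steps=300):
    freq, temp = boost_ghz, t_ambient
    history = []
    for _ in range(steps):
        # Dynamic power scales roughly with f * V^2, and V scales with f,
        # so model power as cubic in frequency (a crude approximation).
        power = boost_watts * (freq / boost_ghz) ** 3
        temp = max(t_ambient, temp + 0.5 * (power - cooler_watts))
        if temp >= t_limit and freq > base_ghz:
            freq = round(freq - 0.1, 2)   # over the limit: shed a bin
        elif temp < t_limit - 5 and freq < boost_ghz:
            freq = round(freq + 0.1, 2)   # headroom again: step back up
        history.append((freq, round(temp, 1)))
    return history

run = simulate()
freqs = [f for f, _ in run]
print("clock range under sustained load:", min(freqs), "to", max(freqs), "GHz")
```

With the hysteresis in the two branches, the simulated clock cycles between the boost clock and a throttled value above the base clock, much like the sawtooth clock traces you see when logging a thermally limited laptop.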

Are we on the same page now? :)
#68
Caring1
Bugger, I actually liked the look and preferred it to the MSI version, which uses MXM GPUs and doesn't look as neat.