Lexar is back with another RAM series in its DDR5 lineup. With primary timings that benefit both Intel and AMD, Lexar THOR memory targets gamers with its DDR5-6000 CL32 XMP & EXPO profiles. Follow along as we take this memory for a test drive!
@3valatzy Nothing is directly comparable. I lock the CPU frequency, but the drivers and Windows versions are different. We are using the same benchmark scene, but I'm not sure about the graphical settings. So at max settings it's semi-comparable because you're GPU-bound, but as I pointed out, these aren't the same system and game config and shouldn't be treated as such.
Below is a better representation of the Ryzen 5800X vs. Ryzen 7950X, since both sets of numbers came from my testing. The difference now is newer drivers and a different OS. Also, Cyberpunk got a major FPS boost going from launch to patch 1.5. Now 2.1 is out, and that brings even more FPS.
@ir_cow
Since you mentioned High as a preset, I assume that as soon as heavier graphics settings and/or higher resolutions come into play, the difference shrinks to the point of being basically margin of error? Or would the 1% / 0.1% lows still reflect something meaningful?
On Intel, the margin is 1-3% pretty much regardless of the speed (4800+) if you go high enough on the settings / resolution. AMD, however, really likes a 1:1 ratio. So if you switch to 2:1 for 6600+, it is worse in the 1% lows than just running 4800 JEDEC, and sometimes in average FPS too (game dependent).
Intel: [results]
AMD: [results]
Pit "loose" 6000 CL42 against super-tweaked 6200, still at 1:1, and the margin is small at higher resolutions where you are "GPU-bound".
@ir_cow
I see. Would it be possible to include a game or two that are (potentially) more memory-sensitive in the benchmarks? AAA titles are mostly GPU-bound, that's well known, but I am curious whether something like a grand strategy title, such as HoI4 or CK3, might be more sensitive to memory. Late-game Civ or Old World, if we go 4X, as another option. I understand that's a lot of work, but it might be somewhat interesting. Another option is Overwatch, since I remember the original game was weirdly memory-sensitive. Not sure if OW2 displays similar behavior though.
Yes, thanks for the detailed review. Not bad RAM, and it looks better than most of its rivals. But at this point, as Phil mentioned, the G.Skill Flare X is the much better contender: the performance difference isn't big, but the footprint is massively smaller. I'm not a RAM guy, but Hynix A-die seems more stable.
Sorry for a bit of a rant here, and I might be wrong... but I wonder why no DRAM module maker puts any thermal pads on the PMIC. It seems to be the hottest component on the PCB, and I've seen some info around the web suggesting that adding one might help with RAM OC and stability.
One question comes to mind: is it a lazy habit from past PMIC-less RAM generations, or just cost saving? I don't see the point of such a huge RGB-holder heat spreader if it doesn't serve the main purpose of cooling all the components. In this case it becomes a heat trap, as the entire top is closed with no hot-air exhaust.
I'm sorry for derailing the thread, but does anybody know whether there is any significant difference in SKU and IC quality between the Trident Z5 Neo (F5-6000J3040G32GX2-TZ5N) and Flare X5 (F5-6000J3040G32GX2-FX5)? Is there more serious binning for the Trident Z line, or is the heat spreader the only difference? I'm leaning toward the latter, but I'm afraid the top-binned chips are yet again reserved for rubbish, ugly RGB-driven products with unreasonably "tall as the wall" heatsinks. Thank you for your input.
Some DRAM vendors do put thermal pads on the PMIC. It does cost money to add a pad, and if you are at, say, 1.3 V or lower, it probably will never matter over the lifespan of the memory. Add in the fact that almost all of them offer lifetime warranties right now, and it becomes a pointless argument. For DDR1-DDR4, the power was stepped down on the motherboard itself, and those buck converters never had thermal pads either.
When DDR5 launched, one vendor rep told me it's company suicide not to have a thermal pad, because that is what his R&D told him. That company just so happened to have thermal pads on its PMICs (get my drift?). I talked to a few other vendors (unofficially, of course), and they all say it isn't a concern at lower voltages (aka XMP profiles). Those vendors didn't have thermal pads on some of their product lines.
Looking into the PMIC myself, you can find the datasheets floating around the web. The one I found has a maximum junction temperature of 155°C and a normal range of 0-85°C, and the bucks are about the same (it's hard to find part numbers on those). Another thing to consider is that the memory will error out well before you reach that temperature threshold. I personally like to see everyone put a pad on, because it will only prolong the part's life, but realistically, unless you're pushing high voltages daily, it matters not.
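For a sense of scale, here is an illustrative estimate of how little heat the PMIC actually has to shed at profile voltages. The current draw and buck efficiency below are assumptions picked for the example, not measured or datasheet figures:

Code:
# Illustrative PMIC heat estimate; current and efficiency are assumptions.
vdd = 1.35         # rail voltage at a typical XMP/EXPO profile (V)
current = 4.5      # assumed total DRAM current under heavy load (A)
efficiency = 0.90  # assumed buck converter efficiency

p_out = vdd * current                  # ~6.1 W delivered to the DRAM ICs
p_loss = p_out * (1 / efficiency - 1)  # ~0.7 W dissipated in the PMIC/bucks
print(f"PMIC loss: about {p_loss:.1f} W")

Under these assumptions, the PMIC sheds well under a watt, which lines up with a 155°C junction limit never coming into play unless you run high voltages daily.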
Historically, Trident Z / Neo is binned better than Flare / Ripjaws. But you can buy the exact same bin and primary timings in the Flare / Ripjaws lines; they just don't have RGB elements.
I watched a video of Hearts of Iron IV from 7 years ago, and it runs just fine. Same with Crusader Kings III (3 years old). These strategy games look to be much more CPU-limited than anything else. Since I don't play these games, I have no idea what an appropriate benchmark scenario would be. It would also need to be repeatable so the benchmark is the same for every test run.
As for Overwatch or any online-only game, unfortunately that isn't possible outside of a single article. Every time an update dropped, everything would have to be retested and the previous data would be invalidated. Also, how do you measure game performance when every match is unique? You could get some rough numbers, but nothing would be directly comparable.
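For anyone curious how the averages and 1% / 0.1% lows quoted in this thread are typically derived from a capture, here is a minimal sketch. The file name and one-column CSV layout are hypothetical (real capture tools like PresentMon log many more columns), and note that some tools define the lows as a percentile frame time rather than the mean of the slowest frames:

Code:
# Minimal sketch: average FPS and 1% / 0.1% lows from a frame-time log (ms).
import csv

with open("frametimes.csv") as f:               # hypothetical capture file
    frame_ms = sorted(float(row[0]) for row in csv.reader(f))

avg_fps = 1000 * len(frame_ms) / sum(frame_ms)  # overall average FPS

def low_fps(percent: float) -> float:
    """Mean FPS of the slowest `percent` of frames (one common definition)."""
    n = max(1, int(len(frame_ms) * percent / 100))
    worst = frame_ms[-n:]                       # largest frame times
    return 1000 * len(worst) / sum(worst)

print(f"avg {avg_fps:.1f} | 1% low {low_fps(1):.1f} | 0.1% low {low_fps(0.1):.1f}")

It also shows why unique online matches aren't comparable: the numbers only mean something when every run samples the same repeatable frame-time distribution.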
When these products are listed in stores, sellers do not specify this important detail, so people have to google it. You make it easy for them by putting it in the Specifications table.
Also, we can see the IC manufacturer and the form factor in these photos, but we still have them in the spec table.
The voltage information regarding the Lexar THOR kit in this review is not correct. It is 1.35 V, not 1.3 V.
I asked Lexar about it via email. This is their answer:
To clarify, the correct voltage for this specific memory kit is 1.35V, as specified on our official product website and in the product documentation.
The 1.3V voltage you encountered in early reviews or on some comparison platforms might be due to different testing setups or variations in the configurations used during those tests. This can sometimes result in slightly different voltage values being reported, even though the officially supported voltage is 1.35V.