
Moore Threads Unveils MTT S60 & MTT S2000 Graphics Cards with DirectX Support

I will reserve my judgement until I see independent testing, but Chinese companies have a way of overhyping. Remember the supposed GTX 1080-performance GPU that turned out to be much slower?
So I won't hold my breath about half of these claims being true.
 
Hi,
Since they seem to have banned mining in China, I guess that's why they didn't release any mining results :laugh:
 
Oh, no doubt - if possible (in production volumes, pricing, etc.) they'll jump on that opportunity - but I'd be surprised if their government didn't still hold things back a bit. There'd definitely be no sharing of technologies beyond the sale of finished products, that's for sure.
The sale of finished products alone is a sure way to make a lot of money (and new alliances).
 
This news is quite unexpected, though. I guess this is a sign of things to come - a new world order in which the united West will be on the losing and poorer side.
Chinese graphics cards could bring much-needed market competition, and if the West doesn't impose a new iron curtain blocking itself from access to these technologies, then things will be alright.
 
This is interesting even if it's too slow to be relevant to those of us using AMD or Nvidia GPUs. I wonder if we'll ever see any independent benchmarks.


You're right. Hot Hardware mentions that
TPU should update the article to be more complete and in line with reality. Seeing the comments, it seems that people are really deluded by the idea that a company magically created a GPU out of thin air and that it will be sold outside of China.
 
What the hell is this? I'd like something Moore Legit. :wtf:
 
Imagine China creates a GPU with 3080-level performance... AMD and Nvidia would be irrelevant in China.
 
Also, can someone please tell me what "LPGDDR4X" is? Is it a hybrid between LPDDR4X and some form of GDDR?

Low Power GDDR4 "X", where the X is something similar to GDDR6 vs GDDR6X - they just have an extra pin here and there that provides a little bit more bandwidth than normal GDDR4/GDDR6.

I guess production costs were the reason.
 
Low Power GDDR4 "X", where the X is something similar to GDDR6 vs GDDR6X - they just have an extra pin here and there that provides a little bit more bandwidth than normal GDDR4/GDDR6.

I guess production costs were the reason.
You're misunderstanding the question: to the best of my knowledge, there exists no RAM standard called LPGDDR4X. I can't even find any tangible mention of this type of RAM prior to this card. Which raises the question: where is this RAM coming from, who is making it, and who developed the standard? Developing a new memory standard, even one that only modifies an existing one, is a decidedly non-trivial task, as is having memory and memory controllers designed, tested and manufactured. So, is this an in-house thing? Is it done in collaboration with some memory manufacturer, and if so, who? What is the technological basis - GDDR4 (ugh), LPDDR4X (better!), or something else entirely?

Also, "a pin here and there that provides a ltitle bit more bandwidth than normal GDDR4/GDDR6" is such an absurdly simplistic statement that ... well, what's the point? Adding "pins" means adding I/O to both the memory die and the controller, both of which are non-trivial. And grouping GDDR4 and GDDR6 as if they have even remotely comparable performance? Even if you meant LPDDR4X rather than GDDR4, that is still a drastically different technology than GDDR6. GDDR and LPDDR standards also have broadly different design goals - GDDR is bandwidth at all costs, even if efficiency suffers, and latency is crap; LPDDR is low power first, then latency, then bandwidth. These are not compatible technologies, and somehow mixing the two would be anything but trivial.

The simplest explanation is for "LPGDDR4X" to be a high-clocked, latency-be-damned, higher power binning of regular LPDDR4. That would work with established PCB design tools and (likely) memory controllers and memory chips too - you'd just need to bin things and ensure signal integrity and data corruption stays at acceptable levels. But given that no information about this exists, at least that I can find, this is an open question.
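
To put rough numbers on why the memory type matters, here's a quick back-of-the-envelope peak-bandwidth comparison. The per-pin data rates and the 256-bit bus width below are generic, illustrative assumptions on my part, not confirmed specs for the MTT S60/S2000.

[CODE]
#include <cstdio>

// Back-of-the-envelope peak bandwidth: (bus width / 8) * per-pin data rate.
// The figures used here are typical for each memory type, NOT confirmed
// specs of the MTT cards.
static double peak_gb_s(int bus_width_bits, double gbps_per_pin) {
    return bus_width_bits / 8.0 * gbps_per_pin;  // result in GB/s
}

int main() {
    printf("LPDDR4X, 4.266 Gbps/pin, 256-bit bus: ~%.0f GB/s\n", peak_gb_s(256, 4.266));
    printf("GDDR6,   14 Gbps/pin,    256-bit bus: ~%.0f GB/s\n", peak_gb_s(256, 14.0));
    return 0;
}
[/CODE]

Same bus width, but the per-pin data rate alone puts GDDR6 at roughly 3x the bandwidth, which is why "a pin here and there" doesn't capture the difference between these memory families.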
 

It's right here.

Just a lower powered LPDDR4 variant. LPDDR4 is a memory standard designed for mobile devices.

I mean, why should they stick to standards? It's China, not the USA.
 

It's right here.

Just a lower powered LPDDR4 variant. LPDDR4 is a memory standard designed for mobile devices.

I mean, why should they stick to standards? It's China, not the USA.
But that's LPDDR4X, not LPGDDR4X. There is no mention of LPGDDR4X in that link. Have you not been reading carefully?
 
What's so different between LPDDR4 and LPGDDR4?

It's not a fancy new standard; it's an older, existing one, probably to cut costs. Otherwise they could have gone the GDDR5/GDDR6(X) route.
 
What's so different between LPDDR4 and LPGDDR4?

It's not a fancy new standard; it's an older, existing one, probably to cut costs. Otherwise they could have gone the GDDR5/GDDR6(X) route.
.... Sigh. GDDR and LPDDR are completely different standards. They are fundamentally incompatible, with different signalling, requiring different controllers and different DRAM die designs. Yet the name of this implies that this is an "LPGDDR" RAM, not an LPDDR or GDDR. Nothing called LPGDDR4X (or any other form of LPGDDR) seems to have existed in any significant capacity before this announcement. At all. Ever. The answer to the question "what's so different between LPDDR4 and LPGDDR4?" is that one is an existing standard and widely used type of memory, while the other seems to not have existed whatsoever until now, and adopts a strangely mixed naming scheme (mixing LPDDR and GDDR) that has never been used before. Hence my questions of what exactly this RAM is, and what it's based on. Is that so hard to grasp? New, unknown thing is new and unknown, and thus raises questions.

It's of course entirely possible that this is just regular old LPDDR4X and they're adding a G because it's being used on a GPU, but... that's not how standards work. LPDDR4X is LPDDR4X whether it's used with a CPU, GPU, accelerator or time-warping roomba - it's the same thing regardless of the application. A pear does not transform into an apple if you bake it into a pie. But incompetence and PR shenanigans can never be ruled out completely, of course. So that's possible, but until confirmed (or disproven), that can't be assumed either.
 
.... Sigh. GDDR and LPDDR are completely different standards. They are fundamentally incompatible, with different signalling, requiring different controllers and different DRAM die designs. Yet the name of this implies that this is an "LPGDDR" RAM, not an LPDDR or GDDR. Nothing called LPGDDR4X (or any other form of LPGDDR) seems to have existed in any significant capacity before this announcement. At all. Ever. The answer to the question "what's so different between LPDDR4 and LPGDDR4?" is that one is an existing standard and widely used type of memory, while the other seems to not have existed whatsoever until now, and adopts a strangely mixed naming scheme (mixing LPDDR and GDDR) that has never been used before. Hence my questions of what exactly this RAM is, and what it's based on. Is that so hard to grasp? New, unknown thing is new and unknown, and thus raises questions.

It's of course entirely possible that this is just regular old LPDDR4X and they're adding a G because it's being used on a GPU, but... that's not how standards work. LPDDR4X is LPDDR4X whether it's used with a CPU, GPU, accelerator or time-warping roomba - it's the same thing regardless of the application. A pear does not transform into an apple if you bake it into a pie. But incompetence and PR shenanigans can never be ruled out completely, of course. So that's possible, but until confirmed (or disproven), that can't be assumed either.
I'm thinking intentional typo, to mislead maybe.

Oh I just saw this, and I was guessing before. :D

On 14 March 2012, JEDEC hosted a conference to explore how future mobile device requirements will drive upcoming standards like LPDDR4.
 
I'm thinking intentional typo, to mislead maybe.

Oh I just saw this, and I was guessing before. :D
Yeah, that's the "it's LPDDR4X, but on a GPU, so we call it LPGDDR4X" explanation. It's definitely the simplest one, requiring the lowest number of new assumptions (no new tech required, only assuming marketing to be misleading, incompetent, or both), so it passes Occam's razor. Doesn't mean it's the correct answer, but it's definitely the most likely in a vacuum.
 
Indeed. If there's one thing US/EU sanctions are good for, then it's urging Chinese/Russian economies to innovate and become independent from the West. The fact that a never-heard-of Chinese company can create and arrange manufacturing for a 12 nm GPU out of thin air in just 18 months while Intel has been struggling with Arc for years and years only proves that our sanctions are actually backfiring on us.

I can see the meme coming soon:
Intel/Nvidia/AMD: "We're cutting supplies to Russia and China. Let's see them descend back to the Middle Ages."
China/Russia: "Hold my beer."

It's easy to reverse-engineer tech when everything is manufactured a stone's throw away.
 
It's easy to reverse-engineer tech when everything is manufactured a stone's throw away.
I think you're kind of underestimating the complexity in reverse-engineering massive chips with billions and billions of transistors made on cutting-edge manufacturing nodes. It's not like you can tell the function of a transistor or group of transistors by eye, even with a scanning electron microscope, and you can't really probe nanometer-scale lithographic features either, at least not at that scale. Figuring out large-scale structures is likely feasible, not least because those things are discussed at chip design conferences and are often based on publicly funded university research that is published in academic journals, but getting to a level where you can actually make your own version? That's a massive undertaking. It's far more likely that they've hired engineers with existing knowledge of how these things can work and tasked them with designing slightly tweaked versions of this.
 
I think you're kind of underestimating the complexity in reverse-engineering massive chips with billions and billions of transistors made on cutting-edge manufacturing nodes. It's not like you can tell the function of a transistor or group of transistors by eye, even with a scanning electron microscope, and you can't really probe nanometer-scale lithographic features either, at least not at that scale. Figuring out large-scale structures is likely feasible, not least because those things are discussed at chip design conferences and are often based on publicly funded university research that is published in academic journals, but getting to a level where you can actually make your own version? That's a massive undertaking. It's far more likely that they've hired engineers with existing knowledge of how these things can work and tasked them with designing slightly tweaked versions of this.
What about the fact that these chips are mostly made in China and Taiwan? China is still a communist state - what the government says there is holy. If they order their foundries and factories to disclose information on the chips they make, then it's easy to make new designs using the existing ones as templates. We don't even need to know about these transfers of information. Everything can happen in secret.

I'm not saying that this is what's actually going on - I'm just thinking out loud. :ohwell:
 
What about the fact that these chips are mostly made in China and Taiwan? China is still a communist state - what the government says there is holy. If they order their foundries and factories to disclose information on the chips they make, then it's easy to make new designs using the existing ones as templates. We don't even need to know about these transfers of information. Everything can happen in secret.

I'm not saying that this is what's actually going on - I'm just thinking out loud. :ohwell:
It's absolutely possible for them to steal design plans, but those designs still don't necessarily teach you how the part works, what the various parts of the die do, how the interconnects work, etc. And even if you had all of that, taking a design made for one production node and making it work on an entirely different one is also an extremely complex process. It is entirely possible that something like this has happened (though I'd be surprised if we didn't then at least hear some accusations made); I was just saying that the claim I responded to was dramatically underestimating the difficulty of doing this.
 
CUDA support?

I guess that's what you get when you are using Chinese fabs to make your GPU chips... Nvidia.
It's absolutely possible for them to steal design plans, but those designs still don't necessarily teach you how the part works, what the various parts of the die do, how the interconnects work, etc.
If your engineers are the ones that are working in those fabs, it's easy to understand what's going on.
 
CUDA support?

I guess that's what you get when you are using Chinese fabs to make your GPU chips... Nvidia.

If your engineers are the ones that are working in those fabs, it's easy to understand what's going on.
While they're called CUDA cores, they're not that special. CUDA is a software API that could reasonably be hardware-agnostic, but it's proprietary.
Looking at chip designs wouldn't divulge CUDA's secrets.
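
For what it's worth, here's roughly what CUDA looks like from the software side - a minimal vector-add kernel and launch using the standard CUDA runtime API (a generic sketch, nothing specific to these cards):

[CODE]
#include <cstdio>
#include <cuda_runtime.h>

// The kernel itself is just C++ executed in parallel across threads; the
// value of CUDA is the compiler, runtime and library ecosystem around this,
// not some hardware secret you could read off a die shot.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified (managed) memory keeps the example short.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    add<<<(n + 255) / 256, 256>>>(a, b, c, n);  // software-level launch syntax
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
[/CODE]

That API surface could in principle be targeted by other hardware; the catch, as noted above, is that the stack is proprietary.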

Imagination Technologies, however, might be involved :/
 
Why is this chip/card not in the database?
 