
NVIDIA Names Fermi Architecture Successors

Sonabitch. I'm a Kepler.

[loop]
Get a perverted feeling of glee when nVidia fails.
But don't want to see my namesake tarnished.
[/loop]
 
Well, at least they have announced something. I can't wait to see how this pans out, honestly.
 
I propose to make a graph that shows the exponential probability of me going camping/fishing tomorrow VS the probability of going to a strip club and burning a few hundred.


Either way I win. And Nvidia is still making up slides to keep optimism up about their hardware: they have no CPUs, chipsets are going the way of the dodo, GPUs are moving into CPUs...



I think there will always be a place for dedicated GPU hardware, but unfortunately the need keeps decreasing as the majority of users settle on integrated offerings as the best solution, and those keep getting better.


There is no good solution to this problem. Intel would have the shit sued out of them for buying Nvidia, and AMD couldn't swing it even if they would consider it. There are no other companies besides VIA, and that option has been pursued before, but Intel threatened to pull the x86 license if it happened.
 
Look at the Intel fanbois cry.


Does it hurt that AMD provided IP tech to your precious leader so they didn't suck so bad? Intel talked about it, AMD did it.

Haters gonna hate.


Nfags can leave this thread, I hope Nvidia doesn't drop their prices and you continue to get assraped by them.

Don't really trust the stuff you say Steevo. You're covered in AMD semen.:laugh:

Let's hope Kepler is good. As of now all we can do is speculate; it's only a graph and nothing more.
 
So I'm NOT going camping? Dammit!!!!!!

I'm not bashing Nvidia, I use their cards too. I have built more Intel machines than AMD ones.


I am merely saying that both major players in CPUs are putting GPUs on the same die, and Nvidia does not have that option. With CPUs also absorbing the northbridge and memory controller, Nvidia will not have a place for chipsets much longer.


So if they are unable to sell chipsets, onboard graphics, and notebook graphics, that being the majority of the market, they are more than likely going to become a specialized HPC company with a spinoff of GPU offerings for the masses.


No, I'm not covered in AMD semen, I took a bath.
 

Well, the only reason they aren't integrating their chips is that no one bought them out like ATI. Why waste money, if you're Intel, buying up GPUs from Nvidia to integrate into your chip when you can make them on your own for cheap? Intel knows the market for those buying that solution (mostly mainstream non-gamers).

I see what you mean though: if everything is being integrated into a single chip, then what would be the need for a company like Nvidia? But who knows, maybe in the future we will see a deal between Intel and Nvidia to help Nvidia get into that market. Even so, there is still a good amount of money to be made in the desktop market, so Nvidia has that to hang onto while Fusion and whatnot are going on, and they can prepare for it themselves.
 
I think they make way $$$ more by getting CUDA into HPC clusters. But that raises the question: how long before such tech is the basis for the next PC?


I hope they succeed, the idea of a whole PC with the CPU being the low end processor for menial tasks is exciting!
 
I don't think they want the CPU to exist in the PC at all.

Yes... I mean a PC that runs on only a GPU.
 
Even though discrete graphics' percentage of market share has decreased a bit, actual shipments grow every year.

There's always going to be a place for discrete GPUs. Putting a performance CPU or GPU each on their own die and making them yield is already hard enough; a CPU+GPU chip will never be able to compete with discrete solutions. Even the "almighty" Fusion and Sandy Bridge are crap: SB can barely beat an HD 5450, and Zacate is not much faster (it would need to be 20 times faster to pose a danger to upcoming GPUs, and it's not 20x faster). CPU+GPU will always be an entry/mainstream thing; anyone wanting to actually play something will keep using discrete GPUs. Only casual gamers will use integrated, and that already happens nowadays, so nothing will change in reality. Casual games will be created with slightly better graphics, and/or the freaks playing WoW 2 or its spiritual successor will play it at 30+ fps instead of 15 fps, that's all.

Contrary to what people think, budget/mainstream cards are NOT the most used graphics cards, at least amongst Steam users:

http://store.steampowered.com/hwsurvey/videocard/

PopCap casual games sell shittons more copies on Steam than any other games, so I would say there must be a fair amount of casual gamers on Steam, yet the most used cards are high-end.
 
At last they're responding on the performance/watt ratio. They'd better introduce midrange cards first.
 


You are right, almost 7% of players choose Intel branded crap graphics.

Most have a 1280x1024 resolution,

XP 32-bit with a dual core and 2GB of RAM.


Sounds like the typical Wal-Mart machine to me. So if Wal-Mart started selling Intel and AMD systems with a built-in GPU that could perform well, I imagine that number would rise.
 
Trust me, they will pull some surprise when it comes out. Such a killer product would feature a 768-bit bus (or 1024-bit) plus 7 GT/s GDDR5 RAM and 1344 CUDA cores / 224 TMUs / 96 ROPs (or 128 ROPs). Believe me, they will do it if they want to outpace Cayman or even the incoming Southern Islands series.
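For reference, here's what those speculated bus widths would mean for bandwidth. A minimal sketch, assuming GDDR5 at the 7 GT/s the post guesses at (none of these are confirmed specs; GF100's actual 384-bit bus is included only for comparison):

[CODE]
# Peak memory bandwidth = (bus width in bytes) x (data rate in GT/s).
# The 768/1024-bit widths and 7 GT/s GDDR5 are the post's speculation, not real specs.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    return (bus_width_bits / 8) * data_rate_gt_s

for bus in (384, 768, 1024):  # 384-bit is GF100's actual bus width, for reference
    print(f"{bus:>4}-bit @ 7 GT/s  ->  {bandwidth_gb_s(bus, 7.0):6.1f} GB/s")
[/CODE]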
 

But the problem is that Intel graphics suck; that's why AMD bought ATI. They didn't want to risk putting a crap GPU in the Fusion CPU.
 

Intel graphics suck for gaming, yes, but that's not their purpose; Intel's graphics are integrated graphics meant for mainstream systems.

Plus, I don't understand what you're saying in that second part. Of course AMD didn't choose to use Intel's graphics, they're competitors lol.

So you're saying that AMD back in '06 said to themselves, "hey, we need a GPU maker for Fusion, Intel sucks at GPUs, so let's just buy up ATI!!" :confused:
 

Actually, AMD did make some integrated GPUs before ATI was bought. However, they ended up even worse than Intel's GMA line and S3's Chrome, and the drivers were even crappier than VIA/S3's. That terrible result forced them to cede the IGP market to Nvidia, VIA, and SiS, until they finally bought ATI and restarted the business.

I still have that AMD Socket 462 board (Barton or Thoroughbred, I don't remember) with the AMD 762 IGP, but it ended up unable to handle even a game like Quake 2 despite having 256MB of shared RAM... it would be interesting to see where they'd be if they hadn't acquired ATI...
 

Given that all current discrete cards with mainstream or better graphical power require their own board with their own RAM, power circuitry/supply, etc., it's going to be very hard for a CPU+GPU chip to compete against separate CPU/GPU configs.

System RAM will have to get a heck of a lot faster, or gain more channels, to be able to supply a GFX chip with a serious amount of bandwidth. I'm talking at least 60-80 GB/s and at least 1-2 GB able to be set aside for the GPU. Not to mention high-end CPUs tend to have 100-140W TDPs, and high-end GFX exceeds that; I think it would be better to have the chips separated so they can be cooled separately.
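To put rough numbers on that bandwidth gap, here's a minimal sketch comparing dual-channel system memory against a typical 256-bit GDDR5 card. The DDR3-1333 and 4 GT/s GDDR5 figures are illustrative examples of parts of the era, not a claim about any specific product:

[CODE]
# Peak bandwidth = channels x bus width (bytes) x transfer rate.
# Memory speeds below are illustrative examples, not specific products.

def bandwidth_gb_s(channels: int, width_bits: int, mt_per_s: float) -> float:
    return channels * (width_bits / 8) * mt_per_s / 1000.0

ddr3_dual  = bandwidth_gb_s(channels=2, width_bits=64,  mt_per_s=1333)  # ~21 GB/s, shared with the CPU
gddr5_card = bandwidth_gb_s(channels=1, width_bits=256, mt_per_s=4000)  # ~128 GB/s, dedicated to the GPU

print(f"Dual-channel DDR3-1333 : {ddr3_dual:6.1f} GB/s")
print(f"256-bit GDDR5 @ 4 GT/s : {gddr5_card:6.1f} GB/s")
[/CODE]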

The CPU+GPU on-die combos, IMO, will always be entry-level offerings, at least in terms of the GPU; they'd have a hard time breaking into the mainstream segment of the era, let alone high end.

Can't wait for Nv and AMD's next architecture, and high end cards :)
 
There is nothing wrong with the curve. See attached raster image.

As you can see, once adjusted to follow a consistent scale, this chart is actually rather conservative. They may even surpass those deadlines, regardless of what ATI offers.
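For anyone who can't see the attachment, the point is just that a roughly constant doubling rate looks like a straight line once the vertical axis is logarithmic, rather than an implausible explosion. A small sketch with made-up placeholder values (not the figures from NVIDIA's slide):

[CODE]
# Illustration only: the same hypothetical "doubles every couple of years" data
# on a linear axis vs. a logarithmic axis. Values are placeholders, not NVIDIA's.
import matplotlib.pyplot as plt

years = [2008, 2010, 2012, 2014]
dp_gflops_per_watt = [1, 3, 6, 16]  # hypothetical, roughly constant growth rate

fig, (lin, log) = plt.subplots(1, 2, figsize=(8, 3))
lin.plot(years, dp_gflops_per_watt, marker="o")
lin.set_title("Linear scale: looks explosive")
log.semilogy(years, dp_gflops_per_watt, marker="o")
log.set_title("Log scale: nearly a straight line")
for ax in (lin, log):
    ax.set_xlabel("Year")
    ax.set_ylabel("DP GFLOPS/W (hypothetical)")
plt.tight_layout()
plt.show()
[/CODE]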

Thanks for that, I was thinking the same. If anything, Nvidia may be underestimating the demand from consumers going forward. Both Nvidia and AMD will need to produce significantly more performance if 3D is more widely adopted.

I also wouldn't be surprised to see another round of resolution increases for monitors. My phone is 800x400 and produces a beautiful picture. To achieve the same quality from my PC monitor at 22", it would need to be something like 4000x2500.
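Rough math behind that estimate. The 4-inch phone diagonal below is an assumption (it isn't given in the post); only the 800x400 resolution and the 22" monitor size come from above:

[CODE]
# Pixel density (PPI) = diagonal resolution in pixels / diagonal size in inches.
# The 4" phone diagonal and the 16:10 monitor dimensions are assumptions.
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return hypot(width_px, height_px) / diagonal_in

phone_ppi = ppi(800, 400, 4.0)           # ~224 PPI
monitor_w_in, monitor_h_in = 18.7, 11.7  # a 22" 16:10 panel is roughly this size
need_w = round(monitor_w_in * phone_ppi)
need_h = round(monitor_h_in * phone_ppi)
print(f"Phone: ~{phone_ppi:.0f} PPI")
print(f'22" monitor at the same density: ~{need_w}x{need_h}')  # ~4200x2600
[/CODE]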

Any significant advances in either of these two areas would render Nvidia's graph as recommended minimums, rather than exaggerations.
 

They are talking about COMPUTING performance, boys... double-precision computing, to be precise, and I'm 100% sure that the computing performance of Kepler will be 4x what GF100 is. Hell, GF104's single-precision shader performance per watt is already like 1.5x-2x that of GF100... (double precision is artificially crippled on GF104, so it wouldn't be fair to compare DP).

Shader processors, as long as they are included inside the same multiprocessor/cluster, are cheap and don't take much die space, e.g. how Ati cards have many more of them, or how GF104 has 75% of the SPs in 60% of the die area of GF100. That's because GF104 has 3 SIMDs per shader multiprocessor. The next Nvidia GPU will have 4 SIMDs, I'm 100% sure of that, and that chip is probably NOT going to be Kepler yet, but a 40nm "refresh" of GF100; hmm, 90% sure of this. Later, my guess is that Kepler will have 6 SIMD units. You don't go superscalar unless you intend to make use of it, and with 2 schedulers, 6 SIMDs looks like the sweet spot.

At the same time (or instead), Nvidia can widen the SIMD units from 16 lanes to 24, as happened from G80 to GT200. There are so many ways that Nvidia can increase COMPUTING performance...

On the other hand, ROPs, memory, etc. take up a lot of space and power, so don't expect massive increases there.

And if you are wondering: no, I don't think gaming performance/watt will be 3-4x better, but 1.5-2x seems easy to attain considering how bad GF100 is in that department.

EDIT: With Maxwell, IMO they are going to adopt full DP support on the shaders, that is a 1:1 SP:DP ratio, and hence why there's going to be such a big second performance jump.
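To make the SIMD-count argument concrete, here's a minimal peak-throughput sketch. The SM counts and clocks are illustrative round numbers, and the 4-SIMD and 6-SIMD configurations are the post's guesses, not announced specs:

[CODE]
# Peak SP throughput = SMs x SIMDs per SM x lanes per SIMD x 2 FLOPs (FMA) x shader clock.
# SM counts, clocks and the 4/6-SIMD layouts are illustrative guesses, not real specs.

def peak_sp_gflops(sms: int, simds_per_sm: int, lanes: int, shader_ghz: float) -> float:
    return sms * simds_per_sm * lanes * 2 * shader_ghz

configs = {
    "GF100-like: 16 SM x 2 SIMD @ 1.40 GHz": (16, 2, 16, 1.40),
    "GF104-like:  8 SM x 3 SIMD @ 1.35 GHz": (8, 3, 16, 1.35),
    "Guess:      16 SM x 4 SIMD @ 1.40 GHz": (16, 4, 16, 1.40),
    "Guess:      16 SM x 6 SIMD @ 1.40 GHz": (16, 6, 16, 1.40),
}
for name, cfg in configs.items():
    # Peak DP would be some fraction of this (1:2 if uncrippled, less on GeForce parts).
    print(f"{name}  ->  ~{peak_sp_gflops(*cfg):5.0f} SP GFLOPS")
[/CODE]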
 
I think the only parts they need to improve are the SIMD count, the shader architecture, and the number of TMUs. To me, GF100's 60 TMUs are a real bottleneck. However, you can't simply add more TMUs, because a modern GPU's TMUs are bound to the SIMD clusters. Fermi binds too many shaders under each SIMD unit, which limits the number of TMUs, and NV's shaders are hard to increase/tweak because of their size. They have to change NV's goddamn big, fat shaders as well, because it's hard to increase their number without a SIMD tweak, which affects the TMU/ROP/bus setup. As with AMD's 5D shader, they have been using that shader design since G80, and it's time for a change. They need a major revision of their shaders if they want to stay in the game. Kepler will most likely be an improved, on-steroids version of GF104 with a new shader architecture, I hope...
 

Shaders in Fermi are definitely NOT equal to the ones in G80-to-GT200. Fermi's are FMADD... and that's only the most remarkable change; there are probably more under the hood, irrelevant to customers.

Also, GF104 has twice as many TMUs per cluster as GF100; in fact, the full GF104 has the same amount as GF100, and GF104 does NOT perform better than GF100 per GFlop/s. Fermi scales linearly with its GFlops. You can do the math: calculate the GFlops of every Fermi card and then compare your results with the average results from one of Wizzard's reviews: perfect match! (Almost the same happened from G80 to GT200, btw, just not as precisely.)

In that regard the Fermi architecture is perfect, since they know that if they manage to put twice as many SPs in there, they'll get twice the performance. There was a perf-per-GFlop hit from GT200 to Fermi, probably due to DP and FMADD, but the architecture is brilliant IMO and will offer a lot in the future. There are so many ways to parallelize and expand the next chips... more SPs per SIMD, more SIMDs per SM, more SMs per GPC (cluster), more clusters... and triangle and tessellation capabilities will always increase with those additions, with no bottlenecks.
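The math referred to above, for a few cards, as a sketch assuming SP GFLOPS = CUDA cores x 2 FLOPs per clock (FMA) x shader clock. The core counts and clocks are the reference-card specs as I recall them; double-check against the spec sheets before relying on the exact numbers:

[CODE]
# SP GFLOPS = CUDA cores x 2 (FMA) x shader clock in GHz.
# Core counts/clocks are reference-card values from memory; verify before relying on them.

cards = {
    "GTX 480 (GF100)":     (480, 1.401),
    "GTX 470 (GF100)":     (448, 1.215),
    "GTX 460 1GB (GF104)": (336, 1.350),
}
for name, (cores, shader_ghz) in cards.items():
    print(f"{name:22s} ~{cores * 2 * shader_ghz:6.0f} GFLOPS")

# If performance really scales linearly with GFLOPS, the GFLOPS ratio of any two
# cards should roughly match their average-FPS ratio in reviews.
[/CODE]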
 