
NVIDIA Dramatically Simplifies Parallel Programming With CUDA 6

From a non-trolling, mature, civil point of view, I have to agree with you. NVidia's moves for the past month have been nothing but desperation. The GTX 780 Ti is proof of that; G-Sync is another example. After the R9 290X toppled their former over-priced king, the GTX Titan, NVidia looks like they went into panic mode. All the consumers QQing about competition not being around to drive prices down smacked NVidia on its butt, because they didn't take what has been happening at AMD seriously. They scrapped the Titan Ultra and Lite. They're currently researching stacked RAM, aka Volta, something AMD has already done in the past and refined... I'm not feeling confident about AMD Mantle, even though NVidia users can utilize it.

I have to agree that it seems like a copy of hUMA, but hUMA is for APUs. An example would be something like Intel's Haswell and Haswell-E SoCs. To say NVidia is copying it, NVidia would have to produce APUs like AMD does for that statement to be more valid. Intel hasn't copied hUMA, and they produce APUs of their own; they just don't call them APUs like AMD does, though for the most part they are the same thing... NVidia GPUs utilizing system memory besides dedicated GPU RAM doesn't seem innovative, and that lack of innovation is something NVidia has been synonymous with for a while. Now, if the rumors are true and NVidia eventually ventures into the server market besides continuing its push into the tablet/cellphone market, they will eventually have a copy-cat of hUMA, and NVidia will be in competition with Intel again, besides AMD, in that market. This is another desperate NVidia move to produce more revenue...

Right now, G-Sync modules have Tegra 4 chips on them, mainly to help NVidia liquidate leftover inventory, since SHIELD and the tablets containing those chips aren't selling like hot-cakes. I suspect their Q4 revenue reports will start to show signs of decline... I strongly feel the GTX 780 Ti isn't selling strongly either: 7% more CUDA cores, marginally improved core frequencies, and full D3D11.2/DirectCompute support for a whopping $699.99 per unit. Especially when the GTX Titan is still at $1,000.00, and, per third-party reviews, the GTX 780 Ti in SLI has a tendency to drop frames in certain titles. Brain-dead NVidia fanboys won't admit it, but the kick to their privates after purchasing a GTX Titan or 780--I bet it hurts. Pride + stupidity = epic fail...

AMD right now reigns supreme in multi-GPU solutions and cost efficiency. CrossFireX through the PCIe bus seems to have fixed AMD's issues with multi-GPU computing. Since AMD won the console wars against NVidia--sucks that NVidia doesn't produce APUs of their own--AMD in a way has the ability to call the shots on upcoming console games for the next 10 years. Star Citizen, a highly anticipated MMO space-shooter, will be optimized for AMD GPUs, with AMD Mantle supporting it... I suspect EQN will be optimized for AMD as well, besides the idea that they will be using Havok. Elder Scrolls Online might be another title optimized for AMD GPUs, if the rumors I heard about it are true...

More layoffs coming to AMD

Just a quick reply about the games:

EverQuest Next will use multiple PhysX SDK and APEX features, as well as support GPU physics acceleration.

(image attachment: citizen.jpg)


They will admit nothing!!! :banghead:

For them, all hail :respect: NVIDIA

:roll:

Maybe you want to explain this:

AMD fanboy burns AMD GPU, pretends it's a 780 Ti
 
Many people don't realize how deeply CUDA has become embedded in professional software. I suggest you take a look at the CUDA developer zone before crapping into threads, with the consequent humiliation of showing how clueless you are.

CUDA can be used and shown; AMD's implementations are just on paper, so I don't get how people can draw conclusions lol.

CUDA works fine, I agree, but your assertion that AMD's are only on paper is just ridiculous. Have you not heard of OpenCL? I've folded on AMD GPUs for years, and they are also fully OpenGL, OpenCL, and DirectCompute compatible.
 

You are talking like OpenCL is something AMD brought to the table.
 

How so? I said AMD is compatible, exactly.

And implied it was usable to the same ends as CUDA, but went nowhere near what you're saying. OpenCL, hello.
 
It wouldn't surprise me if tomorrow Mantle became the inspiration for Microsoft to make DirectX 12, just as tessellation was for DirectX 11 :D

I know my English is shit :pimp:
 

Still waiting for you to provide the NVidia PowerPoint slide on EQN. Last I heard, EQN is using Havok. No point in having PhysX and APEX: it's proprietary, in some games it creates more problems than it solves (i.e. Planetside 2), and it drops performance.

CynicalCyanide, a founding VIP member of RSI, said the following in a thread on the RSI forums titled "GUIDE TO BUYING PC GAMING HARDWARE," under GPU NOTES.

"#3 GPU Compute: The AMD cards slaughter Nvidia Kepler cards for most GPU computing. This probably doesn’t matter to you, but if you use your GPU for OpenCL, Bitcoin mining etc, AMD is the clear winner here." Cyanide.

Source:
CynicalCyanide, "Guide to Buying PC Gaming Hardware," Roberts Space Industries forums, Jan. 27, 2013 (accessed Nov. 11, 2013), https://forums.robertsspaceindustries.com/discussion/15249/guide-to-buying-pc-gaming-hardware.

Point is this: he doesn't "directly" state that Star Citizen is optimized for AMD graphics-card users, but he indicates that AMD (paraphrasing his words) performs better than NVidia Kepler in the computing department. This is a discussion of the best, ideal hardware with respect to the upcoming game. Assuming this is the universal consensus amongst members of RSI, I think it's safe to say they will probably lean more towards AMD, but will take the needs of NVidia users into consideration when the MMO goes live. PhysX will do absolutely nothing for the game, so NVidia users will only benefit from NVidia products, with respect to the game, in a GPU-computing scenario. It won't be anything more, and G-Sync may help with perceived smoothness, but I'm not putting a lot of faith in that. On the other hand, Cyanide does state that NVidia would be more ideal for Surround; I won't disagree with that. I can see Star Citizen being a big-name MMO if they push massive flight battles with massive battleships and carriers, though multiple players would need to control them. It's taking MMO extremes to the next level.
 

(screenshot attachment: Capture.png)
 
OpenCL > CUDA.

This would make sense if CUDA weren't portable to OpenCL, but then again, it is. If anything, CUDA = OpenCL + OpenCL "extensions", just like how OpenGL extensions work.

The only problem is that AMD cards work better with float4, which is not the common case in many GPGPU applications.
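
To make that concrete, here's a minimal sketch (hypothetical kernel, written as CUDA since both dialects are near-identical) of the explicitly vectorized float4 style that wide-vector hardware rewards:

    // Hypothetical example: explicit float4 vectorization. Scalar code leaves
    // lanes idle on hardware tuned for 4-wide vector operations.
    __global__ void scale4(const float4 *in, float4 *out, float k, int n4)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n4) {
            float4 v = in[i];               // one 128-bit load instead of four 32-bit loads
            v.x *= k; v.y *= k; v.z *= k; v.w *= k;
            out[i] = v;                     // one 128-bit store
        }
    }

The catch, as said above: plenty of real workloads (irregular access, scalar-heavy math) don't vectorize this neatly, so hardware that needs float4 to shine loses that edge.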

This. People don't know how similar CUDA and OpenCL are, and how portable. The biggest mindfuck is the difference in terminology:

C for CUDA terminology    OpenCL terminology
Thread                    Work-item
Thread block              Work-group
Global memory             Global memory
Constant memory           Constant memory
Shared memory             Local memory
Local memory              Private memory

Porting your CUDA applications to OpenCL™ is often simply a matter of finding the equivalent syntax for various keywords and built-in functions in your kernel. You also need to convert your runtime API calls to the equivalent calls in OpenCL™.

That's it; an almost automatic converter could be written.
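
To show just how mechanical the port usually is, here's a hypothetical SAXPY kernel in CUDA with the OpenCL equivalent of each piece noted in comments (a sketch of the kernel side only; the host-API calls differ more):

    // CUDA version                                  // OpenCL equivalent
    __global__ void saxpy(int n, float a,            // __kernel void saxpy(int n, float a,
                          const float *x, float *y)  //     __global const float *x, __global float *y)
    {
        int i = blockIdx.x * blockDim.x              // int i = get_global_id(0);
              + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];                  // body is identical
    }
    // Other one-for-one swaps: __shared__ -> __local, __syncthreads() -> barrier(CLK_LOCAL_MEM_FENCE).
    // Launch: saxpy<<<blocks, threads>>>(n, a, x, y);
    // OpenCL: clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, &local, 0, NULL, NULL);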
 
Another reason not to hate on CUDA frankly. Presumably nVidia actually offer ongoing support into the bargain too.
 
The only reason to hate CUDA is that it's proprietary; other than that, it's brilliant.

Way easier than OpenCL, imho.
 
The reason it's brilliant is because it is proprietary; companies are nothing without their IP, although on this forum it seems to be accepted that they should share everything.

Proprietary gets used like some dirty buzzword people like to sling around as it suits them; people need to get real and understand how big business operates.

Of course, if any genius here has some great money-making ideas, feel free to share them with me first. I'm all for piggy-backing and taking advantage of others' hard work.
 
Dude, proprietary hurts everyone, because in this particular situation it helps a company establish a monopoly.

I could NOT care less about nvidia's business; what I care about is a healthy market with good competition.

So effin' yes, proprietary is a con in this case.
 

Luckily for you AMD have put their weight behind OpenCL, which probably explains why it lags behind.

I bet you can't sleep at night with all these evil monopolies out there.

Again, any great money making ideas, let me know.
 

Chances are, if you didn't come up with the idea to begin with, you wouldn't be able to make a profit from it even spoon-fed, and how you're grasping the core of this conversation makes my point.

Don't worry about my sleeping schedule; stay classy instead.
 
Catch-22 situation. People hate proprietary, but proprietary tech moves faster from the development to the implementation stage. It has better funding, a more cohesive ecosystem (applications/hardware/utilities/marketing), and an organized cadence between those facets.
Open source, by its very definition, has a protracted review and ratification timeline, as with anything "designed by committee".

OpenCL would be a prime example. How long between revisions... a year and a half between 1.0 and 1.1, and another year and a half between 1.1 and 1.2? How long to evolve from a concept to widespread uptake... five years plus?
Without CUDA, where would we be with GPGPU? GPU-based parallelized computing may be chic in 2013, but that wouldn't have helped Nvidia stay solvent via the prosumer/workstation/HPC markets back when the G80 was introduced... and without the revenue from the pro markets being ploughed back into R&D, it isn't much of a stretch to think that, had Nvidia not created a market, AMD might be the only show in town for all intents and purposes.
We would likely be closer to a monopoly situation (with AMD's discrete graphics) had Nvidia not poured cash into CUDA in 2004.
 
Hmm, from what I've read, it still copies from system to GPU memory and vice versa, yes? It just removes the manual copying and automates it?
 

Yes, the memory buffer is automatically copied over PCIe; this only simplifies the coding.
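
A minimal before/after sketch (hypothetical kernel name, error checking omitted) of what that buys you:

    // CUDA 5 and earlier: explicit staging in both directions.
    float *h = (float *)malloc(bytes);
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
    kernel<<<blocks, threads>>>(d);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);

    // CUDA 6 Unified Memory: one managed allocation, one pointer valid on both sides.
    // The runtime still moves the data over PCIe; you just stop writing the copies.
    float *m;
    cudaMallocManaged(&m, bytes);
    kernel<<<blocks, threads>>>(m);
    cudaDeviceSynchronize();   // GPU must finish before the CPU touches m again

Same traffic on the bus, less boilerplate in the code, which is exactly the "dramatically simplifies" in the headline.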
 

You're putting your faith and trust in Wikipedia. LOL Epic!

I wouldn't put a lot of faith in that. The rumors I've heard are that EQN will be on the DX11.0 API, and that it won't be client-based like PS2. PS2 is currently DX9.0 and client-based; Ultra mode almost pushes 10 GB of system RAM. In addition, it's debatable to say NVidia hasn't put a lot of attention on it, since it is "one" of their optimized games, yet it looks and runs better on AMD cards. And using any SweetFX injector can get you banned from PS2...

Let's say it's true, and it probably is true (ForgeLight engine: D3D9.0, PhysX, NVidia-optimized, "probably" client-based)... they do use NVidia for EverQuest Next. That still doesn't speak highly of ForgeLight or EQN. Why? Well, for instance, take a look at PS2's OMFG patch. OMFG ≠ "Oh My F***en Gawd" patch; OMFG = "Oh Make Faster Game" patch. A lot of NVidia users are still having issues, and the PS2 devs have disabled PhysX particle effects so they can work out the issues "further." Here's my point: if EQN follows the same or a similar trend, are you going to argue to me, in the first year of EQN, that it won't be proportionate in some way to PS2's derps and fails? Answer no and you're trolling, but if you answer yes, some points might be valid; other points could be up for debate. That's the likely outcome. I doubt AMD users will have as many headaches as NVidia users if that's the case. It's a plausible scenario.

I'll admit error on my end about EQN. There's no shame in that. I think it's unnecessary to debate any further whether you were wrong about Star Citizen. Still, like a dumb Republican who can't answer a question directly, I'm still waiting for that NVidia PowerPoint slide from you that says EQN will be optimized on their products.
 
Nvidia is far from desperate, I'd say, just from a bare glance at them; they're seemingly not caring even in the slightest for the most part. Saying CUDA is useless due to not being open-source I find ridiculous, considering how many companies and compute setups feature it, and AMD have barely even entered the compute scene as of yet, honestly.

Also, I don't get why people are saying they're desperately cheapening themselves after the Titan; it remains the same price and always will, due to its unique position as an entry-level compute card from Nvidia. The 780 near enough matched the 290X, and the Ti was hardly desperation; I'd think it more a matter of getting rid of silicon that didn't make the cut for a K40 and turning a small profit in their proportionally minor gaming branch... the near-instantaneous launch shows it was in the works well before the 290X release too... but sigh, not sure why I bother :P
 