
AMD Releases ATI Stream SDK v2.0 Beta 4, Fully OpenCL 1.0 Compliant, Reveals Hemlock?

Why is there no support for the HD 2000/3000 series?
 
Sweet, HD 5900. Wonder if this is the counter to Nvidia's G300?
 
Erm... there are three 5700s in that list.

So what is the third one? 5730 or 5790? 5790 would make sense and would slot nicely into the performance gap between the 5770 and 5850.

There's a GDDR3 HD5750 AFAIK. It could be that.

Finally they got something maybe USEFUL before Nvidia.

http://www.techpowerup.com/index.php?104826

Nvidia's tools aren't betas, I think, and they are much friendlier to programmers; you can develop under Visual Studio, etc. When it comes to GPGPU, Nvidia is significantly ahead, and not in vain: they have been pushing it since the 8800 launch.
 
The more important point is that there is finally an open standard for GPGPU; CUDA and Stream are both FTL.
Hardware-exclusive APIs are stopping technology from advancing.

nVidia is slightly ahead in the GPGPU field, while ATi is ahead in hardware tessellation. After so many years we can finally get away from the "will my card support OOXX?" mess.
 

EDIT: OpenCL, or any open standard for that matter, is good for the market but not necessarily for the advancement of technology. I'm not saying we don't need open standards, but I don't agree at all that proprietary tech must die for the good of all. Both approaches can coexist, and developers are smart enough to know which is better for them and their consumers.

EDIT2: Sorry for all the edits, but I want to make this point clear. IMO proprietary tech must die eventually, but it has to die by choice of the consumers (in this case developers), not the companies behind those technologies. For example, I know for sure that GPGPU is here, and is as strong as it is right now, thanks to CUDA and only CUDA. If Nvidia had stopped pushing CUDA when OpenCL was first mentioned, first, OpenCL would never have been developed that fast, and second, the adoption of GPGPU wouldn't be as pronounced as it is, and the future and viability of such a technology would still be under a question mark. In the last 3 years a lot of developers have learned how to program GPUs thanks to CUDA. CUDA and OpenCL (and Stream and DX Compute) are very similar in how you have to program for them, so CUDA did 90% of the journey. In no way has that held technology back; on the contrary, it has moved things further ahead than they would have gone if we had all waited for an open standard to be developed.
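To make the "90% of the journey" point concrete, here is roughly what the same trivial kernel looks like in both (a from-memory sketch, names made up; mostly the qualifiers and the thread-index call change):

[code]
// CUDA C version of a trivial vector add
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

// The same kernel in OpenCL C
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n)
{
    int i = get_global_id(0);  // global work-item index
    if (i < n)
        c[i] = a[i] + b[i];
}
[/code]

The host-side setup differs more, but the kernel model (a grid of threads/work-items, each grabbing its own index) is the same idea in all of them.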

CUDA and Stream are going nowhere, not in the short term. Both companies use wrappers to run OpenCL, so performance is going to be slightly lower. Small developers (game developers included) will probably use OpenCL (or DX Compute) for the most part so that their code can run on any hardware, but big players (think ORNL or Cray) will use CUDA/Stream, at least until GPUs' ISAs mirror OpenCL in their silicon.

OpenCL/Compute will be no different from Direct3D in that the bulk of games are going to be coded with them, but developers even slightly concerned about performance and optimization will always write some critical stuff in CUDA/Stream, just like they write some stuff in HLSL.

Finally, let's put things into perspective: right now Nvidia is far ahead of AMD when it comes to GPGPU, and it will probably stay like that for almost all of 2010. You simply can't compare the adoption rates of the two solutions, the tools available for each, or the functionality/strength of those tools. It's just night and day ATM.
 

Developers may look at the incentive of a broader customer base with open standards. If they build their software for proprietary standards, they know that a sizable customer base is gone.
 
I think you're all forgetting about Eyefinity. I bet Eyefinity models, same for the low-end ones, have their own SKUs.
 

It's called HD 5870 Eyefinity6 Edition:

(image: bta1762bn.jpg)


A similar naming scheme will be used if GPUs other than the HD 5870 indeed get Eyefinity6 Editions.
 
Developers may look at the incentive of a broader customer base with open standards. If they build their software for proprietary standards, they know that a sizable customer base is gone.

Yes, but that is in the long term. I'm talking about times when things are changing, as they have been in the past months/years. Just 3 months ago, development in OpenCL was not possible, so what was better for developers and ultimately the consumer:

1 - Using a proprietary technology that can make their product better than the competition and that at least half their customers will be able to use.

or

2 - Not using anything and losing the opportunity to be better than the competition.

Even today, with OpenCL (almost) out, CUDA (and to a lesser extent Stream) has a much better tool set than OpenCL, so using CUDA/Stream can confer a critical advantage, both in the power of the features created with them and in the time required to develop them, which can mean you release your product 3-6 months earlier than you would with OpenCL. Once all the APIs have equally useful and powerful tools, OpenCL is the option that makes the most sense, but until then it's much, much better to use the proprietary tech than to use none or to delay the launch of said technology by 6 months. At least, that's my opinion as an enthusiast.

In any case, in times of change, option 1 is better for the developer and especially for the enthusiast:

1 - You get the technology if you want to use it.
2 - Developers have already developed that technology, so when they create the open-standard-based iteration they will be better at it.
3 - The validity of the technology is demonstrated.
 
Just 3 months ago, development in OpenCL was not possible
, since neither GPU vendor had signed/stable OpenCL drivers three months ago. I agree OpenCL is embryonic even today, but it is better for both consumers and developers, since both AMD and NVIDIA have met on common ground, making it an industry standard. The part that makes proprietary standards theoretically better is that their development benefits from extensive investment from the company behind them. But that's as far as it goes.
 

Read my posts again; I'm not saying OpenCL isn't the way to go. I'm just saying that abandoning CUDA/Stream is definitely not the way to go. Both approaches can coexist, and it is the market that is going to decide which model stays and for how long.

For comparison, I'm not going to say that the move from Glide to OpenGL/DirectX wasn't a good move in the long term, but I do know very well that while it lasted, Glide was superior to both, and I enjoyed the superior eye candy and performance in those games where I could. As a consumer you had the option (where you had the option) to use each, and Glide was vastly superior for a long time. It was the hardware (the GeForce 256, to be precise) that made Glide obsolete, because the hardware was fast enough to make that combo the best option despite the APIs being inferior at the time. As I see it, until that happens again I don't see a reason for CUDA/Stream to be abandoned.

As an enthusiast, for me, IMO:

ability to have access to a new feature/technology >>>>>>>>>> (greater than) ability to run that feature on any hardware, but at a later time
 
What a lot of people forget is that Nvidia is entering an entirely new market. This is a bit of a gamble.

The HPC territory may be new for both graphics vendors, but not for AMD. A lot of the world's supercomputers have Opterons inside; AMD has already established itself as a known and trusted brand. Currently ATI = AMD, so they'll have a much easier time entering the market; after all, they already know all the clients.
 
I can't wait for Stream apps to take off.
 
AMD released the fourth beta of the ATI Stream SDK version 2.0, which provides the first complete OpenCL development platform. The release is certified fully compliant with OpenCL 1.0 by the Khronos Group. A wide range of AMD GPUs, as well as any x86 multi-core CPU supporting the SSE3 instruction set, are supported. For more information on this release, and to download it, visit this page.

An interesting discovery by TechConnect Magazine shows that these OpenCL drivers contain identifiers for a yet-to-be-announced "Radeon HD 5900 Series", with the device IDs 689C and 689D, both marked under "Evergreen" like other members of the Evergreen family, such as the Radeon HD 5700 and Radeon HD 5800 series. The most plausible explanation is that "Radeon HD 5900 Series" is the name of the graphics cards based on Hemlock, the design that pairs two Cypress GPUs on one board. The driver also gives away device IDs, if not product names, of the upcoming entry-level Redwood and Cedar GPUs.

[url]http://www.techpowerup.com/img/09-10-14/23a_thm.jpg[/url]

Source: TechConnect Magazine

These 5900 entries are not taken from the ATi OpenCL beta driver v2.0 beta 4; they are nowhere to be found inside the driver package, not in any of the INF files, nor named in any of the MSI files.

I wonder where they got this from.
 
Well, dunno what I'm missing, but as far as the OpenCL 1.0 overview goes, all I see is the CPU being the bottleneck for the GPU from now on.
 

There are many potential bottlenecks when doing GPU computing, since all memory management must be explicit (page 12: "You must move data from host -> global -> local AND BACK"): http://www.khronos.org/developers/library/overview/opencl_overview.pdf
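For those who haven't seen it, this is roughly what that explicit movement looks like on the host side with the OpenCL C API (a minimal sketch; assumes the context, queue, and kernel were created earlier, and all error checking is omitted):

[code]
#include <CL/cl.h>

/* Sketch: copy data to the device, run a kernel, copy results back. */
void run_vec_add(cl_context ctx, cl_command_queue queue, cl_kernel kernel,
                 const float *h_in, float *h_out, size_t n)
{
    cl_int err;
    cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), NULL, &err);
    cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, &err);

    /* host -> global: across the PCIe bus */
    clEnqueueWriteBuffer(queue, d_in, CL_TRUE, 0, n * sizeof(float), h_in, 0, NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_in);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_out);
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    /* ...AND BACK: global -> host, across the bus again */
    clEnqueueReadBuffer(queue, d_out, CL_TRUE, 0, n * sizeof(float), h_out, 0, NULL, NULL);

    clReleaseMemObject(d_in);
    clReleaseMemObject(d_out);
}
[/code]

Every one of those enqueue calls crosses the PCIe bus, which is exactly where "lots of data, little math" algorithms lose their advantage.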

If one's algorithm has to manipulate a lot of data but performs only a small amount of math on that data, GPU versions of the algorithm will be bottlenecked by the memory movement. But the GPU can still be faster. For example, see these two bioinformatics papers on implementing Smith-Waterman sequence alignment searches using CUDA/GPU and using SSE/SIMD:

CUDA compatible GPU cards as efficient hardware accelerators for Smith-Waterman sequence alignment. Svetlin A. Manavski and Giorgio Valle. BMC Bioinformatics 2008, 9(Suppl 2):S10.

Striped Smith–Waterman speeds database searches six times over other SIMD implementations. Michael Farrar. Bioinformatics 2007, 23(2):156-161.

Short version (figure 4 from Manavski/Valle): for short sequences, the CUDA version on a single 8800 GTX runs faster than the SSE version on a 2.4 GHz Intel Q6600. For medium to long sequences, the SSE version is faster than a single 8800 GTX, but dual 8800 GTXs can run 1.6x faster. In the best case, the dual GTXs run 3x faster than SSE.

This is why Intel (AVX) and AMD (Bulldozer, SSE5) are also expanding the SSE vector units inside the CPU in the next generation. Not all software/algorithms are going to see a 30x speedup on the GPU compared to SSE/vectors, so Nvidia is going to have to fight in the HPC area (it is not a slam-dunk win). There are definitely some algorithms (video encoding, game physics/AI) which will benefit from the GPU.
edit: other algorithms which work well on GPUs: molecular simulations, fluid dynamics, geophysics, nuclear simulations.
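For comparison, the CPU/SIMD side of that fight looks like this with SSE intrinsics (a minimal sketch; assumes n is a multiple of 4 and the arrays are 16-byte aligned):

[code]
#include <xmmintrin.h>  /* SSE intrinsics */

/* Add two float arrays four elements per instruction. */
void vec_add_sse(const float *a, const float *b, float *c, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);           /* load 4 floats */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(c + i, _mm_add_ps(va, vb));  /* add and store 4 at once */
    }
}
[/code]

AVX widens those registers to 8 floats, and the data never has to leave the CPU's caches, which is why the CPU can win whenever the math-per-byte ratio is low.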

edit2: this is why OpenCL is so good. It is for heterogeneous parallel computing (CPU SIMD, GPU, DSP, ...). I know this is different in the consumer market, but in research, algorithm development and adoption is a slow process, so having a standard (like OpenCL) means the code will work on GPUs today, on Bulldozer CPUs tomorrow, and on some next-gen CPU with massive SIMD vector units in the future. The hard part is switching the programming model/mindset from single-threaded thinking to multi-threaded thinking to massively-threaded thinking.
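A rough sketch of that heterogeneous part (untested, against the OpenCL 1.0 C API; structure is mine): the same program just picks whatever device is present, and everything downstream is unchanged.

[code]
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);

    /* prefer a GPU; fall back to the CPU's SIMD units if none is found */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

    /* from here on, context/queue/kernel creation is identical either way */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    printf("context created: %p\n", (void *)ctx);
    return 0;
}
[/code]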
 
OK, but I have an HD 4870 X2 + a Phenom 9850 @ 3.1 GHz OC, and I can't get OpenCL support at all.
I downloaded the newest 10.1 CCC and I have ati-stream-sdk-v2.0-xp64, but I clicked Win7; maybe I should check into that, because it was a 70 MB download.
 