
Unigine Announces Valley GPU Benchmark

Did you see that draw distance?

The "pop in" that I'm seeing is with hard shadows and with flowers/ground objects. The actual draw distance of the scenes is huge.
 
The "pop in" that I'm seeing is with hard shadows and with flowers/ground objects. The actual draw distance of the scenes is huge.

Basically everything that's not clouds, a mountain, or a 2D sprite has nasty pop-in.
 
Yes :p

[screenshot: 1ep7at.png]
 
Demo doesn't let me run at my native 4800x900 like Heaven does (resolution maxes out at 4096, so 4800 wide doesn't fit). The multiple-monitor option is giving me trouble. I got around 32 FPS average at 1600x900 full screen on one monitor.

Catalyst 13.2 Beta 3 drivers used on Radeon HD 7950 with stock clocks.
 

Attachment: UnigineValley4096.png
Yeah, my 6990 at 1000 MHz core / 1500 MHz memory doesn't work at all on this bench!

A patch will fix this.
 
Is there a dedicated thread where you can submit your result? I can't seem to find it anywhere.
 
Here is the Q9550 at work...

[screenshot: 775-Valley_zps950de194.jpg]


A small bump to the CPU/GPU clocks -

[screenshot: Q9550-ValleyBasicbump_zpsd062b23f.jpg]


[screenshot: Q9550-ValleyExtremebump_zpsa2e86294.jpg]
 
Instead they churn out idiotic amounts of polygons on that mountain at the other end of the map and leave stuff close to the viewport ugly as hell. Makes no sense, and they all freakin' do this, in all games and all benchmarks.

More importantly, they often use only frustum culling and no occlusion culling at all, so the CPU makes unnecessary draw calls for occluded objects.
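Roughly what that looks like in code - a minimal sketch, not Unigine's actual renderer; `Drawable`, `gpuOcclusionTest`, and `issueDrawCall` are hypothetical stand-ins:

[CODE=cpp]
#include <vector>

struct AABB  { float min[3], max[3]; };
struct Plane { float n[3]; float d; };   // dot(n, p) + d >= 0 counts as "inside"
struct Drawable { AABB bounds; /* mesh, material, ... */ };

// Frustum-only test: rejects what's outside the view volume, but happily
// keeps everything hidden behind a hill or a wall.
bool insideFrustum(const Plane (&frustum)[6], const AABB& box) {
    for (const Plane& p : frustum) {
        // Check the box corner farthest along the plane normal.
        const float x = p.n[0] >= 0 ? box.max[0] : box.min[0];
        const float y = p.n[1] >= 0 ? box.max[1] : box.min[1];
        const float z = p.n[2] >= 0 ? box.max[2] : box.min[2];
        if (p.n[0] * x + p.n[1] * y + p.n[2] * z + p.d < 0)
            return false;                // box is fully outside this plane
    }
    return true;
}

void submitScene(const Plane (&frustum)[6], const std::vector<Drawable>& scene) {
    for (const Drawable& d : scene) {
        if (!insideFrustum(frustum, d.bounds))
            continue;                    // frustum culling - the part engines do
        // The missing step: an occlusion test (hardware occlusion query,
        // hierarchical-Z, or a software depth pre-pass) would skip objects
        // hidden behind the mountain *before* the CPU issues the draw call.
        // if (!gpuOcclusionTest(d.bounds)) continue;  // hypothetical
        // issueDrawCall(d);                           // hypothetical
    }
}
[/CODE]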
Although it is easy to integrate displacement mapping (tessellation) into a LOD system, art production has to be ZBrush/Mudbox style rather than polygon modelling - so if you don't have the original object in an extreme poly-count version, tessellation is not as straightforward.
Pop-ins should not exist by today's standards; I've seen LOD systems with smooth transitions (and I don't mean that pixelated dissolve shader Ubisoft uses).
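For the curious, one way those smooth transitions are done is a plain cross-fade over a distance band instead of a hard swap; a minimal sketch, where `draw` stands in for whatever submission call the engine actually provides:

[CODE=cpp]
#include <algorithm>
#include <functional>

struct LodMesh { /* vertex/index buffers, ... */ };

// Instead of swapping LODs at one hard distance (the visible "pop"),
// blend both levels across a transition band a few meters wide.
// draw(mesh, alpha) submits a mesh with a fade factor; alpha-to-coverage
// (or the dither pattern the post above dislikes) applies it in the shader.
void drawLodSmooth(const LodMesh& hi, const LodMesh& lo,
                   float distance, float swapDist, float band,
                   const std::function<void(const LodMesh&, float)>& draw) {
    // t runs 0 -> 1 across the band instead of flipping in one frame.
    const float t = std::clamp((distance - swapDist) / band, 0.0f, 1.0f);
    if (t <= 0.0f) {
        draw(hi, 1.0f);            // close: high-detail mesh only
    } else if (t >= 1.0f) {
        draw(lo, 1.0f);            // far: low-detail mesh only
    } else {
        draw(hi, 1.0f - t);        // in the band: render both, cross-faded
        draw(lo, t);
    }
}
[/CODE]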
 
Feel free to start one...if you want to keep up with it. :p

Eheh, that's the problem :) I just wanted to know if there was some kind of thread where everyone could post their scores. I'm pretty sure I've seen a couple of them on various forums, but I know they require a lot of work :p
 
One of the nicest benchmarks I've seen in ages :)
 
Here's mine... Beautiful indeed... ;)

 
Here's mine. On Linux ;)
Too bad I cannot properly bench it @ 1920x1080 without triggering what is probably a bug in the driver. *shakes fist*
So I ran it on 1600x900 and 1280x720. "Deal with it, nerds." -Fork Parker

@1280x720


@1600x900


P.S. Yesh. Beautiful. Sux I only noticed this news post "leik, half an hour ago".
 
What's up with the scores from nVidia? Even the 460s are faster than the latest 7800 cards...
 
What's up with the scores from nVidia? Even the 460s are faster than the latest 7800 cards...

Don't understand the question. The dual 7800s in post #24 beat my 460s... all the rest of the 7800 runs are single-card.
 
CPU = 4.4 GHz, Mem = 2004 MHz, GPU(s): 1125/1375 MHz (core/mem)
1920x1080 one 7970:
[screenshot: Capture143.jpg]

1920x1080 two 7970's:
[screenshot: Capture144.jpg]


2560x1600 one 7970:
[screenshot: Capture145.jpg]

2560x1600 two 7970's:
[screenshot: Capture146.jpg]


All four tests report 3x GPU, but this is false.
 
Here's mine. On Linux ;)
[...]
@1280x720


@1600x900

For comparison, I re-ran it with the same settings on Windows in OpenGL and D3D11 modes.
As expected, D3D11 ran noticeably faster than OpenGL - OGL scored ~17% less on Windows and ~15.5% less on Linux compared to D3D11.
*wonders if that's mostly the driver's or the benchmark's fault. Or both.*
*bets on the driver being less optimized for OGL.*

And alas, just like in some previous benchmarks I ran, OpenGL on Linux is faster than on Windows. Although the gap is much smaller with the 313.xx drivers - the Windows driver IS catching up. (The gap was rather horrible before. Don't ask.)
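(If anyone wants to re-check those percentages: they're measured against the D3D11 score, i.e. 100 x (1 - OGL/D3D11). A throwaway sketch with made-up scores, not my actual numbers:)

[CODE=cpp]
#include <cstdio>

int main() {
    // Made-up example scores - substitute your own from the results screen.
    const float d3d11 = 1000.0f;
    const float ogl   = 830.0f;
    const float deficit = (1.0f - ogl / d3d11) * 100.0f;
    std::printf("OpenGL scored %.1f%% less than D3D11\n", deficit); // -> 17.0%
    return 0;
}
[/CODE]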

The results:

@1280x720:
In OGL:

In D3D11:


@1600x900
In OGL:

In D3D11:
Link (thumbnail broken)

P.S. I don't even know why I am actually posting those, knowing that most of You probably !care about this completely.
 
For comparison, I re-ran it with the same settings on Windows in OpenGL and D3D11 modes.
As expected, D3D11 ran noticeably faster than OpenGL - OGL scored ~17% less on Windows and ~15.5% less on Linux compared to D3D11.

Don't you mean DirectX 11? :roll:
 
Please don't bash it so hard, I am sure they will fix it. It's still very, very pretty.
If they don't, go ahead, I will join you with a pitchfork.
 
Don't you mean DirectX 11? :roll:

NO.
Read this:
[...]
/* meanwhile... */
To some of You: please, stop saying "DirectX" when it comes to graphics only. DirectX is an API that supplies a very wide range of services, including, but not limited to: input, sound, networking, graphics, and AV decoding. The part that deals with graphics is Direct3D. Thus, when talking about the graphics portion of DirectX, please say "Direct3D".
[sarcasm]...unless maybe Your GPU can do hardware acceleration on mouse input, for example. Which is part of DirectX, but not part of Direct3D.[/sarcasm]
(yes, I am quoting myself :banghead:)
 
Here are two runs I made for comparison; it looks like CPU clock and thread count make little difference in the final score.

4.5 GHz, HT off.
[screenshot: Capture688.jpg]



4.8 GHz, HT on.
[screenshot: Capture683.jpg]
 
[screenshot: valley2013-02-1721-27-22-09_zps98d86d71.png]


Sry for the low-res pic.

P.S. I don't even know why I am actually posting those, knowing that most of You probably !care about this completely.

Give it a couple of years, everyone will ;)

I see that the minimum FPS is slightly better on the Linux side while the max goes the other way around. More love for Piledriver on the free side?
 