
Unlimited Detail Technology

Status
Not open for further replies.
And if anyone actually paid attention to the Heaven bench, tessellation can actually provide that right now. It's just that, thanks to consoles and the limited number of high-end GPUs in the PC segment, we all get to wait. The fact is, tessellation can be applied to entire scenes, and with proper methods it would eliminate the level-of-detail changes we notice in games now. But as with anything in the tech world, it takes time to advance, and right now this is way too far out there to be viable any time soon.

The simple fact is we don't need tessellation applied the way Nvidia or AMD are pushing it.

Every time tessellation is applied, geometry increases 4x: 1 million polygons becomes 4 million, 4 million becomes 16 million, and so on. Imagine scaling those tessellation levels by distance from the character in a game. Better yet, things like buildings don't really need tessellation; sure, bricks might look nice, but if you're up against the wall and can see the change, dynamic tessellation can fix that. It all comes down to how the tech is applied. The detail you want to see is already available; sadly, like most of us here, you have to wait for the rest of the world to catch up. This point cloud data is grasping at straws when it's not really needed.
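The 4x-per-level growth and the distance-scaled levels described above can be sketched in a few lines. This is only a toy illustration; the 10-unit distance band per level is an invented number, not from any real engine:

```python
def polys_after_tessellation(base_polys: int, levels: int) -> int:
    """Each tessellation level splits every triangle into 4,
    so the polygon count grows 4x per level."""
    return base_polys * 4 ** levels

def tessellation_level_for_distance(distance: float, max_level: int = 3) -> int:
    """Pick a tessellation level from distance to the camera, mip-map
    style: full detail up close, none far away. The 10-unit band per
    level is an arbitrary choice for illustration."""
    level = max_level - int(distance // 10.0)
    return max(0, min(max_level, level))

# 1 million polygons becomes 4 million, then 16 million, ...
print(polys_after_tessellation(1_000_000, 1))  # 4000000
print(polys_after_tessellation(1_000_000, 2))  # 16000000
print(tessellation_level_for_distance(2.0))    # 3 (close: full detail)
print(tessellation_level_for_distance(35.0))   # 0 (far: no tessellation)
```

The point of scaling the level by distance is that the 4x blow-up only ever happens for geometry near the camera, where you can actually see it.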

For example, most game texture files work with mipmaps to conserve space at greater distances.

The same applies to normal maps, bump maps, and color maps. Tessellation can be done the same way; it just hasn't been.
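As a rough sketch of how a mipmap chain works (each level halves both dimensions down to 1x1), the whole chain costs only about a third more memory than the base texture alone, which is why it's such a cheap way to get distance-appropriate detail:

```python
def mip_chain(width: int, height: int):
    """Build the list of mip level sizes: each level halves both
    dimensions (clamped at 1) until reaching 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))
    return levels

def mip_memory_ratio(width: int, height: int) -> float:
    """Total texels across the whole chain vs. the base level alone."""
    total = sum(w * h for w, h in mip_chain(width, height))
    return total / (width * height)

print(mip_chain(8, 8))               # [(8, 8), (4, 4), (2, 2), (1, 1)]
print(mip_memory_ratio(1024, 1024))  # ~1.333: full chain is only ~33% extra
```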

A good example is below. It could work so that when you talk to a person in Oblivion and their face is close up, you get detail level 3; a few feet away, you get level 2; and as you get further away it drops down to the next level, and so on. We don't really need point cloud data to get unlimited-detail tech. It's already at your fingertips; we're limited in other ways than how many polygons a GPU can render.


Don't get me wrong, I'm all for new tech, just not stuff that's not viable for the purpose you're trying to apply it to.
http://www.xbitlabs.com/articles/video/display/hardware-tesselation_3.html

An example below, from the link I posted above: that change in poly count has only a 1 fps penalty on a GTX 480. The big issue is they're still using normal maps instead of a displacement map.
Normal map = faked; it gives the illusion that something is more than it is (bricks in a wall, wrinkles in skin, etc.).
Displacement map = real; it actually displaces the geometry, and the more polygons there are, the better the displacement.
An example of displacement used in games: Uncharted 2 uses animated displacement maps to make believable wrinkles in Drake's forehead as he talks or reacts, and that's on old 7800-series hardware. :roll:
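The faked-versus-real distinction above can be shown with a toy 1D example (the height values here are made up): a normal map leaves the geometry untouched, while a displacement map actually moves the vertices, so only the latter changes the silhouette:

```python
# Toy contrast between a normal map and a displacement map on a flat
# strip of vertices. The 1D "height map" values are invented.

heights = [0.0, 0.2, 0.5, 0.2, 0.0]             # sampled height map
vertices = [(float(x), 0.0) for x in range(5)]  # flat strip, y = 0

# Normal mapping: geometry is untouched; only the shading normal is
# perturbed, so the silhouette stays flat (the "faked" detail).
normal_mapped_vertices = vertices

# Displacement mapping: each vertex really moves along its normal
# (straight up here), so the silhouette changes (the "real" detail).
displaced_vertices = [(x, y + h) for (x, y), h in zip(vertices, heights)]

print(normal_mapped_vertices)  # still flat: y = 0 everywhere
print(displaced_vertices)      # bumped: y follows the height map
```

This also shows why displacement wants dense geometry: with only five vertices, the strip can only approximate the height map coarsely, which is the "more polygons = better displacement" point above.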

As I said, what you want is already available; you're just stuck waiting for the 95% of the rest of the world that isn't up to speed yet or doesn't have hardware capable of it.
AvP_Tess_Off.jpg

AvP_Tess_On.jpg
 
Yes, it does indeed take time to advance; thus we (me and those who agree with me) want AMD (and/or other significant players in the graphics market) to check the viability of this, accelerate development, and facilitate standardisation if it is in fact viable.
I'm by no means a pro at this, but as far as I can see, the difference between what you're talking about and UD is the level of system load versus detail provided.

Isn't all the detail in the universe enabled by atoms? Then logically, wouldn't it follow that a point cloud system would make the closest virtual simulation of reality? And if a way is found, as is claimed, to process only visible points to fill a given number of pixels and thus concentrate processing power where it's needed, wouldn't that be the most efficient manner of rendering the graphics?

Guys, let's be serious here; this tech is very attention-catching. Let's push through this petition and know once and for all if it's a revolution or a dud. Don't let your own personal doubts or misunderstanding of it colour your willingness to have it proven or disproved. Nobody can agree to funding it without a proper demo, true, but that's not what we want here; we simply want it checked out by industry leaders, that's all.

By the way crazy, no offense intended, but please could you use a bit more punctuation in your posts? I'm finding it difficult to understand you without it.
 
lol, everyone finds me hard to understand; I only use punctuation in reviews :roll:

I realize that, yes, in theory it is the closest to reality, but since it doesn't change anything we do in the process leading up to using UDT, nothing changes.

Models still need to be created and animated, and texture files still have to be painted; in this situation it makes no difference.

For example, only so much can be displayed via a pixel on a screen... hold on, I've got an image somewhere showing what I mean...
Low Res
texture test2.jpg

male texture test 2.jpg


High Res
testrun sculpt1.jpg

testrun sculpt1 back.jpg


Basically, the difference between those models is 6 million polygons on the high res versus 10k or less on the low res.
By subdividing one time I can eliminate the jaggy edges, or "nickeling" as it's called in 3D. What that means is your point cloud data might equal a perfect rendition of reality, but the human eye can't make out every individual scratch or variation anyway. The fact is, even if you could make the 3D world perfect in terms of how it appears, you wouldn't need Unlimited Detail to do so. Another great example is Gran Turismo 5: look at the cars versus their real-life counterparts. Besides better lighting and reflections, which could easily be done on today's GPUs rather than the outdated 7800 GTX, you're already at the level of detail you're talking about. The human eye itself would hardly be able to see the difference.


Gran Turismo 5
Bern-MarketStreet_2.jpg

Bern-MarketStreet_1.jpg


Real life
1972FordMustangMach1FastbackSide.jpg

3793550682_e9485a6939.jpg
 
I think you might be going off at a tangent here. The focus of this technology is not so much point cloud data, as much as it is the detail to load ratio. The point cloud data is something that works hand in hand with it to produce the level of detail, but is not the essence of what is being focused on.
What the tech is promising is to hugely increase graphics detail over conventional graphics currently, whilst having minimal impact in terms of load over the current methods used. Think of it as AMD's MLAA, it's kinda like something for nothing.
 
My point is, if you look at the images above, it all comes down to optimizations.

The PS3, if optimizations are equal, is roughly nothing more than a Core 2 Duo at 2.4 GHz with a 7800 GTX 256 MB, yet it gets that close; in hardware terms that's almost what's achievable on an AMD Zacate or Intel i3, in a sub-20 W package. It's not really a tangent, more a fact: even if this tech comes out, it will need massive GPUs to handle the highly parallel computations, something a CPU just isn't good at. And even if it were manageable on a CPU, what do you then run the artificial intelligence on, and what handles all the background tasks needed to make everything function as it should? Don't get me wrong, it is a viable tech, but only in offline rendering and in case-by-case situations.

To be blunt, making stuff prettier doesn't make the AI smarter or able to counter you. In most games the AI just gets bonuses that, if it were person versus person, would be considered cheats. New rendering modes and better graphics are one thing, but to be honest I don't think graphics are really lacking; what's lacking is immersion. Games now play themselves, a la Black Ops, where the player does nothing, because the AI doesn't really think; it just goes "this situation happened, let's respond with action B." In racing games the AI can do what a person cannot, etc. We have many other areas that need improvement, not just visual fidelity. It has already been figured that the GPU power needed to run holograms should be viable within 10 years, not for consumers, no, but the power at our fingertips will be there by that time. Thus a tech like this just doesn't really hold water when looking at that scenario and/or goal. Everything is a full package and has to be balanced. Graphics have reached the point where 5-10% better visuals require 50% more rendering power, but everything else has stayed the same.
 

But therein lies your misconception. You believe this will increase load, but the claim is that it will hugely reduce it whilst increasing detail. Having it all work on the GPU is why you would want AMD there to work alongside the developers of UD: to optimise it for the GPUs and leave the CPUs free to compute AI, etc. And if the load on the GPU is reduced (compared to current methods) and the graphics are boosted to an indisputable level, maybe the remaining GPU processing power can be used for other tasks that would otherwise be deprioritised because of how much of the GPU has to be devoted to rendering. Thus exactly what you're asking for would happen: areas that are being overlooked for the sake of trying to balance load and detail would finally get the attention and working room they need.

The goal is to reduce load and increase output, i.e., optimise. That being the case, it's universally applicable, especially to real-time rendering.
 
We would need it to reduce load at extreme resolutions; we're talking 4320p. At those resolutions, sure, but when will we move to that? Most of the world is still standard-def at 720x480, with some at 1280x720; very few sources today are actually 1920x1080, let alone anything higher. As I said, GPU-wise we should have the ability to produce something like the holodeck in 10-15 years. This tech doesn't really get us there.

But then there's the sticky physics issue: can a GPU render with this method at ultra-high resolutions, with close-to-real-life physics interactions with objects, and actual water simulation? Probably not. It's an interesting concept, but I just don't see it happening. We are already at a stage where our eyes can hardly see a difference in quality; once we hit ray tracing there won't be much left needed to get close to reality, except mass storage at faster speeds, which is already in the works.

Holographic storage is already usable in the non-consumer space at 10 terabytes per square inch. This tech needed developer support 3-4 years ago to have made an impact; it might only be my opinion, but this is too little, too late. It's got a niche in 3D as a final step in the pipeline, nothing more.

But again, point cloud data still needs a mesh of extremely high resolution to offer that UDT, so you still need an artist to make a billion-polygon model, texture it, animate it, etc., so it can be rendered with UDT; that's just not really possible. Not to mention you would need a game engine built from scratch on a non-existent software platform with enough industry support to work. For example, DirectX won't run this, and OpenGL won't either in a real-time situation, so what do you base it on? There's a lot more here than meets the eye in terms of hurdles to overcome, something I highly doubt will be possible in a time frame that actually matters.

This tech is, to me, a lot like the game Project Offset that Intel bought: an interesting concept with a lot of potential that will never see the light of day.
 
I don't know if your 6970s never suffer lag, but my 5850 regularly loses sync with my monitor (<60 fps), which is at 1920x1200, and if there's the potential for something to come along that could let me zoom in enough to see the eye colour of one of my units (in an RTS), all the while never letting framerates drop, I would want that.
Never mind that; if I could get even 10x or whatever extra amount of detail you say is enough, all the while having the load on my system/GPU reduced from, say, 60% to 10%, thus reducing my power use, heat and noise, I'd jump at that too.
Or what about 3D? Sure, my card currently manages to maintain vsync in most games, but what about on a 3D-capable monitor at 120 Hz or 240 Hz? If a tech would allow my card to easily maintain vsync with that and look better, all the while putting a lower load on it than I previously had at 60 fps, I don't think that's something I would even think twice about.

That is what UD is promising and possibly more!
 
That is what UD is promising and possibly more!

No, it's not.

UD is promising one small thing under specific circumstances, without giving away what those circumstances really are. The rest is just pure speculation.
 
This tech doesn't really get us there.

It would if it's real, man. You could look at a book with tiny writing, too small to read in normal circumstances (or texture-limited, these days).

But with this it could be held close to the face and read normally : ]

All sorts of stuff like that.

Also, everything you could see would be rendered, so draw distance would be FUCKING EPIC :laugh:
 
UD is promising one small thing under specific circumstances, without giving away what those circumstances really are; the rest is just pure speculation.

Not so; by proxy it is giving rise to such possibilities and possibly more. The current load from rendering imposes a lot of limitations, and I think we all know that.
 
Not so; by proxy it is giving rise to such possibilities and possibly more. The current load from rendering imposes a lot of limitations, and I think we all know that.

But we don't know jack shit about this.

Sure, I last saw the demo of this ages ago, but if memory serves it only showed the same things repeated over and over again (from different angles).

How do we know there aren't limitations in this that are even worse than current methods? We can see the same item from 50 angles, but can it show 50 items as well as current methods can? How do we know it's actually superior to what we have now?
 
But we don't know jack shit about this. How do we know it's actually superior to what we have now?


I've seen animations using the tech : ]

Albeit shitty ones, but there are more previews if you hunt about.
 
How do we know there aren't limitations in this that are even worse than current methods? How do we know it's actually superior to what we have now?

By petitioning AMD and/or others to look into it and work with UD to uncover the possibilities and identify the limitations. :toast:
Sign our petition. :)

EDIT: here's an animation preview:
http://www.youtube.com/watch?v=cF8A4bsfKH8
 
Well, apparently we only need to wait 12-16 months to find out if this is real.

Also, according to their last press release they already have investors (so they're not asking for more money), so at least the snake-oil idea is gone, as they're not trying to con anyone when they're asking for nothing.
 
Also, according to their last press release they already have investors (so they're not asking for more money), so at least the snake-oil idea is gone, as they're not trying to con anyone when they're asking for nothing.

That's what I have been repeating over and over, but nobody seems to listen.
 
I must be the only person in the world still on 56k. FFS.
 
Being in its infancy, naturally there will be limitations. Someone has to put forth the time and money to give this a chance. Complaining that the baby you just pooped out can't walk, talk, and feed itself is pointless.
 
So, if I get this correctly, this tech basically skips the 3D step and goes straight to giving you a 2D image representing the 3D objects, based on angle and position, showing only what you can actually see from that angle.
If this works like a 'search engine', when does the indexing of the search results take place? During development/compilation, so that the end user only has to read the point maps?

I'd like to see a demo with:
- Not 100x the same object, but 100 different objects. I reckon an object-oriented approach is being used, which, in the case of replication, drastically decreases the load for multiple instances of the same object.
- Textures. I wonder how colours and textures are applied in such an approach.

If these two cases are viable, I'd support this tech.
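Nobody outside the company has published how UD actually works, so the following is only a toy sketch of the 'search engine' reading described above: an offline "indexing" pass keeps the nearest point per screen pixel, and the per-frame pass is then one lookup per pixel, regardless of how many points are in the cloud. All names, coordinates, and colors here are invented for illustration:

```python
WIDTH, HEIGHT = 4, 3

# A made-up point cloud: (x_pixel, y_pixel, depth, color).
points = [
    (0, 0, 5.0, "red"),
    (0, 0, 2.0, "green"),   # nearer, so it wins pixel (0, 0)
    (3, 2, 1.0, "blue"),
]

def build_index(points):
    """Offline 'indexing' pass: keep only the nearest point per pixel."""
    index = {}
    for x, y, depth, color in points:
        if (x, y) not in index or depth < index[(x, y)][0]:
            index[(x, y)] = (depth, color)
    return index

def render(index):
    """Per-frame pass: one lookup per pixel, so cost scales with the
    number of pixels, not the size of the point cloud."""
    return [[index.get((x, y), (None, "background"))[1]
             for x in range(WIDTH)] for y in range(HEIGHT)]

frame = render(build_index(points))
print(frame[0][0])  # green (the nearer of the two points at that pixel)
print(frame[2][3])  # blue
```

In a real system the "index" would presumably be a hierarchical structure (something octree-like) queried per pixel rather than a flat table, but the load-scales-with-pixels property is the same claim either way.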
 
I very much prefer tessellation because it's a technology that is here and now and is proven to work. Game studios should just learn to apply it properly and stop worrying about backwards compatibility; I'm pretty sure there's enough DX11 hardware to go around.
 
People are calling them out as frauds, because they're claiming to handle an infinite amount of detail/data. As we all know, nothing in this universe can do that, so the claim is bollocks.

If they'd simply billed it as a new, hyper-efficient way of dealing with a huge volume of data giving say, a 100-fold improvement in rendering speed, then I'd buy it and look forward to the official tech demos and description of the technology.

But they didn't. :slap:
 
@Thrackan: You may have just about hit the nail on the head with that analysis.
I think the idea of it being a search-engine-like technology is limited to it finding what is supposed to show up as actual pixels on your screen and presenting them, and that the replication was just down to the limited artistic abilities of the designers rather than a necessity. Of course I can't be sure, though; that was just my understanding of it.

@Everyone else opposing:
I can't for the life of me understand the internal struggle one would have with petitioning for something to be checked for viability when it has so much potential and no economic consequence. I've heard quite a few counter-arguments now, but have thus far not heard any (accurate) reasoning that validates the skepticism or says it is a risky venture not worth petitioning for. It seems the trend is to be stubborn for the sake of it. Let's go through some of the reasons:

- will take long to become relevant = all the more reason to see if it's viable sooner = petition
- methods that produce high detail are available = current methods are load-intensive = petition
- financial risk for AMD = UD doesn't need sponsorship, just recognition, and if AMD works hand-in-hand, it can be optimised for the GPU, AMD architecture, etc., thus keeping AMD relevant (unlike if they finish development independently and drop the bomb on everyone, leaving bankrupt casualties) = petition
- limitations of tech unknown = all the more reason to see if it's viable sooner = petition
- want to see a demo first = AMD working with UD would accelerate development = petition
- it's up to industry bigshots = the point of a petition is to show customer interest/trends (which they take polls and spend money to discover normally) = petition
- undecided = if its development is completed, one can make up his/her mind = petition
- no proof of it not being a scam = all the more reason for AMD to check and write it off if it is = petition
- too good to be true = why not let AMD find out if that's truly the case? = petition

Why the inclination to fight it so hard when it'll benefit you in the end if it works? :confused:
If this becomes huge one day, wouldn't you like to know that you were one of the voices that made it happen?
Each of your signatures counts, and together we can be heard and achieve something. ;)
 
Even if this technology were real (and it looks really stupid, because the pyramids with animals looked like they were made on a PlayStation 1 :P), I couldn't know what they are really talking about, because this kind of technology exists mainly to catch people's attention, and if you're gullible, it will catch yours for sure.
IF IT WERE INFINITE, THAT WOULD BE REAL-LIFE GRAPHICS.
 
@inferKNOX

Thanks for the reminder about the petition, which I'd forgotten about. I'd be happy to sign it, but I want to check out the full picture before doing so. My post (147, before yours) explains where I'm coming from. If this is fraud, and truly infinite detail certainly is, then there's no way this muggins is gonna sign a petition, even if it costs nothing.
 