
Unlimited Detail Technology

@qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.

@Aleksander Dishnica: Either I'm not understanding you at all, or you're saying we're stupid. I really hope it isn't the latter because that would be very narrow-minded, and against the rules.
 
@qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.

@Aleksander Dishnica: Either I'm not understanding you at all, or you're saying we're stupid. I really hope it isn't the latter because that would be very narrow-minded, and against the rules.

The reason that I don't want to sign is that, until proven otherwise with a real-time demo, I think this is just a hoax, and I'm not going to support something that's not what they say it really is. Period.

I can recognise the serious potential benefits of teleportation and time travel, but I would never believe or support someone who claimed to have found a way to achieve them without demonstrating anything. The whole point here is that what they say they are doing (searching for the required points in an infinite/huge amount of point cloud data in real time) is, or has been, impossible until now. The author himself admits that searching for the required points is/was impossible, except that they have supposedly managed to do it. The problem is that they have not demonstrated that they can do it, and they have not even hinted at what method is used. Not even an overview has been given; the only thing mentioned is Google Search. The Google analogy is a poor one, because Google works thanks to distributed computing (a search runs across thousands if not millions of machines organised in a hierarchy), something that the UDT engine, running on a single PC, can't have access to.

Until they give us something tangible it's a definite no from me. The reason for this is that this is not the only engine promising this kind of advancement (it is, however, unique in its scale, which again works against them for being unrealistic); there are hundreds of similar claims (should we pay attention to all of them?), with both point clouds and voxels, and none of them has been proven viable except for procedurally generated maps or fractals*, none of which would be usable for modern games.

If I had to support one exotic third-party engine I would choose Atomontage over this any day. They are obtaining similar results, but they explain everything; they have, know and tell you the cons, and they don't rely on a "secret magic algorithm" that does what no one has ever come close to doing. That is, they stay on the ground.

Besides:

Intel is already researching voxel rendering.
Nvidia is already researching voxel rendering.
And of course there are countless third parties researching similar rendering methods.
I'm sure AMD is researching something similar too, so if all of them passed on UDT, it's probably for a reason. I don't see the point of pressuring them into taking another look when there's been no advancement. Going by what they say in the video, they couldn't reach the high ranks in those companies, which means they did make contact but were probably turned away by someone in the know, an actual engineer/programmer, who is usually a lower-ranked person within a company. That's the end of the story for me, until they show something.

*In fact the pyramids are nothing but a cheap trick, IMO. They are nothing but the same object instanced/replicated many times to form a Sierpinski pyramid. This (the Sierpinski structure) matters for two reasons. One is that the memory footprint is ridiculously small compared to having to represent a world akin to, e.g., Crysis, BC2 or Metro 2033 with point clouds. The other directly affects the so-called search engine, and it really puts the claimed efficiency of the engine into question: with this fractal organisation, once one point has been "searched" for one of the objects (the rare animal), you "automatically" know where the same point is in each of the other 2 billion objects just by running the simple fractal algorithm. That would not be possible, or would need orders of magnitude more computing power, with actual game-world data, or even if the animals were just placed randomly instead of forming a very well-known fractal structure.
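To illustrate the memory side of it, here is a rough C sketch of my own (not their code, obviously, since they show nothing): the whole multi-billion-object Sierpinski pyramid is one stored model plus a few lines of recursion, and knowing a point on one instance gives you the same point on every other instance just by adding the instance offset.

    #include <stdio.h>

    typedef struct { double x, y, z; } Vec3;

    /* the four corner directions of a Sierpinski tetrahedron */
    static const Vec3 corner[4] = {
        {  0.0,  1.0,  0.0 }, { -1.0, -1.0, -1.0 },
        {  1.0, -1.0, -1.0 }, {  0.0, -1.0,  1.0 }
    };

    static unsigned long long instances = 0;

    /* Recursively emit the translation of every instance. A renderer
       would draw the same stored model (the rare animal) at each
       offset, so one model yields 4^depth apparent objects. */
    static void emit(Vec3 p, double scale, int depth)
    {
        if (depth == 0) { instances++; return; }   /* leaf = one copy */
        for (int i = 0; i < 4; i++) {
            Vec3 q = { p.x + corner[i].x * scale,
                       p.y + corner[i].y * scale,
                       p.z + corner[i].z * scale };
            emit(q, scale * 0.5, depth - 1);
        }
    }

    int main(void)
    {
        Vec3 origin = { 0.0, 0.0, 0.0 };
        emit(origin, 1.0, 5);            /* 4^5 = 1024 copies */
        printf("%llu instances from one stored model\n", instances);
        return 0;
    }

At depth 16 those same few lines describe 4^16, about 4.3 billion instances, with no extra model data stored at all. Try getting that kind of "compression" out of an actual Crysis-style level.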
 
@qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.

Unfortunately, there are no benefits to be had. It's a bit like saying of a 419 advance fee scammer, "Perhaps they will give me just a little bit of money if I send them some? Ok, let's do it!" Now, that wouldn't fly, would it?

This outfit is touting infinities and there's your scam right there. They have no advanced graphics processing to show, just a scam to suck clueless investors in - and then disappear.

Bene's whole detailed post puts it all very well, but the crux of it is right here:

The reason that I don't want to sign is that, until proven otherwise with a real-time demo, I think this is just a hoax, and I'm not going to support something that's not what they say it really is. Period.

If this outfit actually demonstrates some graphics advancement (they won't) then people like myself and Bene will be happy to sign that petition.

There's no giving "Just a little bit" to a con. Instead, spit it out and throw it away, just like the worthless spam that it is.

I'm sorry I don't have better news, dude, but blame the asshole who made up the scam, not your friends on TPU who want to stop you and others from getting sucked in. :toast:
 
Fair enough Ben & qubit, fair enough.
I'll leave it to the maker to defend. I directed him to this thread (via email) to have a look at what you guys have to say and decide how open he wants to be with this thing. I cannot dispute that its authenticity is in question because of the very nature of the claim. Let's watch and see.
Here's hoping it hasn't been a colossal waste of effort.

EDIT: depending on his response, I will dump the petition if I see no proof myself.:)
 
Actually, the more I got thinking about this, the more the description of the object became critical for point cloud data.

The first thing is: how do we generate points without polygons? Well, I got thinking about that. You could in theory have a fairly complex rendering container and use a subdividing algorithm based on a bitmap to create detail. Kind of like tessellation, but it might be different.

For instance, you could save two axes of data, front and side. You could easily embed detail coordinate info in both of the axes to form 3D images. I was trying to work out how much space you'd need to create such images, but a basic bitmap should suffice. In fact, if you did it properly, you could make somewhat larger files based on a bitmap and encode all sorts of info in them using 64-bit structures.

You could put the x and y coordinates in roughly the first 32 bits, embed the height of the pixel from z in 8 bits, and embed the colour info in the next 24 free bits.

If you moved to an 80-bit word per pixel, you could shove a lot of detail into a bitmap.

Now, processing all this would be interesting. The reason you'd need the x, y and z coordinate info is to calculate light dispersal for the POV, to make the 2D picture appear 3D. Once you give each pixel a colour, it will render. If you make the word 128 bits, you could embed everything in the bitmap, though that makes each bitmap rather large; going to 256 bits per pixel would let you introduce a linked framework for morphing and animation. The key is to store as much data as possible per point, so figuring out the storage container is crucial. Reading 128-bit data is easy, but if you have a point of data for each pixel at, say, 1920x1080, then 128 bits (16 bytes) per pixel works out to roughly 33 MB per image, and about 66 MB at 256 bits, if my maths is right (and it's really early), for a massively detailed image. This, BTW, assumes a 1920x1080 3D image representing all the data possible in an image at high detail.
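Something like this is what I mean by packing a point into one word; a rough C sketch where the field widths are arbitrary, just to show the layout idea:

    #include <stdint.h>
    #include <stdio.h>

    /* one packed point: 16-bit x, 16-bit y, 8-bit z height and
       24-bit RGB colour = 64 bits total */
    static uint64_t pack_point(uint16_t x, uint16_t y, uint8_t z,
                               uint8_t r, uint8_t g, uint8_t b)
    {
        return ((uint64_t)x << 48) | ((uint64_t)y << 32) |
               ((uint64_t)z << 24) | ((uint64_t)r << 16) |
               ((uint64_t)g <<  8) |  (uint64_t)b;
    }

    int main(void)
    {
        uint64_t p = pack_point(1024, 512, 200, 255, 128, 0);
        /* unpack the colour again to show the layout round-trips */
        printf("r=%u g=%u b=%u\n",
               (unsigned)((p >> 16) & 0xFF),
               (unsigned)((p >>  8) & 0xFF),
               (unsigned)( p        & 0xFF));
        return 0;
    }

The wider words (80/128/256 bits) would just mean more fields in the same scheme.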
 
Fair enough Ben & qubit, fair enough.
I'll leave it to the maker to defend. I directed him to this thread (via email) to have a look at what you guys have to say and decide how open he wants to be with this thing. I cannot dispute that its authenticity is in question because of the very nature of the claim. Let's watch and see.
Here's hoping it hasn't been a colossal waste of effort.

EDIT: depending on his response, I will dump the petition if I see no proof myself.:)

You're welcome, Infer. :toast:

Excellent - pointing him to this thread was a great idea! Now watch this con artist shy away... ;)
 
Ok, just had another look around the websites (they have two, you know. :rolleyes: ) and here's a few of the telltales that should set off alarm bells:

- Both websites look extremely amateurish, like someone with no web talent knocked them up in two minutes flat. www.euclideon.com (cool-sounding name, I'll give it that) and www.unlimiteddetailtechnology.com Wot, a revolutionary graphics company can't knock up a decent website? Really? Let's have a look at a nice boring, established one, shall we: www.nvidia.com See the difference?
- Tech has been in development for "years"; they are just putting the finishing touches on it now and it will be showcased/released "Real Soon Now". Yeah.
- Big talk of "investors". Nah, milking the marks for all they've got and then disappearing is what it's really all about.

And finally, this is the big one:

- Makes this impossible claim: It enables computers to display infinite geometry at real time frame rates. This is total bollocks. The fact it's "all done in software" just rubs it in. However it's done, it would require the computer to have infinite bandwidth, infinite power consumption, heck, infinite everything. An impossibility. Here, take one in the nuts guys: :nutkick: Assholes.
 
Video was cool, but the shadows need a little work. This kind of stuff usually gets squashed by the big companies.
 
Even if this technology were real (and it looks really stupid, because the pyramids with animals looked like they were made on a PlayStation 1 :P), I couldn't tell what they are really talking about, because this kind of technology pitch exists only to catch your attention, and if you are gullible, they will catch yours for sure.
IF IT WERE INFINITE, THAT WOULD BE REAL-LIFE GRAPHICS.

They still have to "draw" it, if you get what I mean. Can you draw or render a lifelike giraffe (even a non-interactive one)? Because these software developers can't either.

As for the PS1: does the PS1 support about a million more polygons? :laugh:

Did you look at stuff like trees or the floor?




By the way, @ everyone: I still can't freakin' believe you're taking "unlimited" literally.

The point is, it will only render as many points as there are pixels on your screen.

NOTHING else is rendered. If you put a car in front of, say, a person, the part of the person obscured by the car would NOT be rendered at all.

That is how you get "unlimited detail", as the only limit is the game designer's willingness to add more detail, and also HDD space.

THAT is it.

If you had 10gb spare for a game you could have 10gb worth of "detail"*

if you had a 1tb spare for a game you could have 1tb worth of "detail" *

if you had 1000 tb spare for a game you could have 1000tb worth of "detail"*

Well, obviously some of that would be game code n' shiz; not all of it is going to be the 3D data.

That is what is unlimited about it, the only limit is the design and the storage space.
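To put it another way, here's a toy C sketch of the idea as I understand it (my own guess; the actual algorithm is secret): the per-frame work is one lookup per screen pixel, no matter how many points sit on disk.

    #include <stdio.h>

    typedef struct { unsigned char r, g, b; } Color;

    /* stand-in for whatever spatial index they actually use;
       here it just returns a dummy colour */
    static Color lookup_visible_point(int px, int py)
    {
        Color c = { (unsigned char)(px & 0xFF),
                    (unsigned char)(py & 0xFF), 0 };
        return c;
    }

    /* one lookup per pixel: per-frame cost scales with the screen
       resolution, not with the size of the point data set */
    static void render_frame(Color *framebuf, int w, int h)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                framebuf[y * w + x] = lookup_visible_point(x, y);
    }

    int main(void)
    {
        enum { W = 1920, H = 1080 };
        static Color framebuf[W * H];
        render_frame(framebuf, W, H);
        printf("lookups per frame: %d\n", W * H);
        return 0;
    }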




@qubit specifically: they say a demo will be out in 12-16 months on their second website, in their press release (posted September).
That's about normal, really, when a consumer demo is not at the top of your list of things to do (as it's game developers who will be most interested in this).
 
Has anyone contacted a game dev company (like Epic Games, Bethesda Softworks, EA, Blizzard, Gearbox, etc.) about this yet? Please do and please post!
 
In 16 months, as the guy Dell says, he and his company fellows are gonna be rich, and MS's DX11 is gonna eat Dirt 3.
 
Just as a thought: if they use a search-engine-like system, how would they handle transparency?
 
Just as a thought: if they use a search-engine-like system, how would they handle transparency?

Now that I'm more awake:

That's along the lines of what I was thinking. This new rendering method doesn't seem to have any of the fancy features we're used to: no shadows and lighting, no transparency, no fancy effects.

Maybe they aren't implemented yet, but what if they CAN'T be?
 
Well, I spoke to Mr Dell. His response was enough to reassure me that having faith in this technology is not a waste of time. I maintain to you all that this is worth it, and I humbly request your signatures on the petition.
I understand if any of you feel concerned enough not to want to act without proof as yet, but I remind you that part of the petition's purpose is to reduce the time it'll take to get that proof to us "Joe the plumbers" (lol). I've rewritten the petition a bit so that those with concerns won't feel they're endorsing more than they mean to, like funding.

@hellrazor & Mussels: if you look at this, you'll see that that doesn't seem to be so:
http://features.cgsociety.org/story_custom.php?story_id=5615&page=1
Remember that this guy isn't introducing point clouds; he's introducing a more efficient way of rendering point clouds that makes them viable for real-time rendering.
 
Yeah, but that's saying the technology exists to do this via software, slower than real time.

It's not saying the engine these other people have is capable of doing the same in real time.
 
C'mon people, has no one seen my post 157? It tells you all you need to know that this is a scam. It's a no-brainer.

Don't mess around getting sucked into petitions and stuff like that. This Dell guy has claimed "It enables computers to display infinite geometry at real time frame rates." There's no qualifier there; it means literally what it says, which is impossible. There's also the other stuff I pointed out showing this is a hoax.

@inferknox: of course he's going to reassure you so you'll sign the petition. Why would he do otherwise? To keep pushing for others to sign this stupid petition after I've exposed the scam is idiotic.

Nah, get him to prove it first, like everyone else with a new invention has to, rather than asking us to have "faith" in his new system so that he'll "reward" us later. :rolleyes:
 
@hellrazor & Mussels: if you look at this, you'll see that that doesn't seem to be so:
http://features.cgsociety.org/story_custom.php?story_id=5615&page=1
Remember that this guy isn't introducing point clouds; he's introducing a more efficient way of rendering point clouds that makes them viable for real-time rendering.

That cgsociety article does not help at all; that article, and many many others of its kind, is in fact what deters us from believing his claims even slightly. From the article you should understand that it takes minutes to render a single frame with that kind of data, probably on a dual 8/12-core Xeon/Opteron workstation. What Dell claims is real time, 24/30 fps: at minutes per frame versus 30 fps, that is a few thousand times faster, on a single core, and he even claims mobile phones. Bah! Show it or stfu.
 
@Mussels: I pointed you there to show you the capabilities of point clouds. This tech is a means of rendering that faster.

@qubit: Such a thing as "infinite" could never be meant literally, and I understood from the start that it was relative to current methods. I disagree with your notion of having "exposed" something, but I do agree that you made some good points in your concern. It was not that he reassured me; it's how he did it, i.e. what was said.
There is nothing for them to gain by faking this, so I don't feel that there's risk in supporting it. If they change their tune and start talking of money, I will turn away without hesitation.
(Just check the note I put at the bottom of the petition.)
 
This is fucking awesome! I really hope this technology is as good as they say and takes things to the next level. If so, things could open up to a whole new world of 3D!!!

We could possibly finally get more with less... think about it... using the computational power of today's GPUs with this, we could do some amazing things!!!
 
This is fucking awesome! I really hope this technology is as good as they say and takes things to the next level. If so, things could open up to a whole new world of 3D!!!

We could possibly finally get more with less... think about it... using the computational power of today's GPUs with this, we could do some amazing things!!!

If you're interested in it, please be sure to sign our petition. Just check my sig for it.:toast:
 
Here are some pics I found:
[image: two screenshots of the "Jungle Puppy" point cloud model, showing the detail on the legs]

I call this a Jungle Puppy, because I'm not very good at naming things. You can see the high level of detail on the legs in the second picture, it's all real point cloud geometry, running in unlimited amounts in real time, and it is a software system.
 
The main problem here is that everything you dig up is at 320x240 or 640x480; there's no real HD resolution to speak of. Unlimited detail at old SNES resolutions, maybe :roll:, going by what we can see from their images and demos.
 
Some interesting comments I've come across:
A mix of this and polygons would be the best solution, use this method for background items that don't need hit detection and whatnot, and polygons for the stuff that does.
As it happens, when the guy behind this posted on B3D about the tech, that's precisely the sort of early implementation he said he was pushing for. Having real 3D backgrounds rather than 2D skyboxes could be a cool use. A game like FF13 has a crapload of static backgrounds; it would sure look nice if they were all 3D rather than flat bitmaps.
source

I think one of the main problems mentioned is the speed, and the fact that it is single-threaded pure C code (no multithreading or intrinsics).

For some fast optimisation he could try OpenMP or something similar; it should help speed up several loops by balancing the load over several cores (it doesn't work with all loops, but it only requires a #pragma and a tickbox in Visual Studio, if I remember correctly).

Saying that, I am basing the above on a two-year-old Beyond3D forum thread, so he may have come a lot further since then.
What I find strange is that this is popping up all over the web again, but there's nothing new in the demos. They're showing the same silly giraffe thing, in purely static environments with no lighting or shading, just like they did two years ago. No progress in two years sets my alarm bells ringing. That they're still not addressing the issues of dynamic lighting, animation and complex shading some two years later sort of points to them being major issues. Even if they are issues that simply cannot be addressed, it still has potential if used in a hybrid engine with polygons handling the dynamic stuff. In fact, that's precisely what id Tech 6 is proposed to be based around: static voxels for the environment and any static items, and polygons for characters and other dynamic items.

I didn't realise their algorithm was "currently" only single-threaded. That's obviously an issue, because from the numbers I've read it's not fast enough for realtime graphics at decent resolutions yet, and single-threaded performance on consoles is horrible (and probably still will be next generation). If it's possible to make the algorithm massively parallel and run it on the shader units of modern GPUs, then we may be talking, but if it's something that's going to forever rely on decent single-threaded performance then it's never going to get off the ground.
source
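For reference, the OpenMP trick mentioned in that first comment really is about one line. A rough C sketch (mine, not his code; build with -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    #define N 10000000

    int main(void)
    {
        static float out[N];

        /* the one-line change: spread the iterations over all cores */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            out[i] = i * 0.5f;      /* stand-in for per-point work */

        printf("out[42]=%f, up to %d threads\n",
               out[42], omp_get_max_threads());
        return 0;
    }

It only pays off when the loop iterations are independent of each other, which is presumably why it "doesn't work with all loops".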

Lol, in all reality this whole thing is probably fluff. I'm just hoping for the off chance that it isn't! :roll:
EDIT: At least I'm learning a tonne about CG thanks to looking deeper into it. Speaking of which, many people are talking about id Tech 6.
 
Lol, in all reality this whole thing is probably fluff. I'm just hoping for the off chance that it isn't! :roll:
EDIT: At least I'm learning a tonne about CG thanks to looking deeper into it. Speaking of which, many people are talking about id Tech 6.

Why do you think we are so skeptical? I already mentioned that Carmack is working on something like this for id Tech 6, and he considered including some related stuff, like data compression algorithms, in id Tech 5. I think MegaTexture has something to do with it. Here, from the wiki:

Id has presented a more advanced technique that builds upon the MegaTexture idea and virtualizes both the geometry and the textures to obtain unique geometry down to the equivalent of the texel: the Sparse Voxel Octree (SVO). Potentially id Tech 6 could utilize this technique. It works by raycasting the geometry represented by voxels (instead of triangles) stored in an octree. The goal being to be able to stream parts of the octree into video memory, going further down along the tree for nearby objects to give them more details, and to use higher level, larger voxels for further objects, which give an automatic level of detail (LOD) system for both geometry and textures at the same time. The geometric detail that can be obtained using this method is nearly infinite, which removes the need for faking 3-dimensional details with techniques such as normal mapping. Despite that most Voxel rendering tests use very large amounts of memory (up to several Gb), Jon Olick of id Software claimed it's able to compress such SVO to 1.15 bits per voxel of position data.
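To make the LOD idea in that quote concrete, here's a toy C sketch of my own (not id's code): pick an octree depth from the viewing distance, halving the detail each time the distance doubles.

    #include <math.h>
    #include <stdio.h>

    /* nearby objects get deeper octree levels (smaller voxels),
       distant ones coarser levels */
    static int lod_level(double distance, int max_depth)
    {
        int drop = (int)fmax(0.0, log2(distance));
        int level = max_depth - drop;
        return level < 0 ? 0 : level;
    }

    int main(void)
    {
        for (double d = 1.0; d <= 4096.0; d *= 4.0)
            printf("distance %6.0f -> octree depth %2d\n",
                   d, lod_level(d, 12));
        return 0;
    }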

However, that's for id Tech 6, which comes after id Tech 5, which has not been released yet and will first be used in Rage, due in late 2011. After that comes Doom 4, also on id Tech 5, and then id Tech 6. To get an idea of engine cycles, here's a list of when the previous engines debuted, from the top of my head:

id tech 1 - 1996 - Quake
id tech 2 -1997 - Quake 2
id tech 3 - 1999 - Quake 3
id tech 4 - 2004 - Doom 3
id tech 5 - 2011 - Rage

id tech 6 would release after 2015. From the id Tech 6 wiki article:

Preliminary information given by John Carmack about this engine, which is still in early phases of development, tend to show that id Software is looking toward a direction where ray tracing and classic raster graphics would be mixed.[1] However, he also explained during QuakeCon 08 that the hardware capable of id Tech 6 does not yet exist.

Compare that claim, coming from the greatest expert in graphics engines, to the claim made by Mr. Dell... it just doesn't make sense. Like I said, and as explained in the quotes above, this kind of world representation requires huge amounts of GB, and although you can compress them a lot, the hardware capable of decompressing, streaming and calculating it all on the fly does not exist yet. It's either:

a) a huge amount of GB (= huge bandwidth required), lower CPU requirement
b) high data compression, huge CPU power and memory bandwidth required
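To put numbers on that trade-off, a back-of-envelope calculation in C using Olick's 1.15 bits/voxel figure from the quote above (the billion-voxel scene size is my own made-up example):

    #include <stdio.h>

    int main(void)
    {
        double voxels = 1e9;                     /* a billion-voxel scene  */
        double raw_gb = voxels * 3 * 4 / 1e9;    /* 3 x 32-bit floats each */
        double svo_gb = voxels * 1.15 / 8 / 1e9; /* 1.15 bits per voxel    */
        printf("raw positions: %.1f GB  SVO-compressed: %.2f GB\n",
               raw_gb, svo_gb);
        return 0;
    }

So option b) shrinks a billion voxels from about 12 GB of raw positions to roughly 0.14 GB, but you pay for it in the CPU power and bandwidth needed to decompress, stream and traverse it in real time.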
 