# Unlimited Detail Technology



## AphexDreamer (Mar 11, 2010)

I thought this was very interesting. I did a search in the forums and nothing came up for it, so I'm sharing it.

Using something called Point Cloud Data (which kicks our current method of making video games in the arse), they are able to have unlimited detail in games with no object pop-up. Instead of using polygons, it uses points, giving rather overwhelming detail in objects. Well, just watch the video and be amazed.

http://www.youtube.com/watch?v=Q-ATtrImCx4

Hope this follows through. 

http://unlimiteddetailtechnology.com/description.html

*Latest Update (2011)*

Euclideon & Unlimited Detail - Bruce Dell Interview

Official website, I take it: http://www.euclideon.com


----------



## DirectorC (Mar 11, 2010)

Holy crap that's amazing.  Tessellation just got stabbed in its stupid face.  So did all these big expensive GPUs?!


----------



## AphexDreamer (Mar 11, 2010)

DirectorC said:


> Holy crap that's amazing.  Tessellation just got stabbed in its stupid face.  So did all these big expensive GPUs?!



Exactly. I can't help but think it's all for the money. This man did it all through software; if that's what can be done through some heavy coding, then wake up, people.


----------



## Nick89 (Mar 11, 2010)

I like how it makes the scenes, like a search engine. That seems more efficient than the current system.


----------



## pantherx12 (Mar 11, 2010)

Seems like a logical and possible way of doing things, actually.

Good luck to the guys behind it is all I say, it only benefits us as gamers : ]

I'm guessing when ATI or NV got the call from them they shat bricks about going out of business themselves.

Thinking short term rather than long term as usual. What is it with companies doing that? D:


----------



## kid41212003 (Mar 11, 2010)

So, basically, it only renders what you can actually see, and it's equal to your screen resolution. Like a 2D game.
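In toy form, that idea (one scene lookup per screen pixel, like Nick89's "search engine" comparison) might look like the sketch below. The voxel dictionary, orthographic rays, and `render` helper are all hypothetical illustrations, not Euclideon's actual algorithm:

```python
# Toy sketch of "one point per screen pixel": instead of rasterizing every
# polygon, each pixel asks the scene "what is the frontmost point on my ray?"
# A real point-cloud engine would use an octree or similar spatial index;
# here the scene is just a dict keyed by (x, y, z).

def render(width, height, voxels, max_depth=32):
    """Return a 2D list of colors, doing one scene query chain per pixel."""
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            color = None
            for z in range(max_depth):       # march the ray front to back
                hit = voxels.get((x, y, z))  # O(1) "search engine" lookup
                if hit is not None:
                    color = hit              # first hit wins; stop early
                    break
            row.append(color)
        frame.append(row)
    return frame

# A 4x4 screen and two points; cost scales with pixel count, not scene size.
scene = {(1, 1, 3): "red", (2, 2, 0): "blue"}
frame = render(4, 4, scene)
print(frame[1][1], frame[2][2], frame[0][0])  # red blue None
```

The point of the sketch is the cost model: work is proportional to screen pixels times march depth, never to how many points the scene holds.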


----------



## DirectorC (Mar 11, 2010)

pantherx12 said:


> I'm guessing when ATI or NV got the call from them they shat bricks about going out of business themselves.



It probably helped spawn the current new technologies in place that can put the power of GPUs to use in executing regular code...



kid41212003 said:


> So, basically, it only renders what you can actually see, and it's equal to your screen resolution. Like a 2D game.



Uh, both methods make a 3D model into a 2D image, buddy.


----------



## AphexDreamer (Mar 11, 2010)

There is a very good comparison video on his website, up for download, that's just amazing. I really want to use this. It's just DX11 for all.


----------



## Phxprovost (Mar 11, 2010)

There are two possible outcomes for this tech:
1. It's a scam and will be forgotten.
2. It's real, and any one of the juggernauts in the industry buys it and buries it, never to be heard of again.

So don't get your hopes up.

While OT: this looks amazing and has so much potential, but I foresee problems, mostly in the realm of directional natural lighting and physical interaction between objects. Honestly, I think something like this has the most to gain in the medical field. Imagine a model of a human heart that becomes increasingly more complex the further a student zooms into it.


----------



## pr0n Inspector (Mar 11, 2010)

Screenshots are tiny and of very low quality. He can say whatever he wants about this tech.


----------



## AphexDreamer (Mar 11, 2010)

Phxprovost said:


> There are two possible outcomes to this tech
> 1. It's a scam and will be forgotten.
> 2. It's real, and any one of the juggernauts in the industry buys it and buries it, never to be heard of again.
> 
> ...



Yeah, about your two points, I agree, and that's what I'm afraid of. As for the physical interaction between objects, I thought about that too, but I don't think it's as big of an issue as we make it. It might actually be easier to do, since these points can be taken apart and reassembled far more easily than polygons can.


----------



## Frick (Mar 11, 2010)

Interesting, but it seems the biggest question marks are animation, physics and lighting. It will be interesting to follow, though.


----------



## AphexDreamer (Mar 11, 2010)

Frick said:


> Interesting, but it seems the biggest question marks are animation, physics and lighting. It will be interesting to follow, though.



Yeah, it's in the experimental stage and is still phenomenal.


----------



## heky (Mar 11, 2010)

Interesting, maybe we will see some good from it.


----------



## arroyo (Mar 11, 2010)

Nothing new:
http://www.tomshardware.com/reviews/voxel-ray-casting,2423-2.html

Probably it's just ray-casting rendering, the same as in the game Outcast.
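For reference, Outcast-style terrain ray casting can be sketched in a few lines. This toy marches one screen column front to back and draws only the vertical spans that rise above everything nearer; the function name, projection formula, and numbers are mine for illustration, not taken from any shipped engine:

```python
# Toy heightmap ray caster in the spirit of Outcast's terrain renderer:
# for one screen column, walk the terrain samples away from the camera and
# emit only the vertical spans that poke above everything already drawn.

def cast_column(heights, eye_height, screen_h=8):
    """Return (distance, span_bottom, span_top) for each newly visible span."""
    horizon = 0                                   # highest row filled so far
    spans = []
    for d, h in enumerate(heights, start=1):
        # crude perspective: apparent height shrinks with distance d
        screen_y = int((h - eye_height) / d * screen_h / 2) + screen_h // 2
        screen_y = max(0, min(screen_h, screen_y))
        if screen_y > horizon:                    # visible above nearer terrain
            spans.append((d, horizon, screen_y))
            horizon = screen_y
    return spans

# Four terrain samples along one ray; only two produce visible spans,
# because the others are hidden behind nearer, taller ground.
spans = cast_column([2, 6, 7, 9], eye_height=5)
print(spans)  # [(2, 0, 6), (4, 6, 8)]
```

The rising-horizon trick is what keeps the cost per column linear in view distance regardless of how much terrain is occluded.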


----------



## pantherx12 (Mar 11, 2010)

Phxprovost said:


> 2. It's real, and any one of the juggernauts in the industry buys it and buries it, never to be heard of again.




If it is real the guy seems to have enough sense to not just settle for a quick buck, he'd make far more money and continue to have control over the tech if he keeps hold of it himself.


----------



## Mussels (Mar 11, 2010)

curiouser and curiouser.


----------



## AphexDreamer (Mar 11, 2010)

pantherx12 said:


> If it is real the guy seems to have enough sense to not just settle for a quick buck, he'd make far more money and continue to have control over the tech if he keeps hold of it himself.



He says in one of his videos that he will go with either ATI or Nvidia on this, and if neither of them wants it, he is going to go it alone.


----------



## erocker (Mar 11, 2010)

We won't see anything from this for quite a while. I'm pretty skeptical of their "software" and "special super-complicated algorithms". Of course, info coming out of the mouth of the person trying to sell it is going to be all magic and rainbows. We need a conscientious 3rd party to have a look-see at this tech. I'm definitely not holding my breath.


----------



## AphexDreamer (Mar 11, 2010)

erocker said:


> We won't see anything from this for quite a while. I'm pretty skeptical of their "software" and "special super-complicated algorithms". Of course, info coming out of the mouth of the person trying to sell it is going to be all magic and rainbows. We need a conscientious 3rd party to have a look-see at this tech. I'm definitely not holding my breath.



Yup, you have a very valid point. I'm just thinking, man, what if what he shows in his videos is real...


----------



## FordGT90Concept (Mar 11, 2010)

Nick89 said:


> I like how it makes the scenes, like a search engine. That seems more efficient than the current system.


Remember that Google has dozens of supercomputers around the world performing those searches on enormous banks of memory and hard drives. The exact number and exact power are a trade secret (they are known to be enormous, however).


Generalizing a pixel takes more processing power than rendering a polygon, because to me it sounds like the polygons are still there, but that's not what it commits to the display. It is an extra step added to an old method that basically has the same effect as anti-aliasing, so I agree with erocker. It is "magic and rainbows" until it has reached the market (assuming it does).


Oh, and it isn't "model swapping," it is "level of detail." Those are the same models, but they use fewer or more polygons depending on how large the object is on screen.
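That distance-based level-of-detail mechanism can be sketched like this; the function name and threshold numbers are made up for illustration, not taken from any particular engine:

```python
# Hypothetical distance-based level-of-detail selection: same model, but a
# coarser mesh is chosen the farther (smaller on screen) the object gets.

def pick_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return an LOD index: 0 = full-polygon model, higher = coarser mesh."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)   # beyond the last threshold: coarsest model

print(pick_lod(5.0), pick_lod(50.0), pick_lod(200.0))  # 0 2 3
```

An engine would pair each index with a pre-built mesh, so no model is ever "swapped" for a different asset, only for a decimated copy of itself.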


----------



## newconroer (Mar 11, 2010)

They'd do themselves a massive marketing favor if they recreated scenes from popular games with this technology. A compare-and-contrast with something tangible and 'real world' would be a lot more fruitful than yet another tech demo.

ATi and Nvidia have awesome tech demos, but the games don't look like that.



FordGT90Concept said:


> Generalizing a pixel takes more processing power than rendering a polygon, because to me it sounds like the polygons are still there, but that's not what it commits to the display. It is an extra step added to an old method that basically has the same effect as anti-aliasing, so I agree with erocker. It is "magic and rainbows" until it has reached the market (assuming it does).



Yes, I wonder if the polygons are still in place as some sort of base content.

This doesn't seem like something that can be independent and still drive graphics, but maybe in tandem, working simultaneously with current methods, they could bring things up a level.

For me, the graphics architecture we've become accustomed to will not change until they can do active, real-time vector-type drawing, where each scene is drawn in real time, thus making the flexibility of things like physics, lighting and so forth virtually unlimited.


----------



## Super XP (Mar 11, 2010)

This software-based technology looks very nice and promising. In order for this thing to take off, three companies need to be involved:
1) Micro$oft,
2) AMD/ATI &
3) Intel.
If they climb on board, and MS even puts out an OS update to further support this new tech in their DirectX technology, it may very well be one of the best breakthrough techs for gaming in existence. And we gamers will enjoy some fabulous unlimited 3D graphical environments.

But something tells me this is going to get buried faster than you can say Polygon. :shadedshu

Is there a possibility of having this new software-based technology integrated into games, to perhaps take the load off massive polygon counts by redirecting polygons to where they are most needed and just using this software for, say, backgrounds and such? There must be a way to combine both technologies, IMO. Am I right?


----------



## ctrain (Mar 12, 2010)

Voxel demos already exist that do similar-looking scenes (i.e. seemingly impossible amounts of detail, at least if tried with polygons).

Stuff in Quake 1 also turned into crude point clouds at a distance, if I remember right.


----------



## Easy Rhino (Mar 12, 2010)

i am simply not buying into this concept until i see some real world examples...


----------



## Mussels (Mar 12, 2010)

Easy Rhino said:


> i am simply not buying into this concept until i see some real world examples...



I don't think it matters if any of us buy into it... it only matters if NV or ATI does.


----------



## ctrain (Mar 12, 2010)

http://voxels.blogspot.com/

is pretty close.

Not quite the same, but...
Anyway, the tech is plausible, but I don't see it running at any high resolution without some form of acceleration (or a whole lot of CPU cores at its disposal).


----------



## Easy Rhino (Mar 12, 2010)

Mussels said:


> I don't think it matters if any of us buy into it... it only matters if NV or ATI does.



by "buy into" i mean believe is the next big thing...


----------



## TIGR (Mar 12, 2010)

I don't think ATI or nVidia would "buy into it" since by its nature, this would hurt business for them pretty badly.

If this can bring us photorealistic gaming within two years, awesome. If it's vaporware, well, that wouldn't really be a surprise. Truth be told, I'd almost prefer that it be vaporware, because while it'd be nice to have this kind of rendering available without the GPU power, I _like_ the fact that rendering complicated scenes is computing-intensive, simply for the effect it has on driving the progress of technology. The need for more and more GPU power to make sure games get increasingly visually stunning year by year has produced fantastic progress that has an effect even outside of gaming in the present (e.g. Folding@home), and which is part of making new kinds of virtual experiences possible in the future.


----------



## Mussels (Mar 12, 2010)

Easy Rhino said:


> by "buy into" i mean believe is the next big thing...



So did I.


----------



## Meizuman (Mar 12, 2010)

http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419

This was posted on Jan 30, 2009.


----------



## AphexDreamer (Mar 12, 2010)

Meizuman said:


> http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419
> 
> This was posted in Jan 30 2009



Yeah, but not everyone here knew about it.


----------



## Steevo (Mar 12, 2010)

Consider floating-point math on 1920x1200 pixels at 32-bit color depth:

2,304,000 pixels per frame
73,728,000 raw bits per frame, with no overhead
4,423,680,000 raw bits per second at 60 FPS

We still have no order to the pixels, just pixels.

So how deep do we want to calculate for a field of view? I know it will be a relative value, but how far? How far do you want to see for a sniper shot? A field-of-view equivalency of 20X?

Let's see what a 16-bit stack will get us: from -32768 to 32767, applied to pixels of depth, that allows for 65,535 levels of Z draw depth. So each pixel must have 32 bits of color information and 16 bits of position information, and then we have to add vectoring for motion calculation to any moving items.



Face it, we are well beyond what a CPU is capable of, approaching it this way. Yes, there are programs out there that are 64 KB and have pretty fractal patterns and techno sounds, but few or none that you interact with on the same level as a '90s game.
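Steevo's raw-bit figures above do check out; here is the same budget arithmetic as a quick Python sanity check (the 32-bit color and 16-bit depth widths are taken from his post, and the final byte conversion is my addition):

```python
# Sanity check of the raw per-frame pixel budget discussed above.
WIDTH, HEIGHT = 1920, 1200
COLOR_BITS = 32          # 32-bit color per pixel
DEPTH_BITS = 16          # signed 16-bit Z position (-32768..32767)
FPS = 60

pixels = WIDTH * HEIGHT
color_bits_per_frame = pixels * COLOR_BITS
color_bits_per_sec = color_bits_per_frame * FPS

print(pixels)                # 2304000 pixels per frame
print(color_bits_per_frame)  # 73728000 raw color bits per frame
print(color_bits_per_sec)    # 4423680000 raw color bits per second at 60 FPS

# Adding depth: 48 bits per pixel, converted to bytes per frame.
bytes_per_frame = pixels * (COLOR_BITS + DEPTH_BITS) // 8
print(bytes_per_frame / 1e6, "MB per frame incl. depth")  # 13.824 MB
```

So even before any motion vectors, a brute-force per-pixel representation is about 13.8 MB per frame, which is the scale of the problem a point-based renderer has to index its way around.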


----------



## Meizuman (Mar 12, 2010)

AphexDreamer said:


> Yeah but not everyone knew about it here.



Not meant to sound harsh... I just did some googling, since this thing has surfaced on many forums, and found that.


----------



## Easy Rhino (Mar 12, 2010)

Mussels said:


> so did i



Well then, in that case, it doesn't matter what ATI or Nvidia thinks about this tech. If it doesn't exist and this guy is just making up a load of crap, then the point is moot. If the tech does exist and is as awesome as the guy says it is, then any number of companies would grab onto it even if ATI/Nvidia did not. Intel has their own graphics solution, Matrox is still out there, and God only knows the number of partners out there looking for a piece of the pie. And not even they would buy into it unless they saw some real-world examples.


----------



## pantherx12 (Mar 12, 2010)

Remember this is just a video for us, he would have an actual live demonstration if he had a meeting with a company.

Still don't think ATI/NV will buy into it as they've invested so much in the polygon system.

Since everyone thinks short term instead of long term these days, it probably will fall flat on its face even if it's 100% real and could work with animation etc. too.


But if he manages to get funding from another source... as people mentioned, this would be insane for medical use, so he should try places like that.


----------



## crazyeyesreaper (Apr 24, 2010)

As far as rendering what you see: I'm sorry to say that in the 3D industry in general, that's the way it works, so that aspect of rendering only what's on the screen has been around a while. In theory this will work; as of now? Not a chance. Give it 5-6 years, aka 2 more CPU cycles and 3-4 GPU cycles, and it will become a viable method. Sigh... I still remember when this was just speculation and theory in my computer animation bachelor's courses in college. Either way, I can say it is a viable alternative; this guy's implementation I doubt will work, but you always need a base to start with.

Ageia made PhysX, Nvidia bought it and adapted it, etc. The same will happen to this, or, as has been stated, it will disappear, but not entirely. The reason this won't disappear is that the movie industry already has smaller pieces of this kind of tech at work, and has for years. It's just the evolution of one way of doing things.


----------



## jimmyme (Apr 24, 2010)

Thanks for the interesting post.
I don't get it: if he's running unlimited blah-blah-blah data, why aren't there unlimited models in all the videos? Instead of 13 pyramids of fugly-looking things, unlimited pyramids of fugly-looking things...?


----------



## crazyeyesreaper (Apr 24, 2010)

Coding something and using it effectively are two very, very different things. For example, I have a MEL script coder make my tools in Maya 2010 for me, but he has no idea how to use them or why I need them to do what they do. He writes it, I test it. The same could apply here: he was able to write the code, but that doesn't mean he understands how to harness it. A good example of this is solar energy: we have the know-how, but we don't exploit it. The same applies here. As I said, this isn't something new; the core aspects of this tech have been around for a while, and in some cases are already implemented. Just very few know how to utilize it in any effective way.


----------



## remixedcat (May 15, 2010)

this hopefully will get better.


----------



## remixedcat (May 15, 2010)

jimmyme said:


> Thanks for the interesting post.
> I don't get it: if he's running unlimited blah-blah-blah data, why aren't there unlimited models in all the videos? Instead of 13 pyramids of fugly-looking things, unlimited pyramids of fugly-looking things...?



Might be more of a technical person than a creative person???


----------



## AphexDreamer (May 15, 2010)

remixedcat said:


> Might be more of a technical person than a creative person???



Which he mentions... He's all like, "I'm not an artist," so you can just imagine what could be done with the work of a good artist.


----------



## remixedcat (May 15, 2010)

Maybe he needs one. Hopefully we can see some really good examples of this. We would eventually need tech like this. I am remaining neutral and always pack my salt block when it comes to everything, but I am ready and willing to accept new tech if it is seriously innovative.


----------



## inferKNOX (Dec 19, 2010)

Have any of you heard about any progress concerning this technology?
I've been reviewing it and find it really fascinating. It would indeed be incredible if this technology could increase the performance of GPU processing by a magnitude of a thousand times.
Surely at least AMD, who claims to want open standards and the progression of technology even if it doesn't necessarily mean personal gain, should see promise in this and act upon it?


----------



## remixedcat (Dec 19, 2010)

Point cloud data is already used in LIDAR tech.

LIDAR and other scanning tech use FRICKIN' LAZOR BEAMS to scan everything.

It makes 3D point cloud models of the objects you scan. Scan in a sword, and you've got a very realistic model of it, at natural scale.

On a HUGE MOFO SCALE:

NAVTEQ is using this to literally make 3D models of cities. Some of this is already used on Bing Maps, Silverlight edition, but to a way lesser extent. They want to map the world like this, and it would be awesome. Microsoft is helping them with this tech as well. It's also good for telling the exact distances of key attributes, like bridge heights or the exact distance between buildings; it's all there. They had an example of a guitar on a storefront, and they were able to tell you its exact dimensions and whether it would fit on another building's storefront.

Some 3D artists are using this to make models, but using traditional polys instead of all-out point cloud data. Pretty cool stuff. You can buy one of those scanners, but they cost an arm and a leg and your firstborn. I want one!


----------



## inferKNOX (Dec 19, 2010)

The head of UD Tech did mention in his videos that point cloud data is used for laser-scanning objects into 3D images, so I assume he was talking about the likes of LIDAR?
When you say several times that they're "using this," do you mean the actual UD Tech or just a point cloud data system?


----------



## remixedcat (Dec 19, 2010)

Point cloud data, sorry for the mix-up.

There are several ways to do point cloud data; LIDAR is one of many. There is way more scanning tech available for smaller models. I've seen it. Do want!

Those are just scanning techs to make it happen.

Imagine getting LIDAR scanning data for a real-size city like NYC and making a game take place in it; it would be an exact copy of that city. That would be neat.

Same with objects of any size. Gamers are demanding more realism from games, and this would be the easiest way to do it for some objects: a 15-minute scanning and texturing session vs. 6+ hours of modeling time. Much more cost-effective.

You can just use the data from the point cloud scanning session and import it into UD Tech's engine; it would be treated like any other map/model format, just like importing 3ds files into a regular game engine.


----------



## inferKNOX (Dec 19, 2010)

That does sound great, but it would take UD Tech becoming a legitimate player in the gaming industry, and judging by the timescale, i.e. when this was all talked about, it seems dead in the water.
I'm actually thinking about harassing AMD into acknowledging and progressing this technology. We need to get a petition or something going!

The reality is that we're always going to need/want more power, regardless of UD Tech or not. I believe that at most this could be a minor dent to graphics card companies initially, until people realise, "If my weak computer can do this, what will a powerful one do?!"

Come on guys, let's do something to push AMD to make good on its claims to promote the furthering of technology!


----------



## remixedcat (Dec 19, 2010)

Maybe AMD and Nvidia are trying to silence this because they know it would be too new for them and they would be scurred! They are scared shitless.

They are clinging to their polys. And milking them too!


----------



## inferKNOX (Dec 19, 2010)

No doubt they are. I however think (or would like to think) that the software could be incorporated somewhat at a hardware or driver level, thus benefiting one or both of them. I think if it's employed at an OS level, they could be debunked.
Also, if Intel, with the graphics we all hate so much, gets a hold of this and makes it their own, then truly the GPU makers could find themselves crying rivers.
I think only a preemptive strike, i.e. some form of implementation in their drivers/hardware/whatever by AMD/NV, could quell this potential upset for them; if it ever was/is to be a threat, that is.

Don't you think it could be possible to petition AMD (I keep saying them because of the claims I mentioned) into it, or at least into looking further into the feasibility of injecting it into their... GPU technology suite?


----------



## remixedcat (Dec 19, 2010)

I may contact some companies shortly. I am not sure if I'm gonna get a response, but it's worth a try!


----------



## inferKNOX (Dec 19, 2010)

Let me know which & I'll join in 
(Unless you're talking about companies you have personal connections with or something.)

I really believe it's possible to get this acknowledged if we push a petition, or an emailing campaign.
It'd definitely be worth it if we manage to get it up and going!


----------



## remixedcat (Dec 19, 2010)

Exactly!!!! We shall go! Onward ho!


----------



## stinger608 (Dec 19, 2010)

remixedcat said:


> I may contact some companies shortly. I am not sure if I'm gonna get a response, but it's worth a try!





inferKNOX said:


> Let me know which & I'll join in
> (Unless you're talking about companies you have personal connections with or something.)
> 
> I really believe it's possible to get this acknowledged if we push a petition, or an emailing campaign.
> It'd definitely be worth it if we manage to get it up and going!



I would agree! This is some awesome technology, and it should be noted soon. If it goes by the wayside, the original designer will be lost in the shuffle, and probably never credited for his or her development.


----------



## KieX (Dec 19, 2010)

Too good to be true. But there's always hoping.


----------



## inferKNOX (Dec 19, 2010)

Ok, all that being said, where do we start?
In all honesty, I can't think of any gamer who would be against this! I'm confident that GPU makers wouldn't suffer too much, because I believe the load reduction is a bit hyped. Even if it isn't that hyped, I do believe that applying various things like light dynamics, animation, physics, etc. will carry significant enough load to keep the GPU makers very relevant. Besides that, the scale of games would likely leap to this new standard, then plateau and continue to grow, increasing load continuously. It would of course be better to have a lower load from games, decreasing power consumption and freeing up resources for other things like folding, encoding, etc.
Basically, I don't think GPUs and such will reach obsolescence due to something that would improve game rendering efficiency, such as UD Tech.


----------



## remixedcat (Dec 19, 2010)

Yes! Yes! Yes! GPUs are indeed used in ways other than just rendering those companion cubes, or a sexy khajiit, or that +50 staff of penetration!

Serverbeach and PEER 1 even use them for some of their hosting services! (Way cool!)

Folding, video encoding, tomography, and financial data are all being processed on GPUs!

So, no! I doubt any GPU company will be hurting anytime soon!


----------



## Swamp Monster (Dec 19, 2010)

Whoever likes this stuff will probably like this software (demo):
http://downloads.guru3d.com/Agenda-Circling-Forth-GPU-particle-demo-download-2591.html
It's made entirely from little points.


----------



## wolf (Dec 19, 2010)

Looks very interesting but I'm not going to get my hopes up about it, not just yet.

It really makes absolute sense, though, because very little (perhaps even nothing) that occurs in nature has completely straight edges or flat surfaces; natural things are composed of atoms (read: points).

I'd love to see a mix where what needs to be made of points can be, but polygons can still be used at the same time, given a lot of what we see in games are man made inventions.


----------



## inferKNOX (Dec 19, 2010)

I totally agree with you, wolf, in that this needs to be implemented alongside current methods, and according to the head of UD Tech, it can be.

I disagree with your saying you won't keep high hopes for it, though. I think we as the graphics and tech enthusiast community need to push this thing so that the corporations don't cut it down and deny us what could possibly be an invaluable technology for fear of losing marginal profits.

There have been a few ideas/innovations I've seen on the tech scene that people have just sat back and watched, ultimately allowing them to fade into oblivion. Let us join together to not let this be another casualty of profiteering corporations that ultimately depress the progress we would benefit from! Together we can pressure them to accept this, I do believe so.


----------



## remixedcat (Dec 19, 2010)

I have contacted one company (not disclosing the name till I get anywhere).


----------



## erocker (Dec 19, 2010)

If anything, this technology will get bought out by some company and we'll never hear of it again. It will be very slowly incorporated under a different name for maximum profit. We can't have low-end cards with high-end performance.


----------



## inferKNOX (Dec 19, 2010)

I have contacted Euclideon (the name the Unlimited Detail guys go under) and told them that we're eager to push their cause.

@erocker: great performance on low-end cards means nothing if there's mind-blowing performance on high-end cards!
This technology has the potential to completely revolutionise the idea of acceptable graphics and rewrite the rendering performance we expect of our graphics hardware!


----------



## inferKNOX (Dec 19, 2010)

I've emailed AMD. Would you guys like me to put up what I said to them here?


----------



## AphexDreamer (Dec 19, 2010)

inferKNOX said:


> I've emailed AMD. Would you guys like me to put up what I said to them here?



Sure


----------



## inferKNOX (Dec 19, 2010)

AphexDreamer said:


> Sure


Tell me what you think, this is it:


> Dear Sir/Madam
> 
> I and other graphics enthusiasts are interested in the possibilities of Unlimited Detail Technology (unlimiteddetailtechnology.com). We would like to call on AMD, who has often claimed interest in the furthering of technology which is open and from which everyone stands to benefit. We would thus like AMD to recognise Unlimited Detail's potential and possibly work with the developers to find a viable implementation of the technology to work with AMD's GPU technology suite, on a software and/or hardware level, to deliver a greater graphical experience to your loyal customers. I request this as a customer of AMD (and formerly ATI) high-end products for many years, seeing Unlimited Detail as something both AMD and its customers can benefit from.
> 
> ...


----------



## AphexDreamer (Dec 19, 2010)

inferKNOX said:


> Tell me what you think, this is it:



Fantastic mate! Well said! 

Now it all depends on AMD's intentions. Either they will refuse to see the potential (and give some BS excuse for not acting on it), or they will recognize the true potential this has and act on it. I can only hope it's the latter.


----------



## remixedcat (Dec 19, 2010)

Excellent! I hope to get somewhere with the companies I've contacted.


----------



## copenhagen69 (Dec 19, 2010)

amazing stuff there


----------



## inferKNOX (Dec 19, 2010)

damn it, I keep getting a message in my email saying:


> Subject: Undeliverable: Unlimited Detail Technology
> Delivery has failed to these recipients or distribution lists:
> 
> tech.support@crmprd.amd.com
> ...



Hmph... I've tried 4 or 5 times now, using 2 email addresses and different settings (http://emailcustomercare.amd.com/), but it keeps failing to send.... What now?

EDIT: just tried sending it from my email address directly to tech.support@amd.com and no failure notice so far.


----------



## stinger608 (Dec 19, 2010)

Hopefully the tech support email address will work InferKNOX! The customer care email may just be having some issues, and may be the reason for the failure to deliver. Hoping anyhow.


----------



## scaminatrix (Dec 19, 2010)

JF-AMD should be able to give you a working/insider e-mail address if you ask him very very nicely...


----------



## inferKNOX (Dec 19, 2010)

Thanks, guys, for backing my rather ambitious idea to have these bigshots take this thing seriously; it means a lot to me that you guys didn't just resort to being naysayers. Together we can do it!
Let's gain momentum and interested parties on our side so that we can get this DONE!

Now, how does one go about setting up an online petition?

@scaminatrix: what do you mean by JF-AMD? Lol, think about it, though: how do you ask AMD for an insider email address without having any email address to ask from?


----------



## streetfighter 2 (Dec 19, 2010)

I'm not an industry big-shot AND I'm having trouble taking this seriously! If their work is half as groundbreaking as it seems, why does their logo look like it was made by a (not particularly talented) fourth-grader?

Did anyone bother to check if they had their amazing algorithm patented?

EDIT:
I searched the USPTO for patents written by Bruce Robert Dell, without success.
Searching for patents with "point cloud data" in the title yielded some results (the most recent of which was filed in 2005):


| # | Pat. No. | Title |
|---|----------|-------|
| 1 | 7,804,498 | Visualization and storage algorithms associated with processing point cloud data |
| 2 | 7,746,341 | System and method for parsing point-cloud data |
| 3 | 7,420,555 | Method and apparatus for transforming point cloud data to volumetric data |
| 4 | 7,317,456 | Method and apparatus for transforming point cloud data to volumetric data |
| 5 | 6,608,913 | Self-contained mapping and positioning system utilizing point cloud data |
| 6 | 6,377,865 | Methods of generating three-dimensional digital models of objects by wrapping point cloud data points |


----------



## Kreij (Dec 19, 2010)

I'm all for advances in new tech (especially algorithms for faster processing), and I applaud you gents for wanting to push this tech... but what makes you think the big companies don't already know about this technology? It has, after all, been around for a while.

Just an honest question ... not being a nay-sayer.


----------



## scaminatrix (Dec 19, 2010)

inferKNOX said:


> @scaminatrix: what do you mean by JF-AMD? Lol, think about it though, how do you ask AMD for an insider email address without having any email address to ask it from?



JF-AMD is a member here, he's a rep for the server side of AMD. He floats around TPU clearing up misconceptions and alleviating confusion lol


----------



## remixedcat (Dec 19, 2010)

People contacting companies: use the contact forms on their websites, and use a professional email address in the input field. They will likely ignore @yahoo, @hotmail, @gmail, etc. Try to find a free provider that is relatively unknown. 

These kinds of companies get trolled a lot, so they have put up defenses. I've legitimately tried to contact Emergent Game Technologies about pricing and they flat out refused to reply to me. I've had zero luck with them.

Contacting Havok REQUIRES a company email. THEY WILL NOT RESPOND TO ANY FREE EMAIL ADDRESSES. It must be a game-company domain email like r.cat@ea.com or something like that.

Most game-related companies will have hurdles like this to overcome. 

Try to get hold of a professional email address.


----------



## inferKNOX (Dec 19, 2010)

@streetfighter 2: Sorry, I'm not too... ah... patent conscious. Could you try searching for Euclideon and see if anything turns up? According to euclideon.com, it seems they're based in Australia; think this could be the reason for the lack of patents/support?

@Kreij: Not doubting their awareness so much as wanting them to give some support and/or explore viability. Basically, I don't want them to dump it just because it will indeed deliver and they're scared of what that may mean.

@scaminatrix: ah, I see.


----------



## scaminatrix (Dec 19, 2010)

streetfighter 2 said:


> EDIT:
> I searched the USPTO for patents written by Bruce Robert Dell, without success.
> Searching for patents with "point cloud data" in the title yielded some results (the most recent of which was filed in 200*6*):



Fix'd!
This is what I was looking for. It seems very likely that this is going to be infringing on someone else's patent, then. Shame.


----------



## inferKNOX (Dec 19, 2010)

remixedcat said:


> people contacting companies: Use the contact forms on thier websites. use a professional email address in the input field. they will likely ignore @yahoo @hotmail @gmail , etc. Try to find a free provider that is relatively unknown.
> 
> These kinda companies get trolled a lot. They have put up defenses. I've legitimately tried to contact Emergent Game Technologies about pricing and they flat out refused to reply to me. I've had zero luck with them.
> 
> ...


Or... put your name down here: http://www.ipetitions.com/petition/amd_support_udtech/
(or both)


----------



## crazyeyesreaper (Dec 19, 2010)

Doesn't matter how good this is, and here's why: almost anything done in 3D is still, at its root, a polygon, whether it's a triangle or a quad. Some of the tech in this is used at the END of the pipeline. To be blunt, this won't cut development time on games or 3D in general; for the most part it adds another step. Sure, it's possible it can give a better image, better visuals; that's one of its strong points. The sad fact is, though, that before you get those points the models still need to be made, and just about every $1,000-$50,000 piece of software in the entertainment industry uses one of three things to represent a mesh.

Subdivision surfaces are a semi-combination of NURBS and polygons: never perfectly flat, but not as natural as a NURBS surface. They function like polygons but allow select areas to be highly detailed while others stay sparse; a knuckle on a finger could carry 100k+ polygons to show wrinkles while the rest of the hand might only need 10k for proper rounding. It's a go-between that's seldom used.

NURBS surfaces are always curved in some way; if it curves, it never has a straight edge. These are the closest a 3D app gets to real life in terms of low system usage for a high level of base detail, but they don't animate as well as polygons and are far more complex to use. In general they're the best way to represent a real-life object, but their drawbacks make them extremely limited.

Polygons: we all know these. They come in many forms (triangles, quads, n-gons) and they're what actually gets rendered; fact is, at render time every 3D app ever made simplifies the above methods down to triangles.

This applies to UDT as well: any 3D content fed into it still has to be created with polygons, NURBS, etc. This is nothing new; final renders for lighting in Maya already do something similar, but the meshes themselves are still triangulated. 

*for those that hate walls of text read below*

This tech is nothing new, won't affect games, and probably won't see the light of day anytime soon. It's basically a pipe dream unless every major tech giant agrees to promote it: Intel, AMD, Nvidia, Microsoft, Apple, Autodesk, Pixologic, Adobe, yadda yadda.


----------



## inferKNOX (Dec 19, 2010)

@crazyeyes: then let's get the ball rolling and have the giants recognise it. The construction of 3D models might not be faster, and this may be a style already in use (in all honesty I don't know), but surely it has never been the case that such vast quantities of geometry, polygons or otherwise, could be rendered with the minimal load claimed by the maker of UDT, or we'd be in graphics nirvana by now.

Edit: The petition sigs are coming in. Thanks guys, keep them coming, put a link in your sig, digg the petition page, like it on Facebook, anything to spread the word and get the numbers!


----------



## remixedcat (Dec 19, 2010)

Great, InferKNOX!!!! Let's get this moving and a shakin!


----------



## Marineborn (Dec 19, 2010)

I'll make the burgers... let's rock n roll


----------



## qubit (Dec 19, 2010)

Yeah, I'm up for this. Let's roll!


----------



## inferKNOX (Dec 19, 2010)

for all of you on digg, simply go to digg.com and search for "petition amd" and you will see it and you can digg it!
EDIT: there you go: http://digg.com/news/technology/petition_for_amd_to_support_unlimited_detail_technology_development


----------



## streetfighter 2 (Dec 20, 2010)

scaminatrix said:


> Fix'd!
> This is what I was looking for. It seems very likely that this is going be infringing on someone elses patent then. Shame.


I believe you're referring to _System and method for parsing point-cloud data_, invented by Chang et al.  That patent only pertains to parsing point cloud data, not culling point cloud data to map an x*y*z space into an x*y viewport.

After skimming some of the patents I think that the newest one is:
_Method and apparatus for transforming point cloud data to volumetric data_ invented by Lee in 2007.  Based on the abstract it appears to be a method of mapping point cloud data from one x*y*z space into another x*y*z space.  (There are two such patents and I didn't bother to see why.)

The patent that looks the most like the one we're looking for is from 2005 and is _Visualization and storage algorithms associated with processing point cloud data_ invented by Graham, et al.

The other patents I listed are methods of creating point cloud data models of 3D objects.

We can't be sure that Unlimited Detail Technology will be infringing any patents without more info.



crazyeyesreaper said:


> This tech is nothing new wont effect games and probably wont see the light of day anytime soon its basically a pipedream unless every major tech giant agrees to promote it aka Intel Amd Nvidia Microsoft Apple Autodesk, Pixologic, Adobe, yadda yadda


As you said, the lack of software for designing/producing point cloud data models is clearly the biggest known impediment to this technology (but there are lots of unknowns).

On the other hand I don't really see how this technology needs direct support from AMD and Nvidia at this stage.  I'm assuming that Unlimited Detail Technology could write a rendering engine using their unique point cloud algorithm in OpenCL.



inferKNOX said:


> @streetfighter 2: Sorry, I'm not too... ah... patent concious. Could you try search Euclideon and see if anything turns up? According to euclideon.com, it seems that they're based in Australia, think this could be the reason for lack of patents/support?


I tried searching for Euclideon and nothing related came up.  The fact that they are Australian wouldn't prevent their patents from being searchable unless they didn't pay the extra few hundred dollars to make their patent international.


----------



## Bjorn_Of_Iceland (Dec 20, 2010)

arroyo said:


> Probably it's just ray casting rendering, the same as in Outcast game.


afaik, outcast used voxel space rendering..


----------



## inferKNOX (Dec 20, 2010)

streetfighter 2 said:


> I believe you're referring to _System and method for parsing point-cloud data_ invented by Chang, Et al.  That patent only pertains to parsing point cloud data, not culling point cloud data for mapping an x*y*z space into an x*y viewport.
> 
> After skimming some of the patents I think that the newest one is:
> _Method and apparatus for transforming point cloud data to volumetric data_ invented by Lee in 2007.  Based on the abstract it appears to be a method of mapping point cloud data from one x*y*z space into another x*y*z space.  (There are two such patents and I didn't bother to see why.)
> ...



As far as I can see, those point cloud patents seem quite distinct in description from what UDTech sets out to achieve, so I don't think it'd be infringing at this point.
My point about them being Australian is that they may feel less concerned about patenting at the moment, or believe themselves unique enough (i.e. difficult to describe) not to feel the need for a patent until their technology is complete or near completion.


----------



## AphexDreamer (Dec 20, 2010)

Petition Signed.

Imagine Minecraft with Unlimited Detail Technology... aoeirjaoijgoejgkjsego....


----------



## FordGT90Concept (Dec 20, 2010)

I think it would be a bad investment for AMD when they can't afford bad investments (unreleased processor architectures, weak economy, etc.).  As such, I can't sign the petition.


----------



## AphexDreamer (Dec 20, 2010)

FordGT90Concept said:


> I think it would be a bad investment for AMD when they can't afford bad investments (unreleased processor architectures, weak economy, etc.).  As such, I can't sign the petition.



Risky yes, bad maybe.


----------



## inferKNOX (Dec 20, 2010)

FordGT90Concept said:


> I think it would be a bad investment for AMD when they can't afford bad investments (unreleased processor architectures, weak economy, etc.).  As such, I can't sign the petition.



I honestly don't think AMD stands to suffer anything as long as they integrate with it in an intelligent manner.

I've been in contact with Bruce Dell (the CEO in charge of UDTech) and he is very happy with our effort to promote support of UD.
I don't know quite how much of what he said he'd be comfortable with me sharing, so let me just say that this effort is in the right direction and could make a significant difference in helping make the technology a reality for us all.
He also said that development has been progressing well and they have financial stability, so what we're out to achieve with this campaign is integration and acceptance in the industry.


----------



## Easy Rhino (Dec 20, 2010)

sorry but those graphics look terrible. and the company behind this needs to fix up their website. it looks like a teenager put it together while high on marijuana.


----------



## pantherx12 (Dec 20, 2010)

Easy Rhino said:


> sorry but those graphics look terrible. and the company behind this needs to fix up their website. it looks like a teenager put it together while high on marijuana.



As the description of the video states, they are not artists, they are software developers; you're lucky they even managed to make what they did.


----------



## Easy Rhino (Dec 20, 2010)

pantherx12 said:


> As the description of the video states, they are not artists they are software developers, your lucky they even managed to make what they did.



sorry, this just looks like snake oil to me and the poor graphics don't help. if this was a serious venture he would have spent some money actually putting everything together (including his terrible web site) before revealing it to the world. i am reading a lot of technical arguments calling this rubbish and i have to agree. and the term "unlimited detail" is impossible since we don't have "unlimited" processing power. but i don't want to rehash old arguments. this post from this website really sums it up.

http://www.rockpapershotgun.com/2010/03/10/unlimited-detail-wants-to-kill-3d-cards/



> I call bullshit too.
> 
> Someone above said that you obviously can’t store unlimited points in a computer. No, no you cannot.
> 
> ...


----------



## pantherx12 (Dec 20, 2010)

The point has been missed, my friend: it's NOT unlimited, it only displays the information that is required. For example, on my monitor that would be 1680x1050 "points".

Anything that I wouldn't be able to see is not rendered at all.

(This means it adaptively changes the point data, so as you draw closer to an object you can get as close as possible and the shape/texture is not reduced as in current rendering methods. It can do this because everything else that was viewable before you moved close is now not being rendered, i.e. you could zoom in and see the texture of a fabric with vastly superior quality and realism compared to current methods, or zoom in on someone's face and actually see the pores, etc.)



Fucking clever as bullshit and MORE than possible.


----------



## Easy Rhino (Dec 20, 2010)

pantherx12 said:


> The point has been missed my friend, the point is its NOT unlimited, it only displays the informations that is required, for example on my monitor it would be 1680x1050 "points"
> 
> Anything that I wouldn't be able to see is not rendered at all.
> 
> ...



well, considering this came out of nowhere in March and there has not been a single update to their site nor any additional information reported in the media, i would say this genuinely is a hoax.


----------



## pantherx12 (Dec 20, 2010)

Easy Rhino said:


> well considering this came out of no where in March and there has not been a single update to their site nor any additional information reported in the media i would say this genuinely is a hoax.



Could be, won't deny that at all 

Just saying the tech is more doable than a lot of people seem to think.

My GPU can push 60+ fps with 1 million particles on screen, so even if this was done with coloured/shaded particles, current GPUs are not far off using particles to draw things.

(If ATI GPUs supported GL interop I'd probably be above 200fps.)

And this system doesn't even use real particles, so it would be even better.


I would need 1,764,000 particles or points to have a solid image on my monitor; not that farfetched at all : ]
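
The arithmetic above is easy to sanity-check. This is only back-of-the-envelope math (not anything from Euclideon); the resolutions are just examples:

```python
# Back-of-the-envelope: if the renderer only ever resolves one point per
# screen pixel (the argument above), the per-frame "point budget" is
# just the pixel count, independent of scene complexity.

def points_per_frame(width: int, height: int) -> int:
    """One visible point per pixel."""
    return width * height

def points_per_second(width: int, height: int, fps: int = 60) -> int:
    """Point lookups per second at a given frame rate."""
    return points_per_frame(width, height) * fps

for w, h in [(1680, 1050), (1920, 1200)]:
    print(f"{w}x{h}: {points_per_frame(w, h):,} points/frame, "
          f"{points_per_second(w, h):,} lookups/s at 60 fps")
```

At 1680x1050 that is 1,764,000 points per frame, matching the figure above; the open question is only how fast each "which point is visible here?" lookup can be.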


----------



## The Witcher (Dec 20, 2010)

The speaker is ALISTAIR !!! from Dragon Age: Origins !!

Anyway, if this is true, I've a feeling this guy will be dead very soon, because "some people" will lose their business plus billions of dollars.

I'd expect to see this technology in the real world 10 or 15 years from now... good stuff like this always takes ages to hit the consumer market....


----------



## Steevo (Dec 20, 2010)

1,764,000 points at 32-bit colour is roughly 7,059,200 bytes of information for a single one-pixel-deep layer. Assume a standard 22" screen at 1680x1050, where a pixel is about 0.282 mm across.

Call pixel depth the same as pixel height, 0.282 mm: every layer of depth you render at that precision costs you another ~7 MB.

Say you want to render 2,000 pixels deep. That's 14,118,400,000 bytes to see 56.4 cm (about 1.85 feet) into the game. Great for a game where you have a flashlight and its batteries are dying; bad for games where you want a real-world render.

And that is just one frame: no physics, no AA, no object scaling, no anisotropic filtering.

At 60 frames per second, each at a newly calculated depth, that's 847,104,000,000 bytes.

The problem with this is that EVERY pixel must be mapped. Point clouds are used for high-detail single-image objects because they do give high detail and accurate rendering on a single predefined machine; it's a horrible idea for any sort of 3D gaming or real-time rendering.
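
The estimate above describes a dense, brute-force volume: one stored colour for every pixel-sized cell through the whole visible depth. A quick sketch of that arithmetic, with the assumptions (4 bytes per point, 2,000 layers, 0.282 mm pixel pitch) made explicit:

```python
# Dense-volume storage estimate: store a colour value for every
# pixel-sized point through the visible depth. The constants below are
# the post's assumptions, not measured figures.

PIXEL_MM = 0.282            # pixel pitch on a 22" 1680x1050 screen
W, H = 1680, 1050
BYTES_PER_POINT = 4         # 32-bit colour; position implied by grid index
DEPTH_LAYERS = 2000

points_per_layer = W * H                          # 1,764,000
bytes_per_layer = points_per_layer * BYTES_PER_POINT
bytes_per_frame = bytes_per_layer * DEPTH_LAYERS
visible_depth_cm = DEPTH_LAYERS * PIXEL_MM / 10   # how far you can "see"

print(f"per layer : {bytes_per_layer:,} bytes")
print(f"per frame : {bytes_per_frame / 1e9:.1f} GB "
      f"(to see {visible_depth_cm:.1f} cm into the scene)")
print(f"at 60 fps : {bytes_per_frame * 60 / 1e12:.2f} TB/s")
```

The totals land within rounding of the figures in the post. Worth noting, though, that this is the worst case: sparse structures such as octrees exist precisely so that the empty space between surfaces is never stored.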


----------



## inferKNOX (Dec 20, 2010)

Easy Rhino said:


> well considering this came out of no where in March and there has not been a single update to their site nor any additional information reported in the media i would say this genuinely is a hoax.



It is possible, it always is, but at this point Bruce isn't asking for any money, so I'm not too quick to jump to that conclusion unless that changes and it starts to seem suspicious.
It is quiet, but I'm told there is method to the madness and cogs are turning behind the scenes.
For an update, look at the press release on their official page: http://euclideon.com/
The point of the petition is to have AMD work with UD and look deeper into the viability of the tech, not to finance/buy it, so I don't think there's reason to hesitate in putting your signature down. Besides, I'd like to believe that AMD would detect a fraudulent graphics tech and dump it if that were the case.

So please, join our petition.


----------



## Easy Rhino (Dec 20, 2010)

sorry but i will not sign that petition until i see this thing in action on its own. if it really was a viable technology then intel/amd/nvidia/via would all be spending their own money looking into its development. i don't even think these guys have formed their own company yet. i mean, if you truly believe there is something behind a revolutionary technology and you have the brains to bring it to market, then the first thing you do is form a company and file a patent so you have legal protection.


----------



## inferKNOX (Dec 20, 2010)

Easy Rhino said:


> sorry but i will not sign that petition until i see this thing in action on its own. if it really was a viable technology then intel/amd/nvidia/via would all be spending their own money looking into its development. i dont even think these guys formed their own company yet. i mean, if you truly believe there is something behind a revolutionary technology and you have the brains to bring it to market then the first thing you do is form a company and file a patent so you have legal protection.



But they do have a company; haven't you followed the link?
As for the patent, it takes time to have one registered, right? Could it not be in the pipeline?
Well, demos are promised, so I hope your skepticism will be appeased then.


----------



## erocker (Dec 20, 2010)

inferKNOX said:


> But they do have a company, haven't you followed the link?
> As for the patent, it takes time to have one... registered right? Could it not be in the pipeline?
> Well, demos are promised, so I hope your skepticism will be appeased then.



5 years at the earliest. Even so, companies seem more interested in ray tracing as the way forward. Graphics engines/software/hardware are planned out years in advance; if UDT isn't a part of that, it's going to take a long, long time, as all these companies have their money in something else for the future.


----------



## crazyeyesreaper (Dec 20, 2010)

Exactly, and even ray tracing might get pushed aside for photon mapping; no one can know for sure, but time will tell. Even then, this tech won't be around anytime soon. It's kind of a lost cause.


----------



## inferKNOX (Dec 20, 2010)

erocker said:


> Graphics engines/software/hardware are planned out years in advance, if UDT isn't a part of that, it's going to take a long, long time as all these companies have their money into something else for the future.



Doesn't that make it all the more worthwhile to petition for it to be looked at, then, so that if it is viable, it's recognised and work towards its standardisation commences sooner?


----------



## crazyeyesreaper (Dec 20, 2010)

lol if you want people to take this seriously make Pixar do a 3d movie with the tech then it might get looked at a bit more quickly lolz


----------



## erocker (Dec 20, 2010)

inferKNOX said:


> Doesn't that make it all the more worthwhile to petition for it to be looked at then, so that if it is viable, it's recognised and work commences towards it's standardisation sooner?



Not really. In the end it's up to the powers that be if they want to put their money there. They definitely know about this tech already. It doesn't hurt though.


----------



## Easy Rhino (Dec 20, 2010)

i don't know what's worse, their old website or their new website. and one press release FTL


----------



## inferKNOX (Dec 20, 2010)

crazyeyesreaper said:


> lol if you want people to take this seriously make Pixar do a 3d movie with the tech then it might get looked at a bit more quickly lolz



According to Bruce, the tech is incomplete and in development, so I don't know if it's possible to implement at this point.



erocker said:


> Not really. *In the end it's up to the powers that be if they want to put their money there*. They definitely know about this tech already. It doesn't hurt though.



The point of the petition is not to have the big players put money into UD, but to have AMD work with them to test viability and push standardisation if it holds up.


----------



## Benetanegia (Dec 20, 2010)

crazyeyesreaper said:


> lol if you want people to take this seriously make Pixar do a 3d movie with the tech then it might get looked at a bit more quickly lolz



http://features.cgsociety.org/story_custom.php?story_id=5615&page=1

Pixar has used point cloud rendering, to an extent. Read.

I don't know what to think of this Unlimited Detail tech. The idea of the renderer seems kinda plausible, and the supposed implementation of their specific point-based rendering method seems plausible, but like people above mentioned, you just can't have unlimited (or really huge) point data, just like you can't have unlimited vertex data. In fact, I think point data can only be much worse than existing methods in detail-per-memory terms. Vertices/polygons are an approximation of the "real" surface; NURBS and patches are also approximations, with higher detail but slower and with their own drawbacks. The "real" thing will be much, much bigger than the approximations. No matter how intelligent the "search engine" is in using only the 1680x1050 or 1920x1200 (or whatever) points that are required, the raw point data of the surfaces is going to be HUGE. Just think about walls: 4 vertices versus how many points?

Point-based rendering is indeed used in off-line rendering and has many advantages in the areas where it's used (movies), where storage is limitless and rendering/production time is limited (all relative to the timeline and level of detail). Real-time rendering is practically the opposite: storage is limited.

Then there's animation...

I'll remain skeptical until I see a real-time demo from them.

All in all, I think modern real-time ray-tracers, which use a data structure similar to the one UDT mentions but with polys instead of points, have a better chance of becoming the future. Carmack has already said that such ways of handling data are not limited to ray-tracing (or, in this case, point rendering); they can be used in rasterizers, and he is in fact working on them for id Tech 6. Accordingly, the "search engine" UDT mentions could very well be implemented with polygons. Ultimately, until we have unlimited CPU/GPU power, a polygon will always be better than points for real-time, IMO.
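
The wall example is easy to put numbers on. The byte sizes below are illustrative assumptions (a typical position+normal+UV vertex, a position+colour point), not anything from UDT:

```python
# A flat 2 m x 3 m wall as polygons is just 4 vertices; as a raw point
# cloud it needs one point per sample on the surface. Sizes are
# illustrative assumptions.

VERTEX_BYTES = 12 + 12 + 8      # position + normal + UV, 32-bit floats
POINT_BYTES = 12 + 3            # position + 24-bit colour per point

def wall_as_polygons() -> int:
    """One textured quad; the texture itself is stored separately."""
    return 4 * VERTEX_BYTES

def wall_as_points(width_m: float, height_m: float, spacing_mm: float) -> int:
    """Raw point cloud sampled at a fixed spacing over the surface."""
    per_m = 1000 / spacing_mm
    n = int(width_m * per_m) * int(height_m * per_m)
    return n * POINT_BYTES

poly = wall_as_polygons()
cloud = wall_as_points(2.0, 3.0, spacing_mm=1.0)   # 1 mm sample spacing
print(f"polygons: {poly} bytes, point cloud: {cloud:,} bytes "
      f"({cloud // poly:,}x larger)")
```

The polygon version still needs a texture to look like a wall, which narrows the gap, but the basic asymmetry (geometry described analytically versus sampled exhaustively) is what the post is pointing at.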


----------



## inferKNOX (Dec 20, 2010)

Benetanegia said:


> http://features.cgsociety.org/story_custom.php?story_id=5615&page=1
> 
> Pixar has used point cloud rendering, to an extent. Read.
> 
> ...



If you're unsure, then it's worth signing the petition. Why? Because then AMD can prove or disprove it and we can be in ecstasy or at ease.
Granted, I don't believe the idea of it being truly unlimited myself, but compared to current methods it's relatively unlimited. Even if it's limited to 1000x current geometry, isn't that worth it? Even if it's 100x? I think we have far more to potentially gain than lose from looking into this, and the same goes for AMD.

I too am quite skeptical despite my tone throughout the thread, but I'm just considering the benefit-to-risk ratio, which definitely leans towards benefit in a huge way.


----------



## pantherx12 (Dec 20, 2010)

Even if it's 4x, it's still a massive jump.


I can see this being used in combination with traditional methods (i.e. like how tessellation works: as you get close to an object, point cloud takes over, so you can have super-detailed objects?)


----------



## erocker (Dec 20, 2010)

inferKNOX said:


> If you're unsure, then it's worth signing the petition. Why? Because then AMD can prove or disprove it and we can be in ecstasy or at ease.
> Definitely, I don't believe the idea of it being unlimited myself, but if you compare it to current methods, it's relatively unlimited. Even if it is limited to 1000x current geometry, isn't that worth it? Even it's it's 100x? I think we have far more to potentially gain than lose from looking into this, and same goes for AMD.
> 
> I too am quite skeptical despite my tone throughout the thread, but I'm just considering the benefit to risk ratio, in which case it definitely leans towards benefit in a huge way.



It's still going to cost AMD money. Money they don't have. Bene has a good point. We need a demo.. Some hard proof that this indeed can work.


----------



## Benetanegia (Dec 20, 2010)

inferKNOX said:


> If you're unsure, then it's worth signing the petition. Why? Because then AMD can prove or disprove it and we can be in ecstasy or at ease.
> Definitely, I don't believe the idea of it being unlimited myself, but if you compare it to current methods, it's relatively unlimited. Even if it is limited to 1000x current geometry, isn't that worth it? Even it's it's 100x? I think we have far more to potentially gain that lose from looking into this, and same goes for AMD.
> 
> I too am quite skeptical despite my tone throughout the thread, but I'm just considering the benefit to risk ratio, in which case it definitely leans towards benefit in a huge way.



You didn't understand what I meant. IMO it's as limited, if not more so, than current implementations. Maybe not on the real-time rendering side of things (skeptical until I see something), but a game world created in the manner they describe (even with replication) would occupy terabytes if not petabytes. Like I said, the areas in which point clouds are used (i.e. medical imaging, movie creation) don't have any problem with data size, since they don't have to care about distribution, bandwidth limitations, etc. Unlike games.


----------



## inferKNOX (Dec 20, 2010)

Yeah, I see all of your points. A demo is essential to make the talk more plausible. It is indeed hard to imagine just what has been done to achieve this, which brings the whole thing into question. Our failure to understand it shouldn't stop us encouraging those who will, though, and if AMD gets in on it, that should happen.

AMD is financially low, yes. However, it should not be a significant expense to validate the claim and cut it loose if it's false, so I still see petitioning them as a positive move; don't you agree?


----------



## Thatguy (Dec 20, 2010)

Benetanegia said:


> You didn't understand what I meant. IMO it's as limited if not more than current implementations. Maybe not on the real time rendering side of things (skeptic until I see something), but a game world created in the manner described by them (even with replication) would occupy Terabytes if not Petabytes. Like I said the areas in which point clouds are used (i.e medical imaging, movie creation) do not have any problem with data size, since they don't have to care about distribution, bandwidth limitations, etc. Unlike games.



That's not entirely true. Using certain types of data compression could reduce the size of the object immensely; certain encoding algorithms come to mind here. If we had a fast way to place, decompress, discover and then render those points, it could become very doable. The big thing this new IDEA looks to do is use a search algorithm over the camera angles to determine which points need rendering. I agree that storing that much point cloud data would be difficult; petabytes might be an understatement. Maybe he figured out a better way to compress the point maps, which is very believable. 

Which on its face looks like a great way to conserve computing power. 

I don't know. They don't give enough detail about how the tech works to get a good grasp on what they're selling.
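
On the compression point: point clouds are usually held in sparse hierarchical structures rather than compressed flat arrays, because most of a scene volume is empty. A toy sketch (a hypothetical minimal octree occupancy count, not Euclideon's format) of how few cells are actually occupied when the geometry is a surface inside a volume:

```python
# Sketch of why point clouds compress well: a sparse octree only stores
# occupied cells. Each level halves the cell size; a child exists only
# where points do. This counts occupancy, not a full renderer.

def sparse_octree_nodes(points, depth: int, world: float = 1024.0):
    """Count occupied cells per level for points in a [0, world)^3 cube."""
    levels = []
    for d in range(1, depth + 1):
        cell = world / (2 ** d)
        occupied = {(int(x / cell), int(y / cell), int(z / cell))
                    for x, y, z in points}
        levels.append(len(occupied))
    return levels

# A "wall": 10,000 points all lying on one plane of the cube.
wall = [(float(i % 100) * 10, float(i // 100) * 10, 0.0)
        for i in range(10_000)]

nodes = sparse_octree_nodes(wall, depth=5)
raw_cells = [(2 ** d) ** 3 for d in range(1, 6)]
for d, (n, total) in enumerate(zip(nodes, raw_cells), start=1):
    print(f"level {d}: {n:>5} occupied of {total:>9} cells")
```

Because the points lie on a surface, occupied cells grow roughly with the square of the resolution while total cells grow with the cube, which is the gap a sparse structure exploits.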


----------



## qamulek (Dec 20, 2010)

I want a benchmark where I can vary the amount of points used in the models, to graph performance vs. the number of points on screen.  The reason is to test the claim that the number of points doesn't affect performance (much) with this UDT.  It would be interesting to see performance stay roughly constant while increasing the number of points, or possibly zooming out to see a bunch of models, then zooming in to one very detailed model, then zooming so close that the skin becomes a canyon-like landscape.
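
A harness for that benchmark could look like the sketch below. UDT's actual search is unknown, so a binary search over a sorted index stands in for "find the visible point for this pixel", purely to illustrate how an O(log n) per-pixel search keeps frame cost nearly flat as the model grows:

```python
# Hold the screen size fixed, grow the model, and time one "find the
# visible point" query per pixel. bisect over a sorted list is a
# stand-in for whatever search UDT actually performs.

import bisect
import random
import time

def render_time(model_size: int, pixels: int = 10_000) -> float:
    """Seconds to do one index lookup per pixel against a model."""
    index = sorted(random.random() for _ in range(model_size))
    t0 = time.perf_counter()
    for _ in range(pixels):
        bisect.bisect_left(index, random.random())  # one lookup per pixel
    return time.perf_counter() - t0

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9,} points: {render_time(n) * 1000:.2f} ms per frame")
```

If the claim holds, the times barely move as the point count goes up 100x; a linear-scan renderer run through the same harness would show the difference immediately.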


----------



## remixedcat (Dec 20, 2010)

Storage is getting cheaper. GPUs aren't.


----------



## Benetanegia (Dec 20, 2010)

Thatguy said:


> thats not entirely ture. Using certain types of data compression could reduce the size of the object immensly. certain encryption algorythms come to mind here. If we had a fast way to place, decrompress, discover and then renders those points. It could become very doable. the big thing this new IDEA looks to do is use a search algorythm for the camera angles to determine which points need rendering. I agree storing that much cloud point data would be difficult. Petabytes might be a understatement. Maybe he figured out a better way to compress the point maps, which is very beliveable.
> 
> 
> Which on its face looks like a great way to conserve computer power.
> ...



Yes, but my point is that any algorithm or compression technique that could benefit point cloud storage would be just as beneficial to vertex data storage and transmision. At least with current computers, you don't want to compress everything too much anyway (beyond the level that's alredy compressed I mean).

The only really big thing about this tech is the search engine without a doubt, if it really does what they say it does and in the way it does it, and I think that it could be implemented just as well to polys too. Like I said Carmack is supposedly working on something similar for rasterization, I think based in the sparse voxel octree ray-tracing that so many people are working on, which btw does something similar to what this "search engine" is supposed to do, but again based on polys. So far the sparse voxel octree implementations have been promising but have failed to completely deliver on their promise, and they have never been even close to the claim made here, both in the ammount of data which can be represented in real time at any given time, nor in the performance that is achieved. 

And to me that's the problem. I am not an expert by any means, but I do read a lot and try to learn about these things as much as I can, and what UDT is promising sounds just like time travel. The fact that they have had videos for almost 2 years, yet still don't have even the most simplistic demo, does not make believing their claim any easier.



qamulek said:


> I want a benchmark where I can vary the number of points used in the models and graph performance against the number of points on screen.  The reason is to test the claim that the number of points doesn't affect performance (that much) under this UDT.  It would be interesting to see performance stay roughly constant while increasing the number of points, or to zoom out to see a bunch of models, then zoom in to one very detailed model, then zoom so close that the skin becomes a canyon-like landscape.



Ray-tracing is pretty much independent of the amount of geometry used.
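To illustrate why (a hedged sketch, not UDT's actual method): with a hierarchical structure like the sparse voxel octrees mentioned above, locating the data under a query point costs roughly the tree depth, which grows logarithmically with the point count. A toy Python octree makes the scaling visible:

```python
import random

class Octree:
    """Minimal sparse octree over the unit cube: subdivide until each
    occupied leaf holds at most one point (or max_depth is reached)."""
    def __init__(self, points, origin=(0.0, 0.0, 0.0), size=1.0, max_depth=12):
        self.origin, self.size = origin, size
        self.children = None
        self.points = points
        if len(points) > 1 and max_depth > 0:
            half = size / 2
            buckets = {}
            for p in points:
                key = tuple(int(p[i] >= origin[i] + half) for i in range(3))
                buckets.setdefault(key, []).append(p)
            self.children = {
                key: Octree(pts,
                            tuple(origin[i] + key[i] * half for i in range(3)),
                            half, max_depth - 1)
                for key, pts in buckets.items()}
            self.points = None

    def locate(self, p):
        """Descend to the leaf containing p; return how many nodes we visit."""
        node, visited = self, 1
        while node.children is not None:
            half = node.size / 2
            key = tuple(int(p[i] >= node.origin[i] + half) for i in range(3))
            if key not in node.children:
                break
            node = node.children[key]
            visited += 1
        return visited

for n in (1_000, 10_000, 100_000):
    pts = [tuple(random.random() for _ in range(3)) for _ in range(n)]
    tree = Octree(pts)
    avg = sum(tree.locate(p) for p in random.sample(pts, 200)) / 200
    print(f"{n:>7} points -> avg {avg:.1f} nodes visited per query")
```

The point count grows 100x while the nodes visited per query grow only by a few, which is the intuition behind "geometry amount barely matters" for traced renderers.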


----------



## Thatguy (Dec 21, 2010)

Benetanegia said:


> Yes, but my point is that any algorithm or compression technique that could benefit point cloud storage would be just as beneficial to vertex data storage and transmission. At least with current computers, you don't want to compress everything too much anyway (beyond the level it's already compressed, I mean).
> 
> The only really big thing about this tech is without a doubt the search engine, if it really does what they say in the way they say, and I think it could be implemented just as well for polys too. Like I said, Carmack is supposedly working on something similar for rasterization, I think based on the sparse voxel octree ray-tracing that so many people are working on, which btw does something similar to what this "search engine" is supposed to do, but again based on polys. So far the sparse voxel octree implementations have been promising but have failed to completely deliver, and they have never been even close to the claim made here, neither in the amount of data which can be represented in real time at any given moment, nor in the performance that is achieved.
> 
> ...



Well, actually, you could in theory use a folding algorithm to compress the data without too much compute penalty, greatly reducing the number of data points to be stored. Sort of like a bitmap with depth information buried in it.

There's a few ways I could see accomplishing this, but none of them come without some overhead penalties, and certainly not in software alone. It did get me thinking in some new directions on some problems I have been trying to solve on my own, even if they don't directly apply.
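One literal reading of "a bitmap with depth information" above is a projected depth image: store one quantised depth per pixel and let the pixel's position encode x and y implicitly. A toy sketch (hypothetical, and it assumes coordinates are already normalised to [0, 1)):

```python
import struct

def to_depth_map(points, width=256, height=256):
    """Project points into a fixed pixel grid, keeping one 16-bit depth per
    pixel; x, y, z are assumed normalised to [0, 1). 0 means 'empty pixel'."""
    depth = [0] * (width * height)
    for x, y, z in points:
        idx = int(y * height) * width + int(x * width)
        q = 1 + int(z * 65534)        # quantise depth into 1..65535
        if depth[idx] == 0 or q < depth[idx]:
            depth[idx] = q            # keep only the nearest sample
    return struct.pack(f"<{width * height}H", *depth)

points = [((i % 100) / 100.0, (i // 100) / 100.0, (i % 997) / 997.0)
          for i in range(10_000)]
raw_bytes = len(points) * 3 * 4       # three 32-bit floats per point
packed = to_depth_map(points, 64, 64)
print(f"raw xyz: {raw_bytes} bytes, depth map: {len(packed)} bytes")
```

The trade-off is visible in the code: the size is bounded by the image resolution rather than the point count, but everything hidden behind the nearest sample at each pixel is thrown away.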


----------



## qubit (Dec 21, 2010)

I agree with those that are saying this is snake oil - a con.

I saw something similar a while back for extreme data compression (I forget the name of the company now). It was reported (skeptically) by New Scientist at the time. The company were going to exhibit at some science/computer show and "reveal all" there. Sure.

They mysteriously pulled out just before the show and were never heard of again. What these charlatans were trying to achieve is beyond me. It's like the ultimate troll, isn't it?


----------



## crazyeyesreaper (Dec 21, 2010)

And true, this has been used in offline rendering, but never as a pure point cloud data system; it has always relied on something else, it's an end-of-the-pipeline situation. Example: they use highly complex models which border on millions of polygons, but as I stated earlier, even if it's point cloud data that's rendered, it's still all done in polygons first, or NURBS, or what have you. Basically no matter what this tech does or doesn't do, or what it becomes, polygons will never be replaced because they're easy to calculate mathematically. Even now in 3D apps I'm not limited by GPU power or CPU power, I'm limited by RAM, VRAM and HDD space and speeds. ZBrush uses a 2.5D system but can get into the billions of polygons and be 100% fluid to work in. In Mudbox, with only 4 gigs of RAM and 1 gig of VRAM, I can work with up to 50 million polygons in real time with occlusion shadows, HDR and tonemapping. Point cloud data is nothing more than a pipe dream, an interesting side note until we have the power of IBM's Blue Gene in our living rooms.


----------



## inferKNOX (Dec 21, 2010)

Thatguy said:


> well actually you could in thoery use a folding algorythm to compress the data without to much computer penalty greatly reducing the amount of data points to be stored. *Sort of like a bitmap with depth information buried in it.*
> 
> Theres a few ways I could see accomplishing this. But none of them come without some overhead penaltys and certainly not in software alone. It did get me thinking in some new directions on some problems I have been trying to solve on my own, even if they don;t directly apply.



Bruce actually spoke of bitmaps, but he was talking about improvements in scalability vs polygons, removing the need to remake the graphics when going from a high to a low polygon model.

@qubit: If we sign the petition and it is indeed snake oil, AMD would quickly identify that and put this whole argument to rest. So still, reason remains to sign the petition.


----------



## crazyeyesreaper (Dec 21, 2010)

And if anyone actually paid attention to the Heaven bench, tessellation can actually provide that right now; it's just due to consoles and the limited number of high-end GPUs in the PC segment that we all get to wait. Fact is tessellation can be applied to entire scenes, and with proper methods it would eliminate the level-of-detail changes we notice in games now. But as with anything in the tech world it takes time to advance, and right now this is way too far out there to be viable any time soon.

Simple fact is we don't need to have tessellation applied the way Nvidia or AMD are pushing it.

Every time tessellation is applied, geometry increases 4x, so 1 million polygons becomes 4 million, 4 million becomes 16 million, and so on. Imagine if you scaled those tessellation levels to distance from the character in a game. Better yet, things like buildings don't really need tessellation; sure, bricks might look nice, but if you're against the wall and can see the change, dynamic tessellation can fix that. It all comes down to how the tech is applied. The detail you want to see is already available; sadly, like most of us here, you have to wait for the rest of the world to catch up. This point cloud data stuff is grasping at straws when it's not really needed.

For example, most game texture files work with mip maps to conserve space at greater distances.

The same applies to normal, bump and colour maps; tessellation can be done the same way, it just hasn't been.

A good example is below. It could be applied so that, say, in Oblivion, when you talk to a person and their face is close up you get detail level 3, but if you're a few feet away you get level 2, and as you get further away it drops down to the next level, and so on. We don't really need point cloud data to get "unlimited detail"; it's already at your fingertips, and we're limited in other ways than how many polygons a GPU can render.
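The 4x-per-level arithmetic and the distance-based level selection described above can be sketched in a few lines (the distance thresholds here are made-up illustrative numbers, not from any engine):

```python
def tessellated_polys(base_polys, level):
    """Each tessellation level splits every polygon into four."""
    return base_polys * 4 ** level

def lod_for_distance(distance, thresholds=(2.0, 10.0, 50.0)):
    """Pick a tessellation level from camera distance: nearer than 2 units
    gives level 3, nearer than 10 gives 2, nearer than 50 gives 1, else 0."""
    level = len(thresholds)
    for limit in thresholds:
        if distance < limit:
            return level
        level -= 1
    return 0

base = 1_000_000
for d in (1.0, 5.0, 20.0, 100.0):
    lvl = lod_for_distance(d)
    print(f"distance {d:5.1f} -> level {lvl} -> "
          f"{tessellated_polys(base, lvl):,} polys")
```

This is exactly the Oblivion face example: level 3 up close, stepping down as the camera moves away, so the 64x geometry cost is only ever paid where the player can see it.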






Don't get me wrong, I'm all for new tech, just not stuff that's not viable for the purpose you're trying to apply it to.
http://www.xbitlabs.com/articles/video/display/hardware-tesselation_3.html

An example from the link I posted above: that change in poly count has only a 1 fps penalty on a GTX 480. The big issue is they're still using normal maps instead of displacement maps.
Normal = faked; it gives the illusion that something is more than it is, e.g. bricks in a wall, wrinkles in skin, etc.
Displacement = real; it displaces the geometry, and the more polygons there are, the better the displacement.
An example of displacement used in games: Uncharted 2 uses animated displacement maps to make believable wrinkles in Drake's forehead as he talks or reacts, and that's on old 7800-series hardware.

As I said, what you want is already available; you're just stuck waiting for the 95% of the rest of the world that aren't up to speed yet or don't have hardware capable of it.


----------



## inferKNOX (Dec 21, 2010)

Yes, it does indeed take time to advance; that's why we (me and those that agree with me) want to have AMD (and/or other significant players in the graphics market) check the viability of this, and accelerate development and facilitate standardisation if it is in fact viable.
I'm by no means a pro at this, but as far as I can see, the difference between what you're talking about and UD is the level of system load vs detail provided.

Isn't all the detail in the universe enabled by atoms? Then logically, wouldn't it follow that a point cloud system would make the closest virtual simulation of reality? And if a way is found, as is claimed, to process only visible points to fill a given number of pixels and thus concentrate processing power where it's needed, wouldn't that be the most efficient manner of rendering the graphics?

Guys, let's be serious here; this tech is very attention-catching. Let's push through this petition and know once and for all if it's a revolution or a dud. Don't let your own personal doubts/misunderstanding of it colour your willingness to have it proven or disproved. Nobody can agree to fund it without a proper demo, true, but that's not what we want here; we simply want it checked out by industry leaders, that's all.

By the way crazy, no offense intended, but please could you use a bit more punctuation in your posts? I'm finding it difficult to understand you without it.


----------



## crazyeyesreaper (Dec 21, 2010)

lol everyone finds me hard to understand i only use punctuation in reviews 

I realize yes, in theory it is the closest in terms of reality, but since it doesn't change anything we do in the process leading up to using UDT, it means nothing changes...

Models still need to be created and animated, texture files still have to be painted; in this situation it makes no difference.

Example: only so much can be displayed via a pixel on a screen... hold on, I got an image somewhere showing what I mean...
Low Res









High Res









Basically the difference between those models is 6 million polygons high-res vs 10k or less on the low-res.
By subdividing one time I can eliminate the jaggy edges, or "nickeling" as it's called in 3D. Meaning your point cloud data might equal a perfect rendition of reality, but it's not like your human eye can make out every individual scratch or variation. The fact is, in the world of 3D, even if you could make it perfect in terms of how it appears, you wouldn't need unlimited detail to do so. Another great example would be Gran Turismo 5: look at the cars vs their real-life counterparts. Besides better lighting and reflections, which could easily be done on today's GPUs rather than the outdated 7800 GTX, you're already at the level of detail you're talking about. The human eye itself would barely be able to see the difference.


Gran Turismo 5









Real life


----------



## inferKNOX (Dec 21, 2010)

I think you might be going off at a tangent here. The focus of this technology is not so much the point cloud data as the detail-to-load ratio. The point cloud data works hand in hand with it to produce the level of detail, but is not the essence of what's being focused on.
What the tech is promising is to hugely increase graphics detail over current conventional graphics, whilst having minimal impact in terms of load compared to the current methods. Think of it like AMD's MLAA: it's kinda something for nothing.


----------



## crazyeyesreaper (Dec 21, 2010)

My point is, if you look at the images above, in the end it all comes down to optimizations.

The PS3 is roughly, if optimizations are the same, nothing more than say a Core 2 Duo at 2.4 GHz with a 7800 GTX 256 MB, yet it gets THAT close; in terms of hardware that's almost what's accomplishable on an AMD Zacate or Intel i3, in a sub-20 W package. It's not really a tangent, more a fact: even if this tech comes out it will need massive GPUs to render the highly parallel computations for it, something a CPU just isn't good at; and even if it were manageable on a CPU, then what do you run the artificial intelligence on, and what does all the background tasks needed to make everything function as it should? Don't get me wrong, it is a viable tech, but only in offline rendering modes and in case-by-case situations.

To be blunt, making stuff prettier doesn't make the AI smarter or able to counter you. In most games the AI just gets bonuses that, if it were person vs person, would be considered cheats. New rendering modes and better graphics are one thing, but to be honest I don't think graphics are really lacking; what's lacking is immersion. Games now play themselves, à la Black Ops, where the player does nothing, as the AI doesn't really think; it just goes "this situation happened, let's respond with action B". In racing games the AI can do what a person cannot, etc. We have many other areas that need improvement, not just visual fidelity; it has already been figured that the GPU power needed to run holograms should be viable within 10 years. Not for consumers, no, but the power at our fingertips will be there by that time. Thus a tech like this just doesn't really hold water when looking at that scenario and/or goal. Everything is a full package and has to be balanced. Graphics have reached the point where 5-10% better visuals require 50% more rendering power, but everything else has stayed the same.


----------



## inferKNOX (Dec 21, 2010)

crazyeyesreaper said:


> My point is, if you look at the images above, in the end it all comes down to optimizations.
> 
> The PS3 is roughly, if optimizations are the same, nothing more than say a Core 2 Duo at 2.4 GHz with a 7800 GTX 256 MB, yet it gets THAT close; in terms of hardware that's almost what's accomplishable on an AMD Zacate or Intel i3, in a sub-20 W package. It's not really a tangent, more a fact: *even if this tech comes out it will need massive GPUs to render the highly parallel computations for it, something a CPU just isn't good at; and even if it were manageable on a CPU, then what do you run the artificial intelligence on, and what does all the background tasks needed to make everything function as it should?* Don't get me wrong, it is a viable tech, but only in offline rendering modes and in case-by-case situations.
> 
> To be blunt, making stuff prettier doesn't make the AI smarter or able to counter you. In most games the AI just gets bonuses that, if it were person vs person, would be considered cheats. New rendering modes and better graphics are one thing, but to be honest I don't think graphics are really lacking; what's lacking is immersion. Games now play themselves, à la Black Ops; in racing games the AI can do what a person cannot, etc. We have many other areas that need improvement, not just visual fidelity; it has already been figured that the GPU power needed to run holograms should be viable within 10 years, not for consumers, no, but the power at our fingertips will be there by that time. *Thus a tech like this just doesn't really hold water when looking at that scenario and/or goal.*



But therein lies your misconception. You believe this will increase load, but the claim is that it will hugely reduce it whilst increasing detail. Having this all work on the GPU is why you would want AMD there to work alongside the developers of UD: to optimise it for GPUs and the AMD architecture, and leave the CPUs free to compute AI, etc. And if the load on the GPU is simply reduced (compared to current methods) and the graphics are boosted to an indisputable level, maybe the remaining processing power of the GPU can be used for other tasks that would otherwise be dropped on priority, given how much of the GPU has to be devoted to rendering the graphics. Thus exactly what you're asking for would happen: the areas that are being overlooked for the sake of trying to find the balance between load and detail would finally get the attention and working room they need.

The goal is to reduce load and increase output, i.e. optimise. That being the case, it's universally applicable, especially to real-time rendering.


----------



## crazyeyesreaper (Dec 21, 2010)

We would need it to reduce load at extreme resolutions; we're talking 4320p. At those resolutions, sure, but when will we move to that tech? Most of the world is still standard-def at 720x480, with some at 1280x720; very few sources today are actually 1920x1080, let alone anything higher. As I said, GPU-wise we should have the ability to produce something like the holodeck in 10-15 years. This tech doesn't really get us there.

But then there's the itchy physics issue: can a GPU render this method at ultra-high resolutions with close-to-real-life physics interactions with objects and actual water simulation? Probably not. It's an interesting concept, but I just don't see it happening. We are already at a stage where our eyes can see a huge difference in quality; once we hit ray tracing there won't be much left needed in terms of getting close to reality, except mass storage at faster speeds, which is already in the works.

Holographic storage is already usable in the non-consumer space at 10 terabytes per square inch. This tech needed developer support 3-4 years ago to have made an impact; it might only be my opinion, but this is too little, too late. It's got a niche in 3D as a final step in the pipeline, nothing more.

But again, point cloud data still needs a mesh of an extremely high resolution to offer that UDT, so you still need an artist to make a 1-billion-polygon model, texture it, animate it, etc., so it can be rendered in a situation with UDT; that's just not really possible. Not to mention you would need a game engine built from scratch on a non-existent software platform that has enough industry support to work. For example, DirectX won't run this, and OpenGL won't either in a real-time situation, so what do you base it on? There's a lot more here than meets the eye in terms of hurdles to overcome, something I highly doubt will be possible in a time frame that actually matters.

This tech is to me a lot like the game Project Offset that Intel bought. Interesting concept, has a lot of potential, but will never see the light of day.


----------



## inferKNOX (Dec 21, 2010)

I don't know if your 6970s never suffer lag, but my 5850 regularly loses sync with my monitor (<60 fps), which is at 1920x1200, and if there's the potential for something to come along that could let me zoom in far enough to see the eye colour of one of my units (in an RTS), all the while never letting framerates drop, I would want that.
Never mind that: if I could get even 10x or whatever extra amount of detail you say is enough, all the while having the load on my system/GPU reduced from, say, 60% to 10%, thus reducing my power use, heat & noise, I'd jump at that too.
Or what about 3D? Sure, my card currently manages to maintain vsync in most games, but what about on a 3D-capable monitor at 120Hz or 240Hz? If I could have a tech that would allow my card to easily maintain vsync with that _and_ look better, all the while putting a lower load on it than I previously had at 60fps, I don't think that's something I would even think twice about.

That is what UD is promising and possibly more!


----------



## Mussels (Dec 21, 2010)

inferKNOX said:


> That is what UD is promising and possibly more!



No, it's not.



UD is promising one small thing under specific circumstances, without giving away what those circumstances really are. The rest is just pure speculation.


----------



## pantherx12 (Dec 21, 2010)

crazyeyesreaper said:


> This tech doesn't really get us there.



Would if it's real, man. You could look at a book with tiny writing, too small to read in normal circumstances, or texture-limited these days, etc.

But with this it could be held close to the face and read normally : ]

All sorts of stuff like that.

Also everything you could see would be rendered, so draw distance would be FUCKING EPIC.


----------



## inferKNOX (Dec 21, 2010)

Mussels said:


> No, it's not.
> 
> 
> 
> UD is promising one small thing under specific circumstances, without giving away what those circumstances really are. The rest is just pure speculation.



Not so; by proxy it is giving rise to such possibilities and possibly more. The current load from rendering imposes a lot of limitations, and I think we all know that.


----------



## Mussels (Dec 21, 2010)

inferKNOX said:


> Not so; by proxy it is giving rise to such possibilities and possibly more. The current load from rendering imposes a lot of limitations, and I think we all know that.



But we don't know jack shit about this.


Sure, I last saw the demo of this ages ago, but if memory serves it only showed the same things repeated over and over again (from different angles).

How do we know there aren't limitations in this that are even worse than current methods? We can see the same item from 50 angles, but can it show 50 items as well as current methods can? How do we know it's actually superior to what we have now?


----------



## pantherx12 (Dec 21, 2010)

Mussels said:


> But we don't know jack shit about this.
> 
> 
> Sure, I last saw the demo of this ages ago, but if memory serves it only showed the same things repeated over and over again (from different angles)
> ...




I've seen animations using the tech : ]

Albeit shitty ones, but there are more previews if you hunt about.


----------



## inferKNOX (Dec 21, 2010)

Mussels said:


> But we don't know jack shit about this.
> 
> 
> Sure, I last saw the demo of this ages ago, but if memory serves it only showed the same things repeated over and over again (from different angles)
> ...



By petitioning AMD and/or others to look into it and work with UD to uncover the possibilities and identify the limitations.
Sign our petition.

EDIT: here's an animation preview:
http://www.youtube.com/watch?v=cF8A4bsfKH8


----------



## pantherx12 (Dec 21, 2010)

Well, apparently we only need to wait 12-16 months to find out if this is real.

Also, according to their last press release they already have investors (so they're not asking for more money), so at least the snake oil idea is gone, as they're not trying to con anyone when they're asking for nothing.


----------



## inferKNOX (Dec 21, 2010)

pantherx12 said:


> Also, according to their last press release they already have investors (so they're not asking for more money), so at least the snake oil idea is gone, as they're not trying to con anyone when they're asking for nothing.



That's what I have been repeating over and over, but nobody seems to listen.


----------



## scaminatrix (Dec 21, 2010)

I must be the only person in the world still on 56k. FFS.


----------



## Mussels (Dec 21, 2010)

scaminatrix said:


> I must be the only person in the world still on 56k. FFS.



no, im sure at least two other people are. maybe three.


----------



## LAN_deRf_HA (Dec 21, 2010)

Being in its infancy, naturally there will be limitations. Someone has to put forth the time and money to give this a chance. Complaining that the baby you just pooped out can't walk, talk, and feed itself is pointless.


----------



## Thrackan (Dec 21, 2010)

So, if I get this correctly, this tech basically skips the 3D step and goes straight to giving you a 2D image representing the 3D objects you can actually see, based on angle and position.
If this works like a 'search engine', when does indexing the search results take place? During development/compilation, so that the end user only has to read the point maps?

I'd like to see a demo with:
- Not 100x the same object, but 100 different objects. I reckon an object-oriented approach is being used which, in the case of replication, drastically decreases load for multiple instances of the same object.
- Textures. I wonder how colours and textures are applied in such an approach.

If these two cases are viable, I'd support this tech.
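The first test case above matters because instancing lets many copies of one object share a single set of points, storing only a transform per copy. A rough sketch of the memory argument (hypothetical class names; Python's getsizeof only measures the list containers, so the numbers are illustrative):

```python
import sys

class Mesh:
    """A point-cloud 'object': the geometry, stored once."""
    def __init__(self, points):
        self.points = points

class Instance:
    """A placement of a mesh: shares the geometry, adds only an offset."""
    __slots__ = ("mesh", "offset")
    def __init__(self, mesh, offset):
        self.mesh, self.offset = mesh, offset

tree = Mesh([(i * 0.001, i * 0.002, i * 0.003) for i in range(10_000)])
forest = [Instance(tree, (x * 5.0, 0.0, z * 5.0))
          for x in range(10) for z in range(10)]

shared = sys.getsizeof(tree.points)                      # one geometry list
copied = len(forest) * sys.getsizeof(list(tree.points))  # 100 private copies
print(f"100 instances sharing one mesh: ~{shared} bytes of list storage")
print(f"100 independent copies:        ~{copied} bytes of list storage")
```

This is why a demo full of repeated objects proves much less than one with 100 genuinely different objects: replication is cheap for almost any renderer.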


----------



## HalfAHertz (Dec 21, 2010)

I very much prefer tessellation because it's a technology that is here and now and is proven to work. Game studios should just learn to apply it properly; I'm pretty sure there's enough DX11 hardware around by now for them to stop worrying about backwards compatibility.


----------



## qubit (Dec 21, 2010)

People are calling them out as frauds, because they're claiming to handle an _infinite_ amount of detail/data. As we all know, nothing in this universe can do that, so the claim is bollocks.

If they'd simply billed it as a new, hyper-efficient way of dealing with a huge volume of data giving say, a 100-fold improvement in rendering speed, then I'd buy it and look forward to the official tech demos and description of the technology.

But they didn't.


----------



## inferKNOX (Dec 21, 2010)

@Thrackan: You may have just about hit the nail on the head with that analysis.
I think the idea of it being a search-engine-like technology is limited to it finding what is supposed to show up in the actual pixels on your screen and presenting those, and that the replication was just the limited artistic abilities of the designers rather than a necessity. Of course I can't be sure; that was just my understanding of it.

@Everyone else opposing:
I can't for the life of me understand the internal struggle one would have with petitioning for something to be checked for viability when it has so much potential and no economic consequence. I've heard quite a few counter-arguments now, but thus far no (accurate) reasoning that validates the skepticism, that says it's a risky venture not worth petitioning for. Seems the trend is to be stubborn for the sake of it. Let's go through some of the reasons:

- will take long to become relevant = all the more reason to see if it's viable sooner = petition
- methods that produce high detail are available = current methods are load-intensive = petition
- financial risk for AMD = UD doesn't need sponsorship, just recognition, & if AMD works hand-in-hand it can be optimised for the GPU, AMD architecture, etc., thus keeping AMD relevant (unlike if they finish development independently and drop the bomb on everyone, leaving bankrupt casualties) = petition
- limitations of tech unknown = all the more reason to see if it's viable sooner = petition
- want to see a demo 1st = AMD working with UD would accelerate development = petition
- it's up to industry bigshots = the point of a petition is to show customer interest/trends (which they take polls & spend money to discover normally) = petition
- undecided = if its development is completed, one can make up one's mind = petition
- no proof of not being a scam = all the more reason for AMD to check & write it off if it is = petition
- too good to be true = why not let AMD find out if that's truly the case? = petition

Why the inclination to fight it so hard when it'll benefit you in the end if it works?
If this becomes huge one day, wouldn't you like to know that you were one of the voices that made it happen?
Each of your signatures counts, and together we can be heard and achieve something.


----------



## GSquadron (Dec 21, 2010)

Even if this technology were true (and it looks really stupid, because the pyramids with animals looked like they were made on a PlayStation 1), I could not know what they are really talking about, because this kind of technology exists only to catch your attention, and if you are stupid, they will catch yours for sure.
IF IT WERE INFINITE, THAT WOULD BE REAL-LIFE GRAPHICS.


----------



## qubit (Dec 21, 2010)

@inferKNOX

Thanks for the reminder about the petition, which I'd forgotten about. I'd be happy to sign it, but I want to check out the full picture before doing so. My post 147, before yours, explains where I'm coming from. If this is fraud - and truly infinite detail certainly is - then there's no way this muggins is gonna sign a petition, even if it costs nothing.


----------



## inferKNOX (Dec 21, 2010)

@qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.

@Aleksander Dishnica: Either I'm not understanding you at all, or you're saying we're stupid. I really hope it isn't the latter because that would be very narrow minded, _and_ against the rules.


----------



## Benetanegia (Dec 21, 2010)

inferKNOX said:


> @qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.
> 
> @Aleksander Dishnica: Either I'm not understanding you at all, or you're saying we're stupid. I really hope it isn't the latter because that would be very narrow minded, _and_ against the rules.



The reason I don't want to sign is that, until proven otherwise with a real-time demo, I think this is just a hoax, and I'm not going to support something that's not what they say it is. Period.

I can recognise the serious potential benefits of teleportation and time travel, but I would never believe/support someone who said they had found a way to achieve them if they had not demonstrated anything. The whole point of this is that what they say they are doing (searching for the required points from an infinite/huge amount of point cloud data in real time) is, or has been, impossible until now; the author himself admits that searching for the required points is/was impossible, except they have managed to do it. The problem is that they have not demonstrated they can do it, and they have not even hinted at what method is used. Not even an overview has been given, and the only thing mentioned is Google Search. The Google example is a poor one, because Google works thanks to distributed computing (the search is made on thousands if not millions of PCs following a hierarchy), something the UDT engine can't have access to.

Until they show something tangible it's a definite no from me. The reason is that this is not the only engine promising this kind of advancement (it is, however, unique in its scale, which again works against them for being unrealistic); there are hundreds of similar claims (should we pay attention to all of them?), both with point clouds and voxels, and none of them have been proven viable except for procedurally generated maps or fractals*, neither of which would be usable for modern games.

If I had to support one 3rd-party exotic engine I would rather choose Atomontage over this any day. They are obtaining similar results, but they explain everything; they know and tell you the cons, and they don't rely on a "secret magic algorithm" which does what no one has ever come close to doing. That is, they stay on the ground.

Besides:

Intel is already researching voxel rendering.
Nvidia is already researching voxel rendering.
And of course there are countless 3rd parties researching similar rendering methods.
I'm sure AMD is researching something similar, so if all of them discarded UDT it's probably for a reason. I don't see the point of pressuring them into taking another look when there's been no advancement. Going by what they say in the video, they couldn't reach the high ranks in those companies, which means they did contact them but were probably turned down by someone in the know, an actual engineer/programmer, which is usually a lower-ranked person within the company. That's the end of the story for me, until they show something.

*In fact, the pyramids are nothing but a cheap trick IMO. They are the same object instanced/replicated many times to form a Sierpinski pyramid. This (Sierpinski) is very important for two reasons. One is that the memory footprint is ridiculously small compared to having to represent, with point clouds, a world akin to e.g. Crysis, BC2 or Metro 2033. The other directly affects the so-called search engine and really puts the engine's real efficiency into question: with this fractal organization, once *one point* has been "searched" for one of the objects (the rare animal), you can "automatically" know where the same point is in the rest of the 2 billion objects just by running the simple fractal algorithm. That would not be possible, or would need orders of magnitude more computing power, with actual game-world data, or even if the animals were just placed randomly instead of forming a very well-known fractal structure.
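A minimal sketch of the instancing trick described above (the coordinates and the four-corner layout here are illustrative, not taken from the demo): one base object is replicated at 4^depth positions, so only a single point cloud plus these transforms needs to be stored, and locating a point in one copy locates it in every copy.

```python
def sierpinski_instances(origin, size, depth):
    """Yield the instance positions of a Sierpinski-style pyramid.

    One base object replicated at 4**depth positions: the memory cost
    is one object plus a recursion rule, not 4**depth point clouds.
    """
    if depth == 0:
        yield origin
        return
    x, y, z = origin
    half = size / 2
    # Four corner sub-pyramids of a tetrahedral arrangement
    # (illustrative layout, not the demo's actual geometry).
    for corner in [(x, y, z), (x + half, y, z),
                   (x, y, z + half), (x + half / 2, y + half, z + half / 2)]:
        yield from sierpinski_instances(corner, half, depth - 1)
```

At depth 16 this yields about 4.3 billion instances, in the ballpark of the "2 billion animals" in the video, while the data stored stays constant.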


----------



## qubit (Dec 21, 2010)

inferKNOX said:


> @qubit: but the fact that we as intelligent humans can recognise that it has serious potential benefits, even if they're not the hyped "infinite" benefits, should prompt us to push for it. Refusing to accept it solely because of the makers hyping it would mean boycotting just about everything in the world today, because everything's always over-hyped by everyone.



Unfortunately, there are no benefits to be had. It's a bit like saying of a 419 advance fee scammer, "Perhaps they will give me just a _little_ bit of money if I send them some? Ok, let's do it!" Now, that wouldn't fly would it?

This outfit is touting infinities and there's your scam right there. They have no advanced graphics processing to show, just a scam to suck clueless investors in - and then disappear.

Bene's whole, detailed post puts it all very well, but the crux of it is right here:



Benetanegia said:


> The reason that I don't want to sign is that, until proven otherwise with a real time demo, I think this is just a hoax and I'm not going to support something that's not what they say it really is. Period.



If this outfit actually demonstrates some graphics advancement (they won't) then people like myself and Bene will be happy to sign that petition.

There's no giving "Just a little bit" to a con. Instead, spit it out and throw it away, just like the worthless spam that it is.

I'm sorry I don't have better news, dude, but blame the asshole who made up the scam, not your friends on TPU who want to stop you and others from getting sucked in.


----------



## inferKNOX (Dec 21, 2010)

Fair enough Ben & qubit, fair enough.
I'll leave it to the maker to defend. I directed him to this thread (via email) to have a look at what you guys have to say and decide how open he wants to be with this thing. I cannot dispute that its authenticity is in question, because of the very nature of the claim. Let's watch and see.
Here's hoping it hasn't been a colossal waste of effort.

EDIT: depending on his response, I will dump the petition if I see no proof myself.


----------



## Thatguy (Dec 21, 2010)

Actually, the more I thought about this, the more the description of the object became critical for point cloud data.

The first thing is: how do we generate points without polygons? Well, I got thinking about that. You could, in theory, have a fairly complex rendering container and use a subdividing algorithm based on a bitmap to create detail. Kind of like tessellation, but it might be different.

For instance, you could save two axes of data, front and side. You could easily embed detailed coordinate info in both axes to form 3D images. I was trying to work out how much space you'd need to create such images, but a basic bitmap should suffice. In fact, if you did it properly, you could make somewhat larger bitmap-based files and encode all sorts of info on them if you used 64-bit structures.

You could put the x and y coordinates in roughly the first 32 bits, embed the height of the pixel (z) in 8 bits, and embed the colour info in the next 24 free bits.

If you moved to an 80-bit word per pixel, you could shove a lot of detail into a bitmap.

Now, processing all this would be interesting. The reason you'd need the x, y and z coordinate info is to calculate light dispersion for the point of view, to make the 2D picture appear 3D. Once you give each pixel a colour, it will render. If you make the word 128 bits, you could embed everything in the bitmap, but that makes each bitmap rather large; going to 256 bits would allow you to introduce a linked framework for morphing and animation. The key is to store as much data as possible per point, so figuring out the storage container is crucial. It's easy to read 128 bits of data, but if you have a bitmap with a point of data for each pixel at, say, 1920x1080, at 128 bytes per pixel you're talking, if my math is right (and it's really early), somewhere around 265 MB per image for a massively detailed image. This assumes a 1920x1080 3D image representing all the possible data at high detail.
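The 64-bit layout sketched above (16-bit x, 16-bit y, 8-bit z height, 24-bit RGB) can be packed and unpacked with plain bit shifts. This is only an illustration of the idea, not any format UDT actually uses:

```python
def pack_point(x, y, z, r, g, b):
    """Pack one point into a 64-bit word:
    bits 63-48: x (16 bits), 47-32: y (16 bits),
    31-24: z height (8 bits), 23-0: RGB colour (24 bits)."""
    assert 0 <= x < 2**16 and 0 <= y < 2**16 and 0 <= z < 2**8
    return (x << 48) | (y << 32) | (z << 24) | (r << 16) | (g << 8) | b

def unpack_point(word):
    """Reverse of pack_point: recover (x, y, z, r, g, b)."""
    x = (word >> 48) & 0xFFFF
    y = (word >> 32) & 0xFFFF
    z = (word >> 24) & 0xFF
    r = (word >> 16) & 0xFF
    g = (word >> 8) & 0xFF
    b = word & 0xFF
    return x, y, z, r, g, b
```

At 8 bytes per pixel, a 1920x1080 map of these words is about 16 MB; the ~265 MB figure above comes from the much fatter 128-byte-per-pixel variant.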


----------



## qubit (Dec 21, 2010)

inferKNOX said:


> Fair enough Ben & qubit, fair enough.
> *I'll leave it to the maker to defend. I directed him to this thread (via email) to have a look at what you guys have to say and decide how open he wants to be with this thing.* I can not dispute that it's authenticity is in question because of the very nature of the claim. Let's watch and see.
> Here's hoping it hasn't been a colossal waste of effort.
> 
> EDIT: depending on his response, I will dump the petition if I see no proof myself.



You're welcome, Infer. 

Excellent - pointing him to this thread was an excellent idea! Now watch this con artist shy away...


----------



## qubit (Dec 21, 2010)

Ok, just had another look around the websites (they have two, you know.  ) and here's a few of the telltales that should set off alarm bells:

- Both web sites look extremely amateurish, like someone with no web talent knocked them up in two minutes flat. www.euclideon.com (cool-sounding name, I'll give it that) and www.unlimiteddetailtechnology.com Wot, a revolutionary graphics company can't knock-up a decent website? Really? Let's have a look at a nice boring, established one, shall we: www.nvidia.com See the difference?
- Tech has been in development for "years", they are just putting the finishing touches on it now and it will be showcased/released "Real Soon Now". Yeah.
- Big talk of "investors". Nah, milking the marks for all they've got and then disappearing is what it's really all about.

And finally, this is the big one:

- Makes this impossible claim: It enables computers to display infinite geometry at real time frame rates. This is total bollocks. The fact it's "all done in software" just rubs it in. However it's done, it would require the computer to have infinite bandwidth, infinite power consumption, heck, infinite everything. An impossibility. Here, take one in the nuts guys:  Assholes.


----------



## Delta6326 (Dec 21, 2010)

Video was cool, but the shadows need a little work. This stuff usually gets squashed by the big companies.


----------



## inferKNOX (Dec 21, 2010)

Here is an interview:
http://thisismyjoystick.com/interviews/interview-bruce-roberts-dell-unlimited-detail-technology/

Other discussions over it:
Jan 30, 2009: http://www.somedude.net/gamemonkey/forum/viewtopic.php?f=12&t=419
Dec 29, 2009: http://www.gamedev.net/community/forums/topic.asp?topic_id=557528
Mar 10, 2010: http://blogs.howstuffworks.com/2010...fferent-way-to-think-about-computer-graphics/
Mar 10, 2010: http://dis.4chan.org/read/prog/1268220399
Mar 10, 2010: http://www.overclock.net/software-news/686703-yt-unlimited-detail-technology.html
Mar 11, 2010: http://springrts.com/phpbb/viewtopic.php?f=10&p=419379
Mar 12, 2010: http://community.eu.playstation.com...ail-graphics-technology-is-here/td-p/10394555
Mar 12, 2010: http://www.neogaf.com/forum/showthread.php?t=389966
May 22, 2010: http://forums.ngemu.com/open-discussion/135350-unlimited-detail.html
Nov 1, 2010: http://animosity-guild.net/forum/showthread.php?t=4721
... and plenty more

Wiki:
http://unlimiteddetail.wikia.com/wiki/Unlimited_Detail_Wiki


----------



## pantherx12 (Dec 21, 2010)

Aleksander Dishnica said:


> Even if this technology was true (and it looks really stupid, because the pyramids with animals looked like they were made in Play Station 1 ) I could not know what they are really talking about because this technology is only for people who use it in order to catch your attention and if you are stupid, they will catch yours for sure.
> IF IT WAS INFINITE, THAT WOULD BE REAL LIFE GRAPHICS.



They still have to "draw" it, if you get what I mean. Can you draw or render a lifelike giraffe (even a non-interactive one)? Because these software developers can't either.

As for the PS1: doesn't this support about a million more polygons than the PS1 did?

Did you look at stuff like the trees or the floor?




By the way, @ everyone: I still can't freaking believe you're taking "unlimited" literally.

The point is it will only render as many points as there are pixels on your screen.

NOTHING else is rendered; if you put a car in front of, say, a person, the part of the person obscured by the car would NOT be rendered at all.

That is how you get "unlimited detail": the only limit is the game designers' willingness to add more detail, and also HDD space.

THAT is it.

If you had 10gb spare for a game you could have 10gb worth of "detail"*

if you had a 1tb spare for a game you could have 1tb worth of "detail" *

if you had 1000 tb spare for a game you could have 1000tb worth of "detail"*

*Well, obviously some of that would be game code and such; not all of it is going to be 3D data.

That is what is unlimited about it, the only limit is the design and the storage space.
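The "one point per pixel" idea above can be sketched as a plain z-buffer over splatted points; however many points the scene holds, at most one per pixel survives, and anything hidden behind a nearer point is simply never drawn. This is only an illustration of the concept, not UDT's (undisclosed) algorithm:

```python
def render_points(points, width, height):
    """Toy point-splat renderer with occlusion.

    `points` is an iterable of (x, y, depth, colour) tuples already
    projected to integer screen coordinates. Output is a framebuffer
    holding at most width*height colours, regardless of how many
    points the scene contains.
    """
    framebuffer = [[None] * width for _ in range(height)]
    zbuffer = [[float("inf")] * width for _ in range(height)]
    for x, y, depth, colour in points:
        # Keep only the nearest point per pixel; everything else
        # (the "obscured person behind the car") is discarded.
        if 0 <= x < width and 0 <= y < height and depth < zbuffer[y][x]:
            zbuffer[y][x] = depth
            framebuffer[y][x] = colour
    return framebuffer
```

Note the hard part UDT never explains is skipped here: this loop still touches every point, whereas their claim is that they *find* the visible point per pixel without scanning the whole data set.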




@qubit specifically: they say a demo will be out in 12-16 months on their second website, in their press release (posted September).
That's about normal really when a consumer demo is not top of your list of things to do. ( as it's game developers who will be most interested in this really)


----------



## remixedcat (Dec 22, 2010)

Has anyone contacted a game dev company (like Epic Games, Bethesda Softworks, EA, Blizzard, Gearbox, etc.) about this yet? Please do, and please post!


----------



## Over_Lord (Dec 22, 2010)

In 16 months as the guy Dell says, he and his company fellows are gonna be rich, and MS's DX11 is gonna eat Dirt 3


----------



## hellrazor (Dec 22, 2010)

Just as a thought, if they use a search engine-like system how would they work with transparency?


----------



## Mussels (Dec 22, 2010)

hellrazor said:


> Just as a thought, if they use a search engine-like system how would they work with transparency?



Now that I'm more awake:


That's along the lines of what I was thinking. This new rendering method doesn't seem to have any of the fancy features we're used to: no shadows and lighting, no transparency, no fancy effects.


Maybe they aren't implemented yet, but what if they CAN'T be?


----------



## inferKNOX (Dec 22, 2010)

Well I spoke to Mr Dell. His response was enough for me to feel reassured that having faith in this technology is not a waste of time. I maintain to you all that this is worth it & I humbly request your signatures on the petition.
I understand if any of you feel concerned enough not to want to act without proof as yet, but I remind you that part of the petition's purpose is to reduce the time it'll take to get that proof to us "Joe the plumbers" (lol). I've rewritten the petition a bit so that those with concerns won't feel they're endorsing more than they mean to, like funding.

@hellrazor & Mussels: if you look at this you'll see that doesn't seem so:
http://features.cgsociety.org/story_custom.php?story_id=5615&page=1
Remember that this guy isn't introducing point clouds, he's introducing a more efficient way of rendering with point clouds that makes it viable for real-time rendering.


----------



## Mussels (Dec 22, 2010)

Yeah, but that's saying the technology exists to do this via software, slower than real time.


It's not saying the engine these other people have is capable of doing the same in real time.


----------



## qubit (Dec 22, 2010)

c'mon people, has no one seen my post 157? It tells you all you need to know that this is a scam. It's a no-brainer.

Don't mess around getting sucked into petitions and stuff like that. This Dell guy has claimed "It enables computers to display infinite geometry at real time frame rates." There's no qualifier here, it means literally what it says, which is impossible. There's also the other stuff I pointed out, showing this is a hoax.

@inferknox: of course he's going to assure you to sign the petition. Why would he do otherwise? To then keep pushing for others to sign this stupid petition after I've exposed the scam is idiotic.

Nah, get him to prove it first, like everyone else with a new invention has to, not ask us to have "faith" in his new system and then he will "reward" us.


----------



## Benetanegia (Dec 22, 2010)

inferKNOX said:


> @hellrazor & Mussels: if you look at this you'll see that doesn't seem so:
> http://features.cgsociety.org/story_custom.php?story_id=5615&page=1
> Remember that this guy isn't introducing point clouds, he's introducing a more efficient way of rendering with point clouds that makes it viable for real-time rendering.



That cgsociety article does not help at all; that article, and many others of its kind, is in fact what deters us from believing his claims even slightly. From the article you should understand that it takes *minutes* to render a *single frame* with that kind of stuff, probably on a *dual 8/12-core Xeon/Opteron workstation*. What Dell claims is *real-time, 24/30 fps*, which is something like 1000 times faster, *on a single core*; he even claims mobile phones. Bah! Show it or stfu.


----------



## inferKNOX (Dec 22, 2010)

@Mussels: I pointed you there to show you the capabilities of point clouds. This tech is a means of rendering that faster.

@qubit: Such a thing as "infinite" could never be meant literally, and I understood it from the start to be relative to current methods. I disagree with your notion of having "exposed" something, but I do agree that you made some good points in your concern. It was not that he reassured me, it's how, i.e. what was said.
There is nothing for them to gain by faking this, so I don't feel that there's risk in supporting it. If they change their tune and start talking of money, I will turn away without hesitation.
(Just check the note I put at the bottom of the petition.)


----------



## 20mmrain (Dec 22, 2010)

This is fucking awesome! I really hope that this technology is as good as they say and takes things to the next level. If so things could open up to a whole new world of 3d!!!

We could possibly finally get more with less... think about it... using the computational power of today's GPUs with this, we could do some amazing things!!!


----------



## inferKNOX (Dec 22, 2010)

20mmrain said:


> This is fucking awesome! I really hope that this technology is as good as they say and takes things to the next level. If so things could open up to a whole new world of 3d!!!
> 
> We could possibly finally get more with less.... think about it....using the computational power of todays GPU's with this. We could do some amazing things!!!



If you're interested in it, please be sure to sign our petition. Just check my sig for it.


----------



## inferKNOX (Dec 22, 2010)

Here are some pics I found:


> I call this a Jungle Puppy, because I'm not very good at naming things. You can see the high level of detail on the legs in the second picture, it's all real point cloud geometry, running in unlimited amounts in real time, and it is a software system.


----------



## crazyeyesreaper (Dec 22, 2010)

The main problem here is that everything you dig up is at 320x240 or 640x480; no real HD resolution to speak of. Unlimited detail at old SNES resolutions, maybe, as far as we can tell from their images and demos.


----------



## inferKNOX (Dec 22, 2010)

Some interesting comments I've come across:


> > A mix of this and polygons would be the best solution, use this method for background items that don't need hit detection and whatnot, and polygons for the stuff that does.
> 
> 
> As it happens when the guy behind this posted on B3D about the tech. that's precisely the sort of early implementations he said he was pushing for. Having real 3D backgrounds rather than 2D skyboxes could be a cool use. A game like FF13 has a crapload of static backgrounds, it would sure look nice if they were all 3D rather than flat bitmaps.


source



> > I think one of the main problems mentioned is the speed, and the fact it is Single Threaded pure C code (no multi threading or intrinsics)
> >
> > For some fast optimisation he could try and optimise it with OpenMP or something similar, it should help speed up several loops by balancing the load over several cores (doesn't work with all loops, but it only requires a #pragma and a tickbox check in visual studio If I remember correctly)
> >
> ...


source

Lol, in all reality, this whole thing is probably fluff. I'm just hoping for the off chance that it isn't!
EDIT: At least I'm learning a tonne about CG thanks to looking deeper into it. Speaking of which, many people are talking about ID Tech 6.


----------



## Benetanegia (Dec 22, 2010)

inferKNOX said:


> Lol, in all reality, this whole thing is probably fluff. I'm just hoping for the off chance that it isn't!
> EDIT: At least I'm learning a tonne about CG thanks to looking deeper into it. Speaking of which, many people are talking about ID Tech 6.



Why do you think we are so skeptical? I already mentioned that Carmack is working on something like this for id Tech 6, and he considered including some related stuff, like data compression algorithms, in id Tech 5. I think MegaTexture has something to do with it. Here, from the wiki:



> Id has presented a more advanced technique that builds upon the MegaTexture idea and virtualizes both the geometry and the textures to obtain unique geometry down to the equivalent of the texel: the Sparse Voxel Octree (SVO). Potentially id Tech 6 could utilize this technique. It works by raycasting the geometry represented by voxels (instead of triangles) stored in an octree. The goal being to be able to stream parts of the octree into video memory, going further down along the tree for nearby objects to give them more details, and to use higher level, larger voxels for further objects, which give an automatic level of detail (LOD) system for both geometry and textures at the same time. The geometric detail that can be obtained using this method is nearly infinite, which removes the need for faking 3-dimensional details with techniques such as normal mapping. Despite that most Voxel rendering tests use very large amounts of memory (up to several Gb), Jon Olick of id Software claimed it's able to compress such SVO to 1.15 bits per voxel of position data.



However, that's for id Tech 6, which comes after id Tech 5, which has not been released yet and will first be used in the game Rage, due in late 2011. After that comes Doom 4, also using id Tech 5. Then id Tech 6. To give an idea of engine cycles, here's a list of when the previous engines debuted, off the top of my head:

id Tech 1 - 1996 - Quake
id Tech 2 - 1997 - Quake 2
id Tech 3 - 1999 - Quake 3
id Tech 4 - 2004 - Doom 3
id Tech 5 - 2011 - Rage

id Tech 6 would release after 2015. From the id Tech 6 wiki article:



> Preliminary information given by John Carmack about this engine, which is still in early phases of development, tend to show that id Software is looking toward a direction where ray tracing and classic raster graphics would be mixed.[1] However, he also explained during QuakeCon 08 that the *hardware capable of id Tech 6 does not yet exist*.



Compare that claim, coming from the greatest expert in graphics engines, to the claim made by Mr. Dell... it just doesn't make sense. Like I said, and as explained in the quotes above, this kind of world representation requires huge amounts of GB, and although you can compress it a lot, the hardware capable of decompressing it, streaming it and calculating it on the fly does not exist yet. It's either:

a) huge ammount of GB (= huge bandwidth required), lower CPU requirement
b) high data compression, huge CPU power and memory bandwidth required
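The sparse voxel octree idea quoted above can be sketched in a few lines. The node layout and the depth heuristic here are illustrative only (real SVOs pack position data far more tightly, per Olick's ~1.15 bits/voxel figure, and a real ray caster walks octants along the ray):

```python
class SVONode:
    """Toy sparse voxel octree node: a colour plus up to 8 children,
    keyed by octant index 0-7. Absent children = empty space, which
    is where the 'sparse' savings come from."""
    def __init__(self, colour, children=None):
        self.colour = colour
        self.children = children or {}

def lod_depth(distance, root_size, pixel_size):
    """Hypothetical LOD heuristic: descend one level (halving the
    voxel) until a voxel is roughly one pixel across at this viewing
    distance. Nearby geometry gets a deeper cut-off (finer voxels);
    distant geometry stops high up the tree, giving the automatic
    level-of-detail the quote describes."""
    depth = 0
    size = root_size
    while size > pixel_size * distance and depth < 16:
        size /= 2
        depth += 1
    return depth
```

So a nearby object (distance 1, root size 16) is refined four levels deep, while the same object eight times farther away stops after one level, which is exactly why streaming only the needed subtrees into video memory becomes plausible.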


----------



## GSquadron (Dec 22, 2010)

inferKNOX said:


> @Aleksander Dishnica: Either I'm not understanding you at all, or you're saying we're stupid. I really hope it isn't the latter because that would be very narrow minded, _and_ against the rules.



No man! I didn't even read what you wrote, because there were a lot of replies. I am just saying that the technology is stupid because it cannot be done.


----------



## qubit (Dec 23, 2010)

inferKNOX said:


> Fair enough Ben & qubit, fair enough.
> *I'll leave it to the maker to defend. I directed him to this thread (via email) to have a look at what you guys have to say and decide how open he wants to be with this thing.* I can not dispute that it's authenticity is in question because of the very nature of the claim. Let's watch and see.
> Here's hoping it hasn't been a colossal waste of effort.
> 
> EDIT: depending on his response, I will dump the petition if I see no proof myself.



Two days later and the silence from our Mr Dell is deafening, isn't it? 

With annoying skeptics like myself and Benetanegia around and calling him out in no uncertain terms, you'd think he'd be dead keen to prove us all wrong, wouldn't you? In fact, any honest person with a fantastic new invention would be profoundly irritated by us and feel compelled to do so. But not our Mr Dell the charlatan. No, of course not. 

Oh yeah, someone pointed out on here that there had been no new developments for two years. _Two whole years?_  Shit, that's a fucking century in computer graphics!

Remember, elusiveness in backing up claims is a primary hallmark of the conman. It's always frustratingly just around the fucking corner, isn't it?

Finally, that name "Dell" is a bit suspicious, don't you think? The same name as that of the successful computer company? Sure, there are other people in this world with that name, but it's not a very common one, and the coincidence is a little too convenient, methinks. Perhaps it helps his cause to extract money from marks, sorry, "investors", if they confuse him a little with the other big, successful Dell, no?

c'mon, how much proof do you need?!


----------



## Mussels (Dec 23, 2010)

If he is reading this: official reps are welcome here, so long as you don't do any advertising. Staff will make sure you don't get unfairly attacked, and you'll have every chance to explain your product(s) in detail.


----------



## Thatguy (Dec 23, 2010)

qubit said:


> Two days later and the silence from our Mr Dell is deafening, isn't it?
> 
> With annoying skeptics like myself and Benetanegia around and calling him out in no uncertain terms, you'd think he'd be dead keen to prove us all wrong, wouldn't you? In fact, any honest person with a fantastic new invention would be profoundly irritated by us and feel compelled to do so. But not our Mr Dell the charlatan. No, of course not.
> 
> ...



Honestly, I have absolutely no idea who you are; in fact, I'd never seen your screen name before today.

I am not seeing your relevance to the technology being discussed?

Who are you again?

BTW, that's why you're not getting a response.


----------



## crazyeyesreaper (Dec 23, 2010)

To be honest, Thatguy, qubit has made more decent contributions to the forum in general than I've seen you give, so I could use the same asshole logic as well.

Point is, this tech isn't gonna go anywhere. Awesome, you can render a 320x240 image with a crapload of geometry; that's fantastic. It's also Super NES resolution, so I'm hardly impressed.

I need actual demos that show what they're trying to do in real time. Show me video footage of something working at 1280x720 at the very least, because on a 4 GHz quad core, with a scene of around 2 million polygons, the kind of rendering method Mr Dell is somehow optimizing to run in real time takes my rig 9-15 minutes per frame. So I find it to be bollocks till we get something... Hell, even OnLive with their game service crap had demos showing what it was capable of back when they first started pushing their ideas. UDT, it seems, has some low-res shots of a creature. It doesn't look good, and it doesn't even remotely seem like it's UDT. It looks to be nothing more than a fairly low-resolution render or still frame that proves nothing about their technology; I could render something and claim I did it with UDT, doesn't mean I did.


----------



## qubit (Dec 23, 2010)

Thatguy said:


> Honestly, I have absolutely no idea who you are, in fact I'd never seen your screen name before today.
> 
> I am not seeing your relevance to the technology being discussed ?
> 
> ...



WTF? What's so hard to understand? Try reading my previous posts on this thread, too. :shadedshu

There is no "technology" here, only a con.



crazyeyesreaper said:


> to be honest Thatguy qubit has made more decent contributions to the forum in general then ive seen you give so i could use the same asshole logic as well



Thanks crazy. I've just seen your post. 

You make some very good points in the rest of your post as well. Yeah, let's see a high res animation in "infinite" detail. I'd _really_ like to see that...

Honestly, the more one looks at this UD thing, the more shot through with holes it becomes. It's an unfortunate fact of life that conmen will move into every walk of life, even a high-tech one like computer technology, where you'd think people would be too clued up to fall for rubbish like this.


----------



## Mussels (Dec 23, 2010)

Thatguy said:


> Honestly, I have absolutely no idea who you are, in fact I'd never seen your screen name before today.
> 
> I am not seeing your relevance to the technology being discussed ?
> 
> ...



Qubit has been on this forum for spot on 3 years now, so actually... his name is pretty well known around here.

As opposed to you, who has only been here a little over a month... of course you haven't seen his name. You've barely had time to scratch the surface of the forums.


----------



## newconroer (Dec 23, 2010)

Mussels said:


> Qubit has been on this forum for spot on 3 years now, so actually... his name is pretty well known around here.
> 
> As opposed to you, who has only been here a little over a month... of course you havent seen his name. you've barely had time to scratch the surface of the forums.



Excuse you?

Please stop making perfect examples of why upper members of TPU have gotten way out of line in the past five years.
Your post count dictates nothing, nor does the color of your name.



As for Unlimited Point Cloud Data and UDL...I am with Benetanegia for the most part.

Additionally, I don't completely disbelieve it, nor do I think it's impossible, NOR do I think that there's some big conspiracy by the Top Guns, to keep a monopoly on the graphics industry.

I welcomed the concept, but without any consistent representative body, it makes it difficult to find it realistic. Though I could say the same thing about real time vector drawing and real time ray tracing in full 3d environments for the home user...  ATi talks about it, but really, nothing's happening.

It becomes even more confusing and inconsistent when one considers that what software developers have in mind is limited by what hardware designers have in mind. Since UDL suggests it's capable of running on software power alone, that point is moot - but I understand why that makes it seem so much more unlikely.

Regardless, some of the dissent in this thread has gotten out of hand, and is very reminiscent of the unnecessary fan-boyism that permeates certain areas of TPU, like the Graphics sub-forum or the Games forum, when the moderators are so busy padding their clique that they can't recognize when someone puts up another Infinity Ward versus Activision (or Treyarch) flame thread.

Really, agree or disagree, but grow up.


----------



## qubit (Dec 23, 2010)

newconroer said:


> Excuse you?
> 
> Please stop making perfect examples of why upper members of TPU have gotten way out of line in the past five years.
> Your post count dictates nothing, nor does the color of your name.



Hey buddy, did you even bother to read my posts before casting aspersions my way? You're also making some pretty nasty and unfounded accusations about the mods here. :shadedshu

Claims of "infinite" performance - especially without qualifiers - definitely make for a scam. Along with all the other stuff that's wrong with this, it doesn't take much to see this for what it is.

Just read all my posts with an open mind and you'll hopefully see where I'm coming from. 

In fact, read Benetanegia's and crazy's too. You'll see I'm not the only one that doesn't buy into this.


----------



## pantherx12 (Dec 23, 2010)

Qubit, man, for it to be a con, someone has to be conned.

Who is getting conned here? They state they have "plenty of money" on their damn website, lol.

So they're not trying to get money. Thus, not a con.



Basically, to be a scam, or a con, someone has to lose out. Are you losing out? Are any companies you know losing out? Have you heard about anyone losing out?

If the answer is no, you can't call it a scam or a con.

Maybe LIES and HAX, but not a con.




qubit said:


> Hey buddy, did you even bother to read my posts before casting aspersions my way? You're also making some pretty nasty and unfounded accusations about the mods here.
> .



By the way, that's his opinion and I fully support his right to have it. I find a few members and mods on here to be dickish (won't throw names around, as I'm not 5), but what's wrong with that?

Never met a policeman who's a bellend? lol


----------



## qubit (Dec 23, 2010)

pantherx12 said:


> Qubit man, for it to be a con someone has to be conned.
> 
> Who is getting conned here? They state they have "plenty of money" on their damn website lol
> 
> So they're not trying to get money. Thus, not a con.



They might well have "plenty of money" from those "investors" they've duped. As I just said to newconroer, have another look at my posts and the others to see where we're coming from. I detest scammers, who are nothing more than fraudulent criminals and that's why I have the tone I do about this. I'm _not_ having a go at you guys. 

EDIT: I see you've edited your post while I was writing mine. Such is life. <sigh> It doesn't really change my message, however.


----------



## BumbleBee (Dec 23, 2010)

anyways. BACK IN CANADA...


----------



## pantherx12 (Dec 23, 2010)

Google for reports of them scamming someone.

After all, they've been "scammers" for a few years now; where are all the reports and blog posts from the people that got ripped off?

.......... ..... Yup, I'm finding them hard to find also.

They've also had the same name all that time... even their new site doesn't dissociate them from their Unlimited Detail brand... yet still no reports?

Either no one got scammed, or the people that got scammed thought, "Oh well, fair enough! Let's not tell anyone, so more people get scammed!"


Just think about it for a moment.


As I said, feel free to call BS, but if you want to accuse people of being criminals, back it up, eh dude? lol. It's just not very nice.


----------



## Benetanegia (Dec 23, 2010)

pantherx12 said:


> Qubit man, for it to be a con someone has to be conned.
> 
> Who is getting conned here? They state they have "plenty of money" on their damn website lol
> 
> ...



The fact that they say they have plenty of money is one of the possible hints at a scam. What's the very first thing that buskers (or beggars) do? Place some money of their own in the basket. I'm not comparing street musicians or poor people with scammers; it's just that the concept of money in the basket is the same. People just feel more comfortable giving out their money if somebody else did it before. A simple psychological trick.

Scammers always have all the money, or almost all the money, they need. It's just such a tiny amount of your money that they need or could use. In fact, they don't really need it, but they are doing you a favor by letting you form part of such a good investment.

In that regard, this looks a lot like that; it's just that I'm not as convinced as qubit (we could say that a part of me still wants to believe). But the signs are certainly there. Maybe it turns out that the signs were a false alarm, but as of right now they are there. If it is a scam, everything they have posted would be the bait. We are not the prey, we are far too small, and AMD is not the prey, they are far too big, but both we and AMD could form part of the bait, and I simply don't want to. "Hey, AMD seriously considered our tech, but because of... circumstances...", "Our tech is praised all over the net...". NO, thanks.



pantherx12 said:


> Google for reports of them scamming someone
> 
> After all they've been "scammers" for a few years now, where's all the reports and blog posts about the people that got ripped off?
> 
> ...



Such is the internet. A fantastic place for scammers. Hypothetically speaking: 1 week of work and the bait is there, forever, awaiting the correct prey*. I'm not saying that it is a scam, just how it would work. And some signs are there, like I said.

*Just think about the scale of his claims. He is talking about bringing down companies like Nvidia or AMD with this technology. The "tiny" amount of money that I talked about is obviously going to be huge for normal people like us; we are not the target. But there are many people with lots of money and few brains, wanting to make big bucks quickly, who could bite eventually. They just have to wait, like a fisherman would.


----------



## pantherx12 (Dec 23, 2010)

XD 

Funny about the buskers, in my town they do it the other way round, keep their hat empty so people think "awwww"  and give them money.

Works as well!

I suppose it's fair enough for people to believe what they want; I just feel it's a little harsh to call people criminals with no proof.

I'd be offended if someone called me a con artist, that's for sure, and if they did it without a reason I'd probably be annoyed.


----------



## entropy13 (Dec 23, 2010)

What kind of beggars are you talking about Benetanegia? Might not be beggars at all, considering they have a basket in the first place.


----------



## Over_Lord (Dec 23, 2010)

we are going offtopic


----------



## pantherx12 (Dec 23, 2010)

thunderising said:


> we are going offtopic



Now we're talking about unlimiteddetail the company.


----------



## Mussels (Dec 23, 2010)

i gave money to a hot busker chick earlier today playing a violin.

why? she proved she could deliver the goods (and she WAS good, thought it was a recording at first)


----------



## Benetanegia (Dec 23, 2010)

entropy13 said:


> What kind of beggars are you talking about Benetanegia? Might not be beggars at all, considering they have a basket in the first place.



A punnet? English is not my first language.

I'm talking about the ones that ask for money on the streets. They usually just lie on the pavement all day long with a small basket and a cardboard sign asking for money. Some are true beggars, others are not indeed. lol. 

I saw a documentary about an anonymous one in Madrid, who was pretty much a scammer (depending on how loose your morals are; in the end he just asked for money and people gave it to him). He lived in a huge apartment on the most expensive street in Madrid. 

Every morning, wearing an expensive suit and carrying a briefcase, he went to a train station where he had rented a locker. Inside he had a bag with a beggar costume, so he would go to the toilets and put it on. In the briefcase he had the basket and the sign. After putting everything else in the locker, he'd go to the busiest street in Madrid and lie down in some misery-inspiring pose for a workday, with breaks for lunch and everything. He claimed to take in 100-500 euros a day, depending on how lucky he had been, or how well he had played his role. 

It wasn't as simple as that, actually; he talked about many variables, like how the economy was doing, or important events that could change people's perceptions or feelings (i.e. the death of a celebrity, an earthquake), and countless other things he took into account in order to perform his role in the most efficient way possible. It was impressive, actually.

Sorry, that was off-topic, but I just wanted to show how elaborate and patient scammers can be.


----------



## qubit (Dec 23, 2010)

Mussels said:


> i gave money to a *hot* busker chick earlier today playing a violin.
> 
> why? she proved she could deliver the goods (and she WAS good, thought it was a recording at first)



Nah, it's cuz she wuz hot, obviously.


----------



## remixedcat (Dec 23, 2010)

any luck contacting any of the companies???


----------



## Thatguy (Dec 23, 2010)

crazyeyesreaper said:


> To be honest, Thatguy, qubit has made more decent contributions to the forum in general than I've seen you give, so I could use the same asshole logic as well.



   Rightfully so, and none of us peons on these here message boards really mean anything to a large company or a highly focused developer. We aren't individually as important as we'd often like to believe ourselves to be. 




crazyeyesreaper said:


> Point is, this tech isn't gonna go anywhere. Hell, awesome, you can render a 320x240 image with a crapload of geometry; that's fantastic. It's also Super NES resolution, so I'm hardly impressed.



  Well, I talked about a method for creating full 1920x1080 images in a storage format that would net a 300 MB file for each image, and how it was easily feasible. The issue becomes scaling, and I don't disagree with you on this tenet of your argument. 




crazyeyesreaper said:


> I need actual demos that show what they're trying to do in realtime. Show me video footage of something working at 1280x720 at the very least, because on a 4 GHz quad core, with a scene of around 2 million polygons, the kind of rendering method Mr Dell is somehow optimizing to run in real time takes my rig 9-15 mins to render per frame. So I find it to be bollocks till we get something... Hell, even OnLive with their game service crap had demos showing what it was capable of back when they first started pushing their ideas. Seems UDT has some low-res shots of a creature. It doesn't look good; it doesn't even remotely seem like it's UDT. It looks to be nothing more than a fairly low-resolution render or still frame that proves nothing of their technology. I could render something and claim I did it with UDT; doesn't mean I did.




   I myself would like some demos, but we all must admit that there are often better ways to skin a cat. The industry has been headed in one direction for a long time, based on older hardware like the 500 MB hard drives that were available at the dawn of PC gaming, when DOS was a big deal. 

  So we do have a lot of legacy thinking. 

   If you look below the surface of what Mr Dell is indicating (at least what I think he is indicating), you really don't need to render polygons if you have stored the information in the image. If each pixel on the screen has values, there is no real need for calculations to determine where they might be; all the image data could be stored in the image itself. This would dramatically lower the render time. The issues become reduction, and increasing detail based on camera zoom, lighting models, etc. 

   So it's not exactly free, but it would reduce a great deal of overhead. It could increase the developer's work in some ways and decrease it in others. 

   I was thinking about this the past few days. So, procedurally, he might have something, but it's not going to be the holy grail.
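The idea being described (per-pixel values looked up from stored point data rather than computed from polygon geometry) can be sketched in a few lines. To be clear, this is only a toy illustration of the concept under discussion, not Euclideon's actual, unpublished algorithm; every name in it is made up:

```python
# Toy sketch of "one stored point per screen pixel": project every stored
# point onto a small screen and keep only the nearest point per pixel.
# Per-frame work then scales with the pixels kept, not with any polygon
# count, which is the trade the posts above are describing.

def render_points(points, width, height, focal=50.0):
    """points: iterable of (x, y, z, color) in camera space, z > 0."""
    depth = {}   # (px, py) -> nearest z seen so far
    frame = {}   # (px, py) -> color of that nearest point
    for x, y, z, color in points:
        if z <= 0:
            continue  # behind the camera
        # Simple pinhole projection into pixel coordinates.
        px = int(width / 2 + focal * x / z)
        py = int(height / 2 + focal * y / z)
        if 0 <= px < width and 0 <= py < height:
            if z < depth.get((px, py), float("inf")):
                depth[(px, py)] = z
                frame[(px, py)] = color
    return frame

# Two points project to the same pixel; only the nearer (red) one survives.
pts = [(0.0, 0.0, 2.0, "red"), (0.0, 0.0, 4.0, "blue")]
frame = render_points(pts, width=64, height=64)
print(frame[(32, 32)])  # -> red
```

The catch, as the posts note, is everything this sketch ignores: iterating over every stored point each frame is exactly what a real system would need a clever search structure to avoid.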


----------



## Thatguy (Dec 23, 2010)

Mussels said:


> Qubit has been on this forum for spot on 3 years now, so actually... his name is pretty well known around here.
> 
> As opposed to you, who has only been here a little over a month... of course you havent seen his name. you've barely had time to scratch the surface of the forums.



   Yes, because he's as famous as Bill Gates or Steve Jobs. Being famous on a forum does not grant one credibility elsewhere, nor does it guarantee name recognition.


----------



## qubit (Dec 23, 2010)

Thatguy said:


> Yes, because he's as famous as Bill Gates or Steve Jobs. Being famous on a forum does not grant one credibility elsewhere, nor does it guarantee name recognition.



You miss the point again, my friend. You made a big deal about who I am on this forum. Well, Mussels answered you. Quite nicely. And he's a mod...

I don't suppose you bothered to read my previous posts in this thread on this outfit like I suggested, did you? If you read them and try thinking _critically and clearly_ for a moment, you'll see that what I'm saying is right. I'm not the only one saying it, either. Remember, it's not me or others like me that are your enemy, it's the bloody conman we're outing! This is a classic case of shooting the messenger. 

Once again, out of all the many telltale signs, the smoking gun is the claim to render *infinite detail* in real time. Absolutely impossible and total bollocks, as anyone with a little logical thinking can see. Remember, there was no qualifier with it, so it's not up to us to come up with what we think he meant. Dell quite literally made an impossible claim, and it's sat unchanged on his website for ages. Let him fall by that.

Never mind, people get sucked in all the time and are usually in denial when it's pointed out to them, so this is nothing new. It won't be me or my skeptical friends, that's for sure.


----------



## Thatguy (Dec 23, 2010)

You missed my point: just because you're a big deal on a web forum doesn't mean a damn thing to anyone anywhere else. You're just a big thing on THIS web forum. Big whoopdee do da. 

   I am of no consequence either. 

   So basically, everyone needs to get over themselves and realize that someone who has dedicated enough time to design a brand new piece of technology (regardless of the merits) is likely too focused to care what some forum hacks have to say about it. 

   Follow me now? 


  As to your points about rendering, I didn't quite read that claim the same way. He isn't saying you can render unlimited pixels; that's just silly. He is saying you can store unlimited amounts of data (which is BS too). The basic premise is that you store massively detailed models and use a search engine (likely based on camera position) to choose which pixels to display. 

   The whole marketing slant is piss poor and invokes responses like yours. 

   I already detailed how you could render (display) point cloud data with low overhead; doing that in a game scenario is a whole different ball game.  

  This is all relatively speaking, and I outlined a 3D bitmap format to allow for these types of constructs, where a 1920x1080 image would be 300 MB. A bit too big to be practical, though with newer drive tech coming it's entirely possible. 

  There are a lot of issues with the idea, but any new idea is worth investigating. Sometimes there are better ways to skin a cat. 

   I am pretty much done with this discussion. 




qubit said:


> You miss the point again, my friend. You made a big deal about who I am on this forum. Well, Mussels answered you. Quite nicely. And he's a mod...
> 
> I don't suppose you bothered to read my previous posts in this thread on this outfit like I suggested, did you? If you read them and try thinking _critically and clearly_ for a moment, you'll see that what I'm saying is right. I'm not the only one saying it, either. Remember, it's not me or others like me that are your enemy, it's the bloody conman we're outing! This is a classic case of shooting the messenger.
> 
> ...


----------



## pantherx12 (Dec 23, 2010)

In regards to storage, don't forget you can get graphics like this from 96kb of data http://www.gamerevolution.com/download/pc/krieger


Certain things could be rendered at the start during load up and simply be deleted when the game stops so storage problems may not be an issue.



I understood what thatguy meant straight away by his "who are you" comment, he was talking from Dell's point of view.

So don't be upset chaps ^_^


----------



## Benetanegia (Dec 23, 2010)

pantherx12 said:


> In regards to storage, don't forget you can get graphics like this from 96kb of data http://www.gamerevolution.com/download/pc/krieger
> 
> 
> Certain things could be rendered at the start during load up and simply be deleted when the game stops so storage problems may not be an issue.
> ...



Procedurally generated worlds don't work for games today. Good luck trying to mimic a Crysis or BC2 map with procedurally created data and fractals. lol

For a real example of what we are talking about read this:

http://artis.imag.fr/Publications/2009/CNLE09/CNLE09.pdf

and watch this:

http://www.youtube.com/watch?v=HScYuRhgEJw&feature=player_embedded

In this video the only example that is using actual data instead of procedurally creating it or instancing using a fractal is the last example, the human body.

Highlights of that example:

- Running on a GTX 280
- 512x512 resolution
- 10-20 fps
- Dataset > 32 GB (and it's just a human body)


----------



## Phxprovost (Dec 23, 2010)

You guys are still rambling about this? Personally I think this tech is nothing but a future patent troll. Think about it: create an algorithm that would be required in a system like this, a system that just happens to be the next step in the logical path of progression for graphics. Create a website and a few YouTube videos with high view counts for proof of concept, sit back and wait for the industry to work its way to your product, wait a few years, then sue everyone... Or at least that's how I currently view this.


----------



## pantherx12 (Dec 23, 2010)

Benetanegia said:


> Procedurally generated worlds don't work for games today. Good luck trying to mimic a Crysis or BC2 map with procedurally created data and fractals. lol
> 
> For a real example of what we are talking about read this:
> 
> ...



 I wouldn't expect an entire world to be procedurally generated, just all the basic stuff for the point data to be built around, a skeleton if you will. 
Just to save some space.


----------



## Benetanegia (Dec 23, 2010)

After reading every article and forum post that appears when typing "Unlimited Detail" into Google, I have to say that UDT is nothing but another sparse voxel octree implementation, confirmed, except that instead of the typical ray-casting method for finding the voxel, it uses that search or sorting algorithm. I highly doubt that represents any incredible performance improvement over existing, proven SVO engines, considering that all the others are constrained not by computing power but by memory limitations. I said it in one of my first posts, but I'd rather take and promote Atomontage; it's far from being ready for prime time, but it is far clearer about its strengths and weaknesses:

http://www.youtube.com/watch?v=4AYBm-9cBqs&feature=related
http://www.youtube.com/watch?v=tnboAnQjMKE&fmt=18
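For reference, the core lookup that any sparse voxel octree engine performs (however it is driven, by ray-casting or by a search step) can be sketched roughly like this. It's a generic illustration, not code from UDT or Atomontage:

```python
# Minimal sparse voxel octree sketch: descend from the root to the leaf
# containing a query point. Empty octants store nothing, which is where
# the "sparse" memory savings come from.

class Node:
    def __init__(self):
        self.children = {}   # octant index (0-7) -> Node
        self.color = None    # set on leaves only

def octant(x, y, z, cx, cy, cz):
    """Which of the 8 children contains (x, y, z), relative to the center?"""
    return (x >= cx) | ((y >= cy) << 1) | ((z >= cz) << 2)

def insert(root, x, y, z, color, size=1.0, depth=5):
    node, cx, cy, cz, half = root, size / 2, size / 2, size / 2, size / 4
    for _ in range(depth):
        i = octant(x, y, z, cx, cy, cz)
        node = node.children.setdefault(i, Node())
        cx += half if x >= cx else -half
        cy += half if y >= cy else -half
        cz += half if z >= cz else -half
        half /= 2
    node.color = color

def lookup(root, x, y, z, size=1.0, depth=5):
    node, cx, cy, cz, half = root, size / 2, size / 2, size / 2, size / 4
    for _ in range(depth):
        i = octant(x, y, z, cx, cy, cz)
        if i not in node.children:
            return None          # empty space: nothing stored at all
        node = node.children[i]
        cx += half if x >= cx else -half
        cy += half if y >= cy else -half
        cz += half if z >= cz else -half
        half /= 2
    return node.color

root = Node()
insert(root, 0.1, 0.2, 0.3, "brown")
print(lookup(root, 0.1, 0.2, 0.3))  # -> brown
print(lookup(root, 0.9, 0.9, 0.9))  # -> None
```

Each level of descent halves the cell size, so `depth` levels give a 2^depth-per-axis grid while allocating nodes only where there is actual content.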



pantherx12 said:


> I wouldn't expect an entire world to be procedrally generated, just all the basic stuff for the point data to be built around, a skeleton if you will.
> Just to save some space.



I'm not sure I follow you. 

The idea is to have higher detail, plus more control over the details, than with polygons+tessellation+displacement, at least if you are going to turn the graphics world upside down. In order to achieve that, every voxel has to be unique and represent the world as the artist wanted it to be. If you are talking about creating the point/voxel data out of a polygonal mesh plus procedural modifiers, forget about it: it would be extremely slow (not possible with current hardware) and the resulting data would be huge nonetheless. At some point, on the HDD or in main memory or cache, the models have to be decompressed, and they just wouldn't fit on current hardware. You can't even stream it.

The best compression ratio for sparse voxel octree data structures has been achieved (claimed) by Jon Olick from id, and that means an average of 1.5 bits per voxel IIRC, color included. Quite impressive if you think about it, but... Now look at this video of a rabbit created out of voxels:

http://www.youtube.com/user/Akvaknarre

As you can see it's not extremely detailed, but it's made out of a 512x512x360 voxel grid. That's ~94 million voxels, so with Olick's compression that's ~142 million bits ≈ 17 MB, just for the geometric data of a small animal that doesn't even look very good. You would most probably want a 1024x1024x700 grid (~137 MB), and to make it really worth it over poly+tessellation+displacement you'd probably need 2048x (~1.1 GB). For a rabbit. The same rabbit would look 1000 times better with a 100 KB mesh and a 2 MB displacement texture.

For a good world map, I'd say the maximum relative size of a voxel would need to be about 1 mm in the real world (imagine a world made out of 1 mm boxes), so just imagine what the voxel count would be for a 10m x 10m x 3m room: 10,000 x 10,000 x 3,000 = 3x10^11 voxels. That's roughly 56 GB worth of data, compressed.
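A quick script to redo the back-of-envelope arithmetic above, assuming Olick's claimed ~1.5 bits per voxel; the figures are rough estimates from that single claimed ratio, not measurements:

```python
# Back-of-envelope voxel storage estimates, assuming a sparse voxel octree
# compressed to ~1.5 bits per voxel (color included), as claimed for
# Jon Olick's id implementation.

BITS_PER_VOXEL = 1.5

def compressed_size_mb(nx, ny, nz):
    """Estimated compressed size in (decimal) megabytes for an nx*ny*nz grid."""
    voxels = nx * ny * nz
    return voxels * BITS_PER_VOXEL / 8 / 1e6

print(round(compressed_size_mb(512, 512, 360)))    # rabbit grid: ~18 MB
print(round(compressed_size_mb(1024, 1024, 700)))  # higher-res rabbit: ~138 MB
# A 10 m x 10 m x 3 m room at 1 mm voxels: 10000 * 10000 * 3000 cells.
print(round(compressed_size_mb(10000, 10000, 3000) / 1000))  # ~56 GB
```

And that room estimate is for a fully dense grid; sparsity would cut it, but a whole game world of unique 1 mm detail still lands far beyond 2010-era storage.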


----------



## pantherx12 (Dec 23, 2010)

I can't really explain what I mean other than how I said it.

It's a start point to render around is all, like how it's quicker for a human to draw something if they already have a basic shape already there, a few lines here and there so you know where the eyes should be and such like.

Honestly I can't explain it better than that with text sorry : [


----------



## qubit (Dec 24, 2010)

Phxprovost said:


> You guys are still rambling about this? Personally I think this tech is nothing but a future patent troll. Think about it: create an algorithm that would be required in a system like this, a system that just happens to be the next step in the logical path of progression for graphics. Create a website and a few YouTube videos with high view counts for proof of concept, sit back and wait for the industry to work its way to your product, wait a few years, then sue everyone... Or at least that's how I currently view this.




Good point. You never know, that could be part of the game plan. Time will tell.


----------



## inferKNOX (Dec 24, 2010)

I think you guys (especially Mussels) ought to try to coax Mr. Dell over to the forum for yourselves and see if you have success (and let us know what he says). After all, I can't keep hounding the man. Here's his official email: info@euclideon.com
Let's just not turn on each other because of our distinct viewpoints.
Most other sites are filled with more or less the same questions about UD as are being asked here, so I'm not sure he'll come, but it's worth a try and I too would like some public answers.

Besides that, I found some demonstration that I can't look at unfortunately because of being under an ultra-low data cap. Others were impressed by it, so have a look and post some screenies for me if you can please: http://www.mediafire.com/file/0k2rm0koqjx/compare.wmv



Benetanegia said:


> Why do you think we are so skeptic? I already mentioned that Carmack is working on something like this for id Tech 6, and he considered to include some related stuff like data compression algorithms in id Tech 5. I think MegaTexture has something to do with it. Here from the wiki:
> 
> 
> 
> ...



There's so much of what you said that I don't understand, it's not even funny, lol!  Does every hobbyist techie know this sorta stuff, or are you guys CG pros or something?! I think I spend a fair amount of time on tech stuff (my wife would argue it's too much, actually), but I'm baffled by how many highly technical concepts are discussed like they're everyday things by you guys!


----------



## qubit (Dec 24, 2010)

inferKNOX said:


> I think you guys (especially Mussels) ought to try to coax Mr. Dell over to the forum for yourselves and see if you have success (and let us know what he says). After all, I can't keep hounding the man. Here's his official email: info@euclideon.com
> Let's just not turn on each other because of our distinct viewpoints.
> Most other sites are filled with more or less the same questions about UD as are being asked here, so I'm not sure he'll come, but it's worth a try and I too would like some public answers.



Us skeptics would like nothing better than to see Mr Dell on here, demonstrating his new graphics tech. And I for one would be very nice to him, if he showed an honest demo. However, I explained previously why he won't come. And since you invited him over a few days ago we've heard nothing, haven't we? Says it all, unfortunately.

Also, I can tell you that I'm not turning on anyone here, as you can see. I hate conmen and I'm trying to help people avoid falling into the trap as much as possible. Unfortunately, people are often in denial and shoot the messengers, one of which is me. I've worked quite hard to avoid a flame war here when I have been attacked and I'm glad to have succeeded. 



inferKNOX said:


> Besides that, I found some demonstration that I can't look at unfortunately because of being under an ultra-low data cap. Others were impressed by it, so have a look and post some screenies for me if you can please: http://www.mediafire.com/file/0k2rm0koqjx/compare.wmv
> 
> 
> 
> There's so much of what you said that I don't understand, it's not even funny, lol!  Does every hobbyist techie know this sorta stuff or are you guys CG pros or something?! I think I spend a fair amount of time on tech stuff, my wife would argue it's too much actually, but I'm baffled by how many highly technical concepts are discussed like they're everyday things by you guys!



I've looked at some of that video and I've noticed the following:

- The examples do indeed look very detailed and impressive, with no obvious polygons or stuck-on graphics when zoomed up close. Mr Dell was demoing a forest scene
- I got the impression that it's the same thing we're currently seeing with DX11 tessellation
- The resolution was probably higher than the big thumbnails on the website, but the video resolution doesn't allow me to say accurately
- The framerate must have been around 5-10fps - not quite realtime, like he claims. Ok for a prerelease demo, I guess
- The regular game graphics he compared it to ran much more smoothly
- He used the words "unlimited detail" a lot, without really specifying how much detail is actually present
- _To make a fair assessment of what is actually being demonstrated, someone who is much more expert than me, needs to take a close look at it. And by that, I mean look at his computer hardware and the software running on it_
- As my attackers (not you, infer!) can now see, I am actually objective about this and I don't shout "Fraud!" for nothing. Here's something that looks like it's got legs, so I'm saying ok, let's take a closer look. Can't be fairer than that

---------------------

Oh yeah, I know all this stuff and more, dude. I tell ya, I'm an expert in _everything!_ 

Nah, seriously, that's not possible, so don't be hard on yourself. If you look at the whole subject of "computers", it's absolutely vast and no one can be knowledgeable in all of it. Take networking, for example. The whole subject would probably fill several volumes of 1000 pages each. Then you've got a zillion other computer subjects, and some of them, such as programming, are even bigger.

My specialty is troubleshooting and general help, especially for hardware faults and making things work together when they're being difficult about it. I've got a natural gift for sussing tech stuff out, which has been evident since I was three. So it's likely that you can throw any technical problem at me and I can fix it. If it's an area that I don't know much about, say Artificial Intelligence, then it would take me a long time to read up on it and ponder the problem, but I'd get there in the end. Whether I'd actually want to do all that hard work is another matter. 

Incidentally, I use this ability all day every day in my IT support job and I'm really good at it.  Makes the work very satisfying.


----------



## crazyeyesreaper (Dec 24, 2010)

I know I'm no CG pro, but I did waste $85,000 learning about computer graphics and originally went for my Bachelor's degree in that field, but didn't make it, lol. That said, I've a great deal of time and knowledge invested in how polygons, NURBS, sub-D surfaces, render methods etc. get the best image I can with what I have at my disposal. Either way, I guess as you said, supposedly in another 2 years or so we'll have an answer. I say lock the thread for 16 months or so, then come back to it, roflol


----------



## inferKNOX (Dec 24, 2010)

qubit said:


> Nah, seriously, that's not possible, so don't be hard on yourself. If you look at the whole subject of "computers", it's absolutely vast and no one can be knowledgeable in all of it. Take networking, for example. The whole subject would probably fill several volumes of 1000 pages each. Then you've got a zillion other computer subjects, and some of them, such as programming, are even bigger.
> 
> My specialty is troubleshooting and general help, especially for hardware faults and making things work together when they're being difficult about it. I've got a natural gift for sussing tech stuff out, which has been evident since I was three. So it's likely that you can throw any technical problem at me and I can fix it. If it's an area that I don't know much about, say Artificial Intelligence, then it would take me a long time to read up on it and ponder the problem, but I'd get there in the end. Whether I'd actually want to do all that hard work is another matter.
> 
> Incidentally, I use this ability all day every day in my IT support job and I'm really good at it.  Makes the work very satisfying.



I'm exactly the same on all counts including my job and MacGyver-like prowess with fixing things. Let's just say I'm considered the all-round handy-man by just about everyone I know.

What I mean though is, as you said, any one topic is quite huge. I'm diving deeper into networking ATM, but I mean that is a whole subject of its own, as is graphics tech, design, audio tech, etc. I found long ago that you can only go so deep into any one subject without specialising, because of time constraints, but I keep coming across people who seem to have delved so deep into some subjects that it makes me wonder how they afford the time!

EDIT: by the way, he did answer my email right away. He said (among other things) that he doesn't want to get into talking about it too soon again, otherwise it'll seem like a tech that is never being released, because they're quite some way from completion. I understand, but don't quite agree with that approach; that's why I'm saying you guys ought to call him too.


----------



## douglatins (Dec 24, 2010)

I just found this! i am amazed, but would one kind soul sum um this thread to me!? Hehe
Thanks!


----------



## inferKNOX (Dec 24, 2010)

Plenty of argument back and forth about whether this is a scam/hoax or not, and a petition that I started to try to get AMD to work with these guys to further the tech (check my sig).

EDIT: love the avie douglatins! ROTFL!


----------



## remixedcat (Dec 26, 2010)

Still... any luck contacting any companies about this? I've had none.


----------



## qubit (Dec 29, 2010)

inferKNOX said:


> I'm exactly the same on all counts including my job and MacGyver-like prowess with fixing things. Let's just say I'm considered the all-round handy-man by just about everyone I know.
> 
> What I mean though is, as you said, any one topic is quite huge. I'm diving deeper into networking ATM, but I mean that is a whole subject of its own, as is graphics tech, design, audio tech, etc. I found long ago that you can only go so deep into any one subject without specialising, because of time constraints, but I keep coming across people who seem to have delved so deep into some subjects that it makes me wonder how they afford the time!



Good luck with the networking. It's a subject that interests me too. 



inferKNOX said:


> EDIT: by the way, he did answer my email right away. He said (among other things) that he doesn't want to get into talking about it too soon again, otherwise it'll seem like a tech that is never being released, because they're quite some way from completion. I understand, but don't quite agree with that approach; that's why I'm saying you guys ought to call him too.



Over two years, still not quite ready and major players like AMD & nvidia aren't jumping on them about graphics tech giving a 1000-fold improvement? Really? Yeah, let's wait for him to show us the proof.

I'm sorry, but I must keep the skeptic hat on.


----------



## hellrazor (Aug 1, 2011)

Caught it while browsing reddit.


----------



## cheesy999 (Aug 1, 2011)

hellrazor said:


> Caught it while browsing reddit.



awesome, so there's a very small chance we can see this for crysis 3


----------



## qubit (Aug 1, 2011)

hellrazor said:


> Caught it while browsing reddit.



And one unconvincing video later... 

Myself and a few others have already debunked this in this thread. The whole root of the problem is the way they're shouting "unlimited" all over the place. Unlimited = infinite, and for that you need infinite processing power, infinite data, infinite electrical power, etc. etc. An absolute impossibility.

Notice how after a whole further year of development, they can only show one cheesy low res 480p video with a dodgy sounding narrator and grandiose music? Sounds like one big pisstake, don't it?

This is bullshit: they'll never release it. It's just a ploy to con more investment money. Remember this, I said it here. 

EDIT: All the effects shown here can be done *now* with tessellation in DX11. Just run the Unigine benchmark if you don't believe me.


----------



## TheoneandonlyMrK (Aug 1, 2011)

coming along nicely, cheers for the update vid cheesy. shadows are shit


----------



## erocker (Aug 1, 2011)

Meh, until I hear from a 3rd party about this, it's all still a bunch of BS.


----------



## remixedcat (Aug 1, 2011)

OMG I can't wait for this!!!!!


----------



## Steevo (Aug 1, 2011)

pantherx12 said:


> I can't really explain what I mean other than how I said it.
> 
> It's a start point to render around is all, like how it's quicker for a human to draw something if they already have a basic shape already there, a few lines here and there so you know where the eyes should be and such like.
> 
> Honestly I can't explain it better than that with text sorry : [



That is generally known as bullshit when it is something you can't explain. Or a flying spaghetti monster. Or many other such things. 


All computers understand is math. For you and me, drawing a point somewhere based on a feeling or thought works, but for a computer the idea is useless and would create more overhead, by requiring a bunch of underlying simulations to take place to generate the hard math needed to render what you want correctly.


----------



## streetfighter 2 (Aug 1, 2011)

erocker said:


> Meh, until I hear from a 3rd party about this, it's all still a bunch of BS.


Why don't we just ask Mussels?  I'm pretty sure he's from the same penal colony as the devs.


----------



## qubit (Aug 1, 2011)

Oh and notice how after all this _incredibly_ lengthy development time, it's still _just_ out of reach? Another classic hallmark of a scam.


----------



## xenocide (Aug 1, 2011)

qubit said:


> Oh and notice how after all this _incredibly_ lengthy development time, it's still _just_ out of reach? Another classic hallmark of a scam.



I remember seeing it the first time around and wondering how some small startup was able to create such fanciful technology and just randomly appear at a big trade show.  If you just apply the "It's a Scam" mindset to everything about this, it makes a lot more sense lol.


----------



## qubit (Aug 1, 2011)

xenocide said:


> I remember seeing it the first time around and wondering how some small startup was able to create such fanciful technology and just randomly appear at a big trade show.  If you just apply the "It's a Scam" mindset to everything about this, it makes a lot more sense lol.



Yeah, doesn't it just!  This is a big thread to read through, but if you search for my posts, I explain in several places the various ways this is obviously a scam. The whole lot hinges on that ridiculous "unlimited" claim.

To find the posts, click the Games link above this thread and then find the thread in the section. Then click on the number of posts next to it. You'll see a list of all the posters for that thread. Click on my username and you'll see all my posts listed. 

Heck, here's the link: http://www.techpowerup.com/forums/search.php?searchid=14560357


----------



## LAN_deRf_HA (Aug 1, 2011)

qubit said:


> And one unconvincing video later...
> 
> A few others and I have already debunked this in this thread. The root of the problem is the way they're shouting "unlimited" all over the place. Unlimited = infinity, and for that you need infinite processing power, infinite data, infinite electrical power, etc. An absolute impossibility.
> 
> ...



It's available in 1080p now.


----------



## qubit (Aug 1, 2011)

LAN_deRf_HA said:


> It's available in 1080p now.



Link?


----------



## LAN_deRf_HA (Aug 1, 2011)

http://www.youtube.com/watch?v=00gAbgBu8R4&feature=mfu_in_order&list=UL


----------



## douglatins (Aug 1, 2011)

This is taking too long to get implemented. Imagine Skyrim with that; it would be mindfacking.


----------



## AphexDreamer (Aug 1, 2011)

douglatins said:


> This is taking too long to get implemented. Imagine Skyrim with that; it would be mindfacking.



Skyrim will be mindfacking no matter what.


----------



## qubit (Aug 1, 2011)

LAN_deRf_HA said:


> http://www.youtube.com/watch?v=00gAbgBu8R4&feature=mfu_in_order&list=UL



I noticed the following:

- The UD-rendered island doesn't appear to be in 1080p resolution. Try pausing the video to see what I mean
- There's an odd lack of contrast to it, a bit like you might get looking through light fog; I would expect it to look pin sharp. Fudging like that can be good at hiding fakery, such as low-res rendering
- If they're rendering down to a grain of sand, then why didn't they zoom in to show it?


----------



## RejZoR (Aug 1, 2011)

Why does everything they show look static, as if geometry instancing is being used for everything? It just looks cheap and fake. All the palm trees, despite the "unlimited" detail, look the same. All the grass looks the same. All the rocks look the same. All the architecture looks identical. And I mean exact-copy identical.


----------



## crazyeyesreaper (Aug 2, 2011)

Because that's exactly what they're doing to get 20 fps in software rendering mode, lolz. It's just geometry instancing to save on resources, etc. Still the same old shit; nothing's changed.
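The saving instancing buys is easy to put numbers on. A quick sketch with made-up but plausible figures (the 24 MB per-model size and the 64-byte 4x4 transform are my assumptions, not Euclideon's numbers):

```python
def unique_mb(n_models: int, model_mb: float = 24.0) -> float:
    """Memory if every placement were a distinct model (assumed ~24 MB each)."""
    return n_models * model_mb

def instanced_mb(n_placements: int, model_mb: float = 24.0,
                 transform_bytes: int = 64) -> float:
    """Memory for one shared model plus a 4x4 float transform per placement."""
    return model_mb + n_placements * transform_bytes / 1024 ** 2

print(unique_mb(10_000))     # 240000.0 MB if every tree/rock were unique
print(instanced_mb(10_000))  # ~24.6 MB with one model instanced 10,000 times
```

Which is exactly why every palm tree in the demo looks identical: repeating one model is four orders of magnitude cheaper than storing ten thousand unique ones.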


----------



## qubit (Aug 2, 2011)

RejZoR said:


> Why does everything they show look static, as if geometry instancing is being used for everything? It just looks cheap and fake. All the palm trees, despite the "unlimited" detail, look the same. All the grass looks the same. All the rocks look the same. All the architecture looks identical. And I mean exact-copy identical.



Yes, funny that. It all looks so samey and odd.


----------



## Benetanegia (Aug 2, 2011)

They depend on instancing because every distinct model requires several dozen MB of storage. This is, and has always been, the biggest drawback of voxel rendering. You might be able to "find" the correct voxel in a blink, and you might only need to find 1920x1080 voxels to make the picture, but the billions of voxels required to represent a voxel-based scene decently still have to be stored somewhere.

So... they cheat. They put the same models over and over and generate some things procedurally, which is not a bad thing to do on its own, but it isn't related to their tech at all; it's just bypassing the real problem with voxel rendering. Using procedural methods, you could generate the same detail and randomness with polys, and definitely so when applying tessellation. What you can't do that way is create the complex, unique and varied scenery that we are used to seeing in modern games.

Quite literally everyone in the gaming industry is researching or has researched voxel rendering, and literally all who did have abandoned it or delayed it to 2016+. Unlimited Detail is just a hoax or a scam. They are doing voxel rendering and doing it pretty fast according to them... well, I'm not even going to try to refute that claim; there are over 50 other voxel renderers out there capable of similar performance (though most use the GPU), already with proper real-time demos, proper papers and even patents, none of which UD has shown. It doesn't matter: none of the others are suitable for real time today, and neither is UD.
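The "several dozen MB per model" figure is easy to sanity-check with a back-of-the-envelope sketch (the bounding-cube-shell approximation and 4 bytes per voxel are illustrative assumptions, not measured numbers):

```python
def shell_voxels(resolution: int) -> int:
    """Rough surface-voxel count for a solid model voxelised on a resolution^3
    grid: the shell of the bounding cube (6 faces) is a fair order-of-magnitude
    proxy, since only surface voxels need to be stored."""
    return 6 * resolution ** 2

def model_storage_mb(resolution: int, bytes_per_voxel: int = 4) -> float:
    """Storage in MB, assuming a few bytes of colour/normal data per surface voxel."""
    return shell_voxels(resolution) * bytes_per_voxel / 1024 ** 2

print(model_storage_mb(1024))  # 24.0 MB - "several dozen MB" for one unique model
```

At 1024^3 resolution that lands right in the "several dozen MB" range, and it grows quadratically with resolution, which is why a whole world of unique models is out of reach.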


----------



## crazyeyesreaper (Aug 2, 2011)

what he said ^


----------



## LAN_deRf_HA (Aug 2, 2011)

Couldn't you just do procedural world generation like Minecraft, only much, much more advanced? Have any manual changes you make to the generated world stored as alterations to the original "seed", so you never increase the file size.
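That seed-plus-edits idea can be sketched in a few lines: the world is a pure function of (seed, position), and player changes live in a sparse overlay whose size grows with the number of edits, not with the world. (All names here are hypothetical; this is not any shipping engine's API.)

```python
import random

def base_height(seed: int, x: int, z: int) -> int:
    """Deterministic terrain height derived purely from the world seed:
    the same (seed, x, z) always regenerates the same value."""
    return random.Random(hash((seed, x, z))).randint(0, 255)

# Sparse overlay of manual edits - only changed cells ever need saving to disk.
edits: dict = {}

def height_at(seed: int, x: int, z: int) -> int:
    """Overlay wins where the player edited; otherwise regenerate from the seed."""
    return edits.get((x, z), base_height(seed, x, z))

edits[(10, 20)] = 42                        # the player flattened this spot
print(height_at(1337, 10, 20))              # 42 - the stored edit
print(height_at(1337, 10, 21) == base_height(1337, 10, 21))  # True - regenerated
```

This is essentially how Minecraft-style games already keep save files small relative to their worlds; it sidesteps storage for generated content, but not for hand-authored unique scenery.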


----------



## xenocide (Aug 2, 2011)

http://www.reddit.com/r/gaming/comments/j66px/unlimited_detail_circa_2003_not_much_has_changed/

Related.


----------



## RejZoR (Aug 2, 2011)

Eventually, at some point, we will have graphics objects made out of atom-like elements (imagine Minecraft with super tiny blocks instead of huge cubes). But this won't happen in 10 or 20 years; it might in 50 years' time or more, however. Polygons will remain the preferred technique for 3D games, and that's the end of it. There will be new shading techniques and new improvements to the rendering itself, but this won't really change.


----------



## TIGR (Aug 2, 2011)

RejZoR said:


> Eventually, at some point, we will have graphics objects made out of atom-like elements (imagine Minecraft with super tiny blocks instead of huge cubes). But this won't happen in 10 or 20 years; it might in 50 years' time or more, however. Polygons will remain the preferred technique for 3D games, and that's the end of it. There will be new shading techniques and new improvements to the rendering itself, but this won't really change.



Would you be willing to share your take on this? Does that change what you anticipate occurring in the next twenty years at all? Maybe you were already familiar with it and the work of Kurzweil, etc.


----------



## hellrazor (Aug 2, 2011)

RejZoR said:


> Eventually, at some point, we will have graphics objects made out of atom-like elements (imagine Minecraft with super tiny blocks instead of huge cubes). But this won't happen in 10 or 20 years; it might in 50 years' time or more, however. Polygons will remain the preferred technique for 3D games, and that's the end of it. There will be new shading techniques and new improvements to the rendering itself, but this won't really change.



It seems you are simply deluded about how fast computers change.


----------



## Benetanegia (Aug 2, 2011)

hellrazor said:


> It seems you are simply deluded about how fast computers change.



Oh no, for *static scenery* (and disregarding the storage problem) it will be doable in 5 years easily. Don't let the state of current voxel rendering confuse you. The good thing about voxel rendering, specifically sparse voxel octree implementations, is that the gains are exponential, because they do not work with Cartesian coordinates; data is stored hierarchically. For each of the 3 dimensions it's 2^n, where n is the iteration depth, which depends on the available performance. So if we assume current hardware has a performance factor of 10 (a kinda arbitrarily chosen number*) and future generations have a factor twice as high (that is, 20, because CPU, GPU and memory bandwidth are twice as fast), we would be talking about massively higher detail being possible:

2^10 = 1024 possible voxels/"atoms"/dots per dimension, i.e. if each voxel represents 1 mm in real life, we could only represent a scene of just 1 m^3.

2^20 = 1,048,576 voxels per dimension -> about 1 km^3, more than you probably need visible at any given time.

*Realistically, most voxel implementations nowadays are capable of 12 to 14 iterations at relatively low resolutions, so at 1 mm wide voxels they can represent scenes that are about 16 m wide in every dimension. Of course they use voxels that are bigger than that, so they represent accordingly bigger, albeit less "detailed", scenes.

So technically and performance-wise they can do something interesting now, and in just a few generations they will be able to do 1000 times more in that regard. The problem of storage still persists, though, and the technology able to store the terabytes of information required for a game world (even with current detail) does not exist yet. Holographic disks, hmm, maybe.

Summarizing: when it comes to the rendering side of things, very small voxels will be possible very soon, but storing all that info is impossible, so it doesn't really matter whether such detail can be represented on a display, because we will not be able to create and store such a detailed world. Current methods create the world with polys and then convert them to voxels, which is completely redundant and stupid now that we have tessellation and thus smaller-than-pixel triangles are possible, or will be possible in just one generation. And triangles are easy to animate, while voxels are a nightmare.
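The 2^n arithmetic above can be checked directly; this helper just restates it (the 1 mm voxel size is the post's own example assumption):

```python
def scene_extent_m(octree_depth: int, voxel_size_mm: float = 1.0) -> float:
    """Side length in metres of the cubic scene an octree of this depth covers,
    at 2**depth voxels per dimension."""
    return (2 ** octree_depth) * voxel_size_mm / 1000.0

print(scene_extent_m(10))  # 1.024    -> roughly the 1 m^3 scene
print(scene_extent_m(14))  # 16.384   -> the "12-14 iterations, ~16 m" case
print(scene_extent_m(20))  # 1048.576 -> roughly the 1 km^3 case
```

Each extra level of depth doubles the extent per dimension (8x the addressable voxels), which is the exponential gain the post describes; the catch, as it notes, is that storage grows along with it.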


----------



## remixedcat (Aug 2, 2011)

Crysis 6 system requirements:
12.4Ghz AMD Infinion x82 82 core processor or Intel i24
5TB RAM
4PB SSD
Nvidia Geforce X2X980GTX2 with 1TB DDR10 memory or AMD Radeon 26980
Microsoft GXP Operating system Service pack 8


----------



## LAN_deRf_HA (Aug 2, 2011)

remixedcat said:


> Crysis 6 system requirements:
> 12.4Ghz AMD Infinion x82 82 core processor or Intel i24
> 5TB RAM
> 4PB SSD
> ...



Pffft. Like we'd ever get a service pack 8.


----------



## bpgt64 (Aug 2, 2011)

I'll give a flying squirrel frack when it shows up in a game.  Until then, it does nothing for me, and it reeks of scam.


----------



## remixedcat (Aug 2, 2011)

I threw that SP8 in there for its and shiggles


----------



## hellrazor (Aug 2, 2011)

By that time (AKA never) Linux will be the dominant desktop OS and we won't have service packs.


----------



## Benetanegia (Aug 2, 2011)

lol it depends. Maybe Microsoft gets infected with the Google virus and starts calling each minor update a new SP version...


----------



## AphexDreamer (Aug 2, 2011)

http://notch.tumblr.com/post/8386977075/its-a-scam

This guy might be trying to scam us, but voxels aren't a scam; the tech just needs work.


----------



## qubit (Aug 2, 2011)

@Benetanegia

Storing all those terabytes of data on a hard disc isn't the only problem. It will also have to be loaded into graphics RAM and system RAM to be useful, so the problem is compounded many times over. Even if you can load just part of it, the amount will still be massive.



bpgt64 said:


> I'll give a flying squirrel frack when it shows up in a game.  Until then, it does nothing for me, and it reeks of scam.



Yup, what he said.



AphexDreamer said:


> http://notch.tumblr.com/post/8386977075/its-a-scam



Nice find.


----------



## Benetanegia (Aug 3, 2011)

qubit said:


> @Benetanegia
> 
> Storing all those terabytes of data on a hard disc isn't the only problem. It will also have to be loaded into graphics RAM and system RAM to be useful, so the problem is compounded many times over. Even if you can load just part of it, the amount will still be massive.



Well, I'm assuming some very intelligent streaming algorithm. Carmack said he was working on one for use with ray tracing (similar to how MegaTexture works) and that he thought he could make it work real soon, and I have a tendency not to contradict The Man.

Anyway, the amount needed for a complete game world is immense, but the amount needed at any given time is not something unearthly. Like I said, there are several engines out there making it work on current hardware, needing between 2 GB and 8 GB of RAM and 1 GB - 4 GB of vRAM. It CAN work in the near future, but the reality is that there's no point unless they manage to do something about animation, storage size, etc.

Also, I'm assuming that storage will increase both in capacity and in speed. I mean, a 100 TB HDD with current (or doubled) read speeds would simply not work, not for this, not for anything. There are several technologies on the horizon that promise much higher speeds, which I hope will land soon. Maybe it's just my impression, but lately there seems to be much more promise in faster memory tech than in higher density; perhaps that's only because the last decade was the absolute opposite.


----------



## RejZoR (Aug 3, 2011)

The ideal would be to have RAM and storage combined: no separate RAM or HDD, just one unified storage medium fast enough to replace RAM. This would eliminate the need for a super fast bus between the storage medium, RAM and processor, and there'd be no need to load data anywhere, as it would already be in memory all the time. But at the moment, no HDD or SSD technology is capable of that. Maybe when they make a hybrid SSD-RAM that achieves the speed and latency of system RAM with the persistent (non-volatile) storage of an SSD. But I don't think we'll see this anytime soon.


----------



## animal007uk (Aug 3, 2011)

Seems the news about this is getting around. I just went to http://www.reghardware.com/2011/08/03/game_graphics_could_be_10000_times_better/ and noticed this story; about to have a read before I go to work.

Apparently Euclideon plans to launch an SDK "some months from now", but will it really be the largest breakthrough since 3D graphics began?

id Software's John Carmack reckons there's no chance Euclideon's tech will run on current-gen systems, but that it has the potential to "several years from now".


----------



## Easy Rhino (Aug 3, 2011)

god not this again. people, stop being dumb. this will never work.


----------



## qubit (Aug 3, 2011)

@animal007uk: the whole thing's a scam, as I and several others have said over and over in this thread.

This outfit actually claims _unqualified_ "unlimited" detail, i.e. infinity, which is impossible. Not only that, but even on a mobile phone! Yeah, it's a scam to dupe "investors".  

No wonder Easy's rolling his eyes at this.


----------



## animal007uk (Aug 3, 2011)

I didn't say it was or wasn't a scam; I was just saying the news is getting around everywhere. I really couldn't give a crap about this thing, to be honest, but there are lots of people who still want to read about this stuff, so that's why I posted the link.


----------



## AphexDreamer (Aug 3, 2011)

Yeah I mean if any progress of any kind is being made on this I'd like to know.


----------



## qubit (Aug 3, 2011)

AphexDreamer said:


> Yeah I mean if any progress of any kind is being made on this I'd like to know.



I reckon there must be progress and significant progress at that, or they wouldn't release another dodgy video.

That "progress" will be duping investors into parting with their money. Heck, I'm surprised that this outfit hasn't been nailed for fraud yet.


----------



## NinkobEi (Aug 3, 2011)

probably vaporware.


----------



## hellrazor (Aug 3, 2011)

I would really laugh if this all turned out to be real.


----------



## Benetanegia (Aug 3, 2011)

hellrazor said:


> I would really laugh if this all turned out to be real.



Do you really think a company claiming a breakthrough in technology would have a site like this?

http://www.euclideon.com/home.html

Compare it to Atomontage: http://www.atomontage.com/

Atomontage is far further along in making a complete engine suitable for games, and they don't make unrealistic claims like UD does, because the pros and cons are very well known, especially the cons.


----------



## Drone (Aug 3, 2011)

Mmmm, and those atoms consist of... tiny polygons? 

Lol, unless it's quarks I'm not buying!


----------



## erocker (Aug 3, 2011)

Drone said:


> Lol, unless it's quarks I'm not buying!



There's nothing _to_ buy. I'm looking forward to this thread dying down; then in a few months' time some "journalist" will post a story about it and the discussion will repeat itself. This is like watching paint dry.


----------



## qubit (Aug 3, 2011)

Benetanegia said:


> *Do you really think a company claiming a breakthrough in technology would have a site like this?*
> 
> http://www.euclideon.com/home.html
> 
> ...



Yeah, exactly. Talk about sparse and fake-looking. 

And what's with the scroll bar, and the home page scrolling right off the top of the browser window! :shadedshu It takes a moron to design a website like this.


----------



## remixedcat (Aug 3, 2011)

I may have to contact the Euclideon people to make them a better site LOL.


----------



## seronx (Aug 9, 2011)

> The Unlimited Detail engine is claimed to function more like an advanced search algorithm rather than the standard 3D engine found throughout today's video games and 3d applications. Dell states that UD's processing approach is analogous to quickly accessing words by using the search function in a word document, or instantly finding thousands of results simply by typing in a query in Google™ . Unlimited Detail utilizes an advanced point cloud search algorithm to sift through all of the data and picks out only the points needed to render the current frame. In this method, UD can construct limitless worlds and render only the portions that the camera sees, taking into account factors such as the camera's perspective and location. The number of points returned is dependent on the current resolution of the screen – for example, a resolution of 1024x768 would return that many points, one for each pixel. Other factors in determining exactly what points are needed include the distance of the object from the camera, which objects are overlapping others, the scale of an object, and so forth, but all of this is efficiently handled by a method referred to as Mass Connected Processing. Mass Connected Processing allows the engine to process large amounts of data simultaneously and apply small changes to each part at the end of the cycle. “Its job is to find one atom for every pixel on the screen and not touch any of the others, it took us fifteen years to perfect the technique (it began as a hobby). What we have now works very well. We are able to display pictures with no geometry limitations at 24-30 fps 1024*600, one core with out any graphics hardware assistance, and we have only just begun to optimize so we are hoping to double that without any hardware assistance.”



http://en.wikipedia.org/wiki/Hidden_surface_removal

This is how it gets past the memory blockade.

It's not voxels, as nothing has volume.

A 5760x1080 screen equals about .75 MB of point cloud data possible on screen; everything else is a texture.
.75 MB of point cloud data at that resolution at 900 fps => 675 MB/s

Now add the fact that textures are in this:
675 MB/s x 24 bit (3 bytes per atom) up to 65536 bit (8 kilobytes per atom) => 15.8 GB/s to 42.2 TB/s

Hopefully they will get out of the CPU-only mentality and get GPUs working on this.
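The arithmetic above is hard to follow, so here is a cleaner version of the same sanity check (my own formulation, not seronx's): with exactly one point fetched per screen pixel, the bandwidth bill scales with resolution, bytes per point, and frame rate.

```python
def point_bandwidth_mb_s(width: int, height: int,
                         bytes_per_point: int, fps: int) -> float:
    """MB/s consumed if the renderer touches exactly one point per pixel per frame."""
    return width * height * bytes_per_point * fps / 1024 ** 2

# 1080p at 30 fps, 3 bytes (RGB) per point - the visible points alone:
print(round(point_bandwidth_mb_s(1920, 1080, 3, 30)))   # 178 MB/s
# seronx's 5760x1080 surround setup at 900 fps:
print(round(point_bandwidth_mb_s(5760, 1080, 3, 900)))  # 16018 MB/s, ~15.6 GB/s
```

The surround-screen case lands in the same ballpark as the post's 15.8 GB/s figure, and that is only the visible points; deciding *which* points are visible is what forces far more data through memory than one point per pixel.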


----------



## ctrain (Aug 9, 2011)

http://www.gamedev.net/topic/607677-unlimited-detail-back-again/


----------



## bucketface (Aug 9, 2011)

If somehow they've managed to find a way around the massive storage requirements for voxel objects, then this would be totally wicked, sick, supercalifragilisticexpialidocious. 
@qubit
The whole "unlimited detail" thing is marketing and should be taken as such, to mean virtually unlimited, or that the detail looks as if it were unlimited. Of course it's limited. 
On the other hand, maybe it is just BS or a patent troll; it's all just guesswork at the moment. We'll probably be a little closer to an answer in a few months when the SDK is supposed to be released.


----------



## qubit (Aug 9, 2011)

bucketface said:


> If somehow they've managed to find a way around the massive storage requirements for voxel objects, then this would be totally wicked, sick, supercalifragilisticexpialidocious.
> @qubit
> The whole "unlimited detail" thing is marketing and should be taken as such, to mean virtually unlimited, or that the detail looks as if it were unlimited. Of course it's limited.
> On the other hand, maybe it is just BS or a patent troll; it's all just guesswork at the moment. We'll probably be a little closer to an answer in a few months when the SDK is supposed to be released.



It's a full-on scam, as I and others have explained in this thread ad nauseam. They actually claim unlimited detail, i.e. _infinity_... and even on a mobile phone! They never qualified their unlimited claim; they just big it up for all it's worth. An infinite amount of _anything_ in this universe is impossible. If you don't realize that...

In the meantime, they have some iffy videos with unconvincing repeated-block graphics and a weird-sounding narrator who claims to be the CEO. They have a really simple, scammy-looking website, and they're perpetually "round the corner" from revealing all. Yeah, pull the other one. 

Apparently this little joke has been going on since 2003. Kinda long enough for the big reveal, no? :shadedshu


----------



## Benetanegia (Aug 9, 2011)

seronx said:


> http://en.wikipedia.org/wiki/Hidden_surface_removal
> 
> This is how it gets past the memory blockade
> 
> ...



God. That only accounts for the voxels/points shown on screen. But ALL the points relevant to a particular scene have to be loaded into memory at some point. Believing otherwise is naive and disingenuous. Following the Google search example, it's as if I claimed:

- I've searched for the term "scam", which is 4 letters at 1 byte each, equals 4 bytes. And since the search algorithm only has to search for the letters that I established... BAM! There it is, the whole internet is only 4 bytes!

Back to reality: the CPU does not know in advance which points have to be selected, nor does the HDD know which ones to send. Out of all the points/voxels*, at least some (based on locality, and actually quite a few according to their claim of "unlimited" detail) must be loaded into memory so the CPU has the chance to compare their relative positions against the camera position, distance, etc. THAT is the search algorithm, which is nothing new nor radically different from what everyone else does.

* Voxels or points is irrelevant; neither really represents a volume mathematically. Voxels just represent a subdivision of 3D space, but do not contain any volumetric data themselves, just a hierarchical address. At any rate, voxels occupy a lot less space than point cloud data. In fact, most point cloud data scanned with 3D scanners is first converted to either voxels or polygons, because working with raw point cloud data is nearly impossible. Another reason that their claims are false and thus it's a scam.
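The point that picking one atom per pixel still requires the data structure to be resident can be seen in a toy sparse-octree lookup (a minimal sketch for illustration, not UD's actual algorithm):

```python
class Node:
    """Sparse octree node: an empty children dict marks a leaf."""
    def __init__(self, children=None, payload=None):
        self.children = children or {}   # octant index (0-7) -> Node
        self.payload = payload           # e.g. colour/normal data at a leaf

def lookup(root: Node, x: int, y: int, z: int, size: int):
    """Descend from a cube of side `size` to the leaf containing (x, y, z).
    The search skips untraversed branches entirely, but every node on the
    chosen path must already be in memory - it cannot skip its own path."""
    node = root
    while node.children:
        half = size // 2
        octant = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
        if octant not in node.children:
            return None                  # empty space: nothing stored here
        x -= half * (octant & 1)         # re-express the point in the
        y -= half * ((octant >> 1) & 1)  # chosen octant's local coordinates
        z -= half * ((octant >> 2) & 1)
        size = half
        node = node.children[octant]
    return node.payload

# A 4^3 world with a single red point stored at (3, 0, 0):
world = Node({1: Node({1: Node(payload="red")})})
print(lookup(world, 3, 0, 0, 4))  # red
print(lookup(world, 0, 0, 0, 4))  # None - empty branch, never descended
```

Each query is cheap (O(depth)), which matches the "Google search" analogy, but the tree itself is the storage bill the analogy hides.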

From the mouth of Minecraft creator: http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam



> Why it’s a scam:
> 
> * They pretend like they’re doing something new and unique, but in reality a lot of people are researching this. There are a lot of known draw-backs to doing this.
> * They refuse to address the known flaws. They don’t show non-repeated architecture, they don’t show animation, they don’t show rotated geometry, and they don’t show dynamic lighting.
> ...


----------



## bucketface (Aug 9, 2011)

qubit said:


> It's a full-on scam, as I and others have explained in this thread ad nauseam. They actually claim unlimited detail, i.e. infinity... and even on a mobile phone! They never qualified their unlimited claim; they just big it up for all it's worth. An infinite amount of anything in this universe is impossible. If you don't realize that...
> 
> In the meantime, they have some iffy videos with unconvincing repeated-block graphics and a weird-sounding narrator who claims to be the CEO. They have a really simple, scammy-looking website, and they're perpetually "round the corner" from revealing all. Yeah, pull the other one.
> 
> Apparently this little joke has been going on since 2003. Kinda long enough for the big reveal, no?



I've watched the videos as well, and they look like just about any other promo vid: trying to garner interest and excitement with distorted facts about the product. 
Anyway, all I'm saying is that unless you have access to their product, you cannot say that it is absolutely a scam. You are welcome to your opinion, of course, but you cannot prove, without a doubt, that it is a scam unless you somehow got access to their product.
*I'm not defending this thing, just saying that it may not necessarily be a scam. My stance is cautiously optimistic. Who knows, they may just be able to pull a rabbit out of the hat.

Isn't the CEO the same Dell that started Dell, the company? 
The narrator is Australian, but yeah, he has a weird Aussie accent.

Also, the universe itself is technically infinite... but that's not part of the discussion.


----------



## scaminatrix (Aug 9, 2011)

Man, his accent sounds exactly the same as the voice I use when prank calling someone. Sounds like a p*sstake voice, like Lloyd Grossman. Or like Fonejacker.


----------



## ivicagmc (Aug 9, 2011)

To put it simply: every point of space in the real world carries information about the three dimensions of space, one dimension of time, and the matter or field at that point. The virtual world is not much different, since it is trying to duplicate the real one. To have a more complex virtual world, we need to store and process more data. There are tricks to do that more easily, like duplicating objects, and that is exactly what they do in games... This looks like a slightly more advanced version of something like that, if it is true.
A great thing would be an algorithm that could randomly create a world in great detail based on the information it has. You buy a game, and every time you play it, it is different...


----------



## qubit (Aug 9, 2011)

bucketface said:


> I've watched the videos as well, and they look like just about any other promo vid: trying to garner interest and excitement with distorted facts about the product.
> Anyway, all I'm saying is that unless you have access to their product, you cannot say that it is absolutely a scam. You are welcome to your opinion, of course, but you cannot prove, without a doubt, that it is a scam unless you somehow got access to their product.
> *I'm not defending this thing, just saying that it may not necessarily be a scam. My stance is cautiously optimistic. Who knows, they may just be able to pull a rabbit out of the hat.
> 
> ...



No, it's not just "my opinion"; it's definitely a scam. There's enough evidence of it, as has been explained over and over on here. Of course they're not gonna come out and admit it, are they? 



scaminatrix said:


> Man, his accent sounds exactly the same as the voice I use when prank calling someone. Sounds like a p*sstake voice, like Lloyd Grossman. Or like Fonejacker.



+1 He has that piss-takey sound about him alright.


----------



## The_Ish (Aug 9, 2011)

It's one thing to render the same few objects over and over again. It's another thing entirely to render thousands of objects that have nothing in common with each other.


----------



## Funtoss (Aug 9, 2011)

I saw this in my computing class this Monday. My friend showed it to me and I was like, whoa!!! Far out. I'm wondering if it's real or fake, though.


----------



## bucketface (Aug 9, 2011)

qubit said:


> No, it's not just "my opinion"; it's definitely a scam. There's enough evidence of it, as has been explained over and over on here. Of course they're not gonna come out and admit it, are they?



I think it's more likely a case of pulling a Peter Molyneux, i.e. promising the world and only being able to deliver the moon.


----------



## OneMoar (Aug 9, 2011)

I don't get how people keep buying into this garbage


----------



## Athlon2K15 (Aug 9, 2011)




----------



## qubit (Aug 10, 2011)

OneMoar said:


> I don't get how people keep buying into this garbage



Yeah, it's almost like a religious belief. :shadedshu


----------



## The_Ish (Aug 11, 2011)

New interview with Bruce Dell (August 10th)

http://www.hardocp.com/article/2011/08/10/euclideon_unlimited_detail_bruce_dell_interview


----------



## LAN_deRf_HA (Aug 11, 2011)

Well, that was fairly convincing. It will be quite amusing if they can finalize a dev kit before the successors to the Xbox 360 and PS3 launch. When every game looks like real life on any old hardware, why buy some new $500-800 console? Incentives will have to move to AI and physics processing, not to mention a lot of useless bells and whistles. I'm looking at you, Wii U.


----------



## qubit (Aug 11, 2011)

The_Ish said:


> New interview with Bruce Dell (August 10th)
> 
> http://www.hardocp.com/article/2011/08/10/euclideon_unlimited_detail_bruce_dell_interview



Damn, it's a video... and it's blocked at work. 

I'll have to critique it later.

So, to those who can view it, does this thing seem real?


----------



## crazyeyesreaper (Aug 11, 2011)

Meh, seems more like the guy they sent got the same rigamarole as everyone else and doesn't understand how it all works in the first place.

Again, others before Euclideon have gotten further but still haven't managed to get anywhere; it's nothing but the same things instanced over and over and over and over.

And Carmack himself says it's not possible right now. It's not IMPOSSIBLE, but it's not possible right now, which is what pretty much everyone has stated. And they're just reusing the same footage they've already released; no actual relevance to a game.

Until they have footage of characters running around in a level that doesn't reek of reused instanced items, I won't be impressed. As of right now, even older games from 2000 have more variance in objects.


----------



## Benetanegia (Aug 11, 2011)

qubit said:


> Damn, it's a video... and it's blocked at work.
> 
> I'll have to critique it later.
> 
> So, to those who can view it, does this thing seem real?



Just the same thing, only you can see them playing the demo, and Dell answers some stupid and irrelevant questions, repeating what he said in the previous demo. Nothing new about the technology is shown, and the only relevant thing in the video IMO is that Dell has no f...g idea what a voxel or tessellation is, while essentially confirming that they are doing a sparse voxel octree, even though he has no idea what that is and says they are not doing it. Judging by the video, he basically thinks that voxels are just small blocks and that tessellation is just a fancy type of bump mapping.

Well, since they play the demo, we can at least see it's not prerendered, but I don't think anyone thought that. The interview does look like it has been staged, anyway.

All in all, 40 minutes of my life wasted. Oh, but I still recommend watching it.


----------



## qubit (Aug 11, 2011)

Thanks for the info Crazy, Bene.

So the HardOCP guy was bamboozled by this BS? I guess this helps explain why HardOCP benchmarks graphics cards by manually playing through games every time, instead of using scripted tests that give nice consistent results.


----------



## qubit (Aug 12, 2011)

Phew, I just got through 40 minutes of watching it and listening to that guy's odd voice :shadedshu and I can now give the critique you've all been waiting for.

Here are some observations I made while watching it:

- Still claims "unlimited detail" without qualifying it. In fact, he said near the start that he'd explain it, but I don't remember seeing an explanation

- One of the original claims was that all this could run on a simple mobile phone. Yet the hardware demonstrated was a high-end gaming laptop with GTX 460 graphics. No mention was made of having this run on low-powered hardware. He should have been challenged on this

- The animation is “7 years old” and really low-res – he admitted it looks awful. Heck, Half-Life is 12 years old and looks miles better! He didn’t show a later version of the animation, excusing it as not being ready and saying he won’t show something unfinished. The world looks complete, so what's so hard about putting together a basic animation to show the interviewer?

- “No polygons”, but there are. Pause the HardOCP video at time index 16:36, where the big green round thing is showing. Look carefully at the edges, especially on the right-hand side, and you will see some polygons, i.e. straight edges. They’re not very clearly defined, but present nonetheless. His "point cloud" shouldn't show any artefacts like that. The fact the screenshot is taken with a video camera really helps to hide them, huh? This could be damning and needs to be investigated

- Zooming in real close to a palm tree did make the textures look blurry, just like in a conventional game. “Unlimited detail” would retain its clarity and reveal more detail as you got closer. This detail clearly had a limit

- Dell _does_ finally claim that the tech is voxels, but in “unlimited quantities”. Once again, the claim is unqualified. Give me a break

- A leaf on the ground got zoomed in close and looked really detailed, but could it have just been a texture, given the way the tree looked when zoomed in close?

- I noticed how he avoided zooming in real close to anything generally and the couple of times that he did, it looked very much like it was done with polygons.

- Several times he referred to the incredible number of polygons that were on the screen. But I thought you use "atoms" not polygons...?

- Once again, the amount of repetition is phenomenal and not something you expect to see with "unlimited detail"

- Didn’t say how much memory all this takes. This is a critical parameter and not once did the interviewer ask this really obvious question. From what people on this thread have explained, the memory requirement is crippling

- Zoomed out from the elephant until it disappeared. It didn’t change models like you tend to see in modern games; it just got smaller. This is a good point in its favour, but it can be achieved by simple scaling

- Said that it's all running in software, but will use the GPU once the tech is refined. Yeah, maybe, but I wanna see proof. This laptop had a powerful graphics card. Dell claims that it would run just the same on a 1994 graphics card, basically anything that can display a picture. Prove it by running it on Intel integrated graphics

- Dell's explanation of tessellation was off, like Benetanegia said

- The demo was only running at a lowly 1024x768. The laptop looks like it has a native res of 1366x768, so why wasn't the graphics mode changed to use the whole screen? Is it because the whole thing would have run a lot slower and less impressively, perhaps?

- Benetanegia reckons that the interview was staged and I tend to agree. There were cuts in a few places where Dell was starting to get into an explanation and the video abruptly cut off and jumped to something else. I really wanted to see exactly what this explanation was

- The "unlimited" claim was never explained or qualified. A dead giveaway that he's hiding something. Once again, you can't use infinite anything in this universe

- The interviewer was constantly bowled over, fawning over Dell, and seemed afraid to challenge him with _his own_ tough questions. He only quoted other people that asked them

- Dell plays Carmack and Notch off against each other, saying that they are claiming opposites, but his argument didn't hold

- Dell's manner didn't come across as terribly sincere and it did feel like he was selling snake oil. This is an impression though, so don't take it as hard and fast

In summary, I still think this is fishy. I want to see some independent and respected third party (not HardOCP) pick apart this technology and verify the claims, if they're ever given the chance.


----------



## The_Ish (Aug 12, 2011)

I know nothing of creating a game engine, but it would be infinitely stupid to make all these claims while receiving GOVERNMENT funding and end up with nothing to show for it.


----------



## ctrain (Aug 12, 2011)

Benetanegia said:


> Just the same thing, only you can see them playing the demo and Dell answers some stupid and irrelevant questions, basically repeating what he said on the previous demo: nothing new about the technology is shown, and the only relevant thing in the video IMO is that Dell has no f...g idea of what a voxel or tessellation is, while essentially confirming that they are doing a Sparse Voxel Octree, even though he has no idea of what that is and says they are not doing that. Judging by the video, he basically thinks that voxels are just small blocks and that tessellation is just a fancy type of bump mapping.
> 
> Well since they play the demo we can see it's not prerendered at least, but I don't think anyone thought that. The interview does look like it has been staged anyway.
> 
> All in all 40 mins of my life wasted. Oh but I still recommend watching it.



Yeah, the demo was definitely working, but he seemed remarkably oblivious to... well, just about everything. He made it sound like he did the programming initially, but his explanations of things were pretty horrible. He didn't sound like he knew what he was talking about at all.

ERM LEVEL OF DISTANCE (erm, detail???)
WE HAVE KEYBOARD SUPPORT???
UM YEA WE'RE DOING GOOD ON MEMORY...
CAN I DRIVE CAN I DRIVE LET ME DRIVE I MADE IT MY SISTER PLAYS THE SIMS
OH GOD DON'T CRASH OH GOD... finally it happens and after making it seem like he was hiding something, they don't have collision detection. Err, ok... not sure what he was so worried about with getting right next to something.

Well ok, he sure didn't explain a whole lot. I can see some stuff making sense, like the WOO IT WILL WORK THE SAME ON ANY GPU. That's logical enough considering the GPU is probably just handling the blit / flip and nothing else. Not surprising.

The interviewer was kind of busy riding his nuts rather than asking interesting questions, like: what happens when you run out of detail? Do you get gaps, does it try to fill in the blanks, or do your atoms take on a more typical voxel look to make up the size gap?


----------



## MilkyWay (Aug 12, 2011)

I still believe polygons are crap and are going to be phased out, but not any time soon; with things like ray tracing, the engine needs to be powerful and you need powerful hardware to run it.

Point cloud data and a search algorithm to determine only what you need sounds good in theory, but tbh there isn't any real evidence, as everyone else in this thread has already agreed. If it really was that great, you would have thought someone else would be looking into it.


----------



## The_Ish (Aug 12, 2011)

MilkyWay said:


> I still believe polygons are crap and are going to be phased out but not any time soon, usually with things like ray tracing the engine needs to be powerful and have powerful hardware to run it.
> 
> Point cloud data and a search algorithm to determine only what you need sounds good in theory but tbh there isn't any real evidence like everyone else in this thread has already agreed. *If it really was that great then you would have thought someone else would be looking into it.*



Well, someone has to be first


----------



## qubit (Aug 12, 2011)

The_Ish said:


> Well, someone has to be first



It's been 7+ years in the making! Something that good would attract interest from others and you'd see parallel development, but there isn't any.


----------



## MilkyWay (Aug 12, 2011)

The_Ish said:


> Well, someone has to be first



Sometimes you get people simultaneously developing the same thing. Companies are leaning towards ray tracing etc., not points; there must be a reason why, and it can't just be that they wouldn't be able to sell graphics cards any more.


----------



## Benetanegia (Aug 12, 2011)

In reality many people are looking into it. That's why we are skeptics. It's not that no one is doing it; plenty of people are doing it, just not at the level of performance they are claiming, and not as something that could be used in games. That's the key: dozens of proven graphics experts have been looking into this kind of rendering for a long time. John Carmack included, who has been looking into this kind of thing since the 90's and always returned to polygonal rendering because the technology is just not there*; it just cannot catch up with polygonal rendering, let alone surpass it. Every time it may look like it could, games improve again and the possibility disappears. UD is far from being possible on current hardware because of the many things mentioned already. 

For example, they continue to show the demo at 1024x768 and get 15 frames per second. Since the tech is entirely based on pixels, at 1920x1200 they would need 3x the power, plain and simple. Still the demo lacks animation, proper shadows, post-process effects and hell, even shaders, so that wood actually looks like wood and rocks like rock. That makes up >90% of current games' performance demand and they completely lack it. Sure, they are still not using the GPU, but there's absolutely no proof that such a thing would yield any real or significant gain in performance. GPUs are not designed for "search algorithms".

* and it never is. Just consider what I said above about resolution: just like any backwards rendering tech (i.e. ray-casting or ray-tracing), its weakness is that it scales "poorly" as resolution is increased. So just imagine that in 3 years their tech is improved, PCs are faster and they are ready for HD rendering, but oh, too bad, the norm now is 4000x2000 OLEDs and you are behind the curve again.
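As a quick sanity check on the resolution scaling point, the ratio can be worked out directly (a back-of-envelope sketch of my own; the target resolutions are just examples):

```python
# Per-pixel renderers do work roughly proportional to pixel count, so the
# cost of moving up from the 1024x768 demo resolution is a simple ratio.
def relative_cost(width, height, base=(1024, 768)):
    """How many times more pixels (and thus work) than the demo resolution."""
    return (width * height) / (base[0] * base[1])

print(round(relative_cost(1920, 1200), 2))  # 2.93 -> roughly 3x the work
print(round(relative_cost(3840, 2160), 2))  # 10.55 -> over 10x for 4K-class panels
```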


----------



## crazyeyesreaper (Aug 12, 2011)

Because ray tracing is achievable within the next year or 2 with top-end desktop equipment.

AKA GTX 680 SLI / 7970 CrossFire would, if what is expected comes true, be powerful enough for ray tracing etc. in a current-gen game. That's why we're moving toward ray tracing. Polygons, as shown by tessellation, can be pumped up rather liberally at the high end of the spectrum with little consequence; add in displacement + normals, then add in truly dynamic ray-traced lighting and shadows, and you can get a much better image. Voxels can do some crazy stuff, but they're still 5-10 years away; ray tracing is 2 years away. The Quake 3 engine, and I believe Half-Life 2, can be run with ray tracing on current hardware today. Voxels are all well and good, but while things might look a bit better, without realistic lighting and shadowing it's going to be a small improvement over polygons.


----------



## MilkyWay (Aug 12, 2011)

Real-time ray tracing is hard to process; I still think it's easily a few years away.


----------



## Benetanegia (Aug 12, 2011)

I don't think ray-tracing is any closer than voxel rendering tbh, and in fact they are not mutually exclusive. Actually, for pure ray-tracing, voxels (SVO) are far better than polys, but ray-tracing is not the be-all and end-all of game rendering anyway. Hell, it's not even the be-all and end-all of CG movies.

I'd like to see ray-tracing in games soon, but most people, Carmack included again, have pretty much demonstrated that it's not the best option for games yet. A mixture of raster for direct light/shading and ray-tracing for indirect light and reflections is not out of the question though, but most papers agree that once you port some of your rendering tasks to ray-tracing, you may as well move everything, so as to maintain coherence and make performance expectations more stable (imagine the 2 engines fighting for BW).


----------



## qubit (Aug 12, 2011)

He's panning around the tree and claims to be rendering "21 trillion polygons". Notice anything wrong with that?

1. A trillion of anything will bog down a supercomputer, let alone a laptop. There won't be enough memory to hold so much data, either. Time index 25:33

2. Polygons? What happened to "atoms"?


----------



## LAN_deRf_HA (Aug 12, 2011)

qubit said:


> He's panning running around the tree and claims to be rendering "21 trillion polygons". Notice anything wrong with that?
> 
> 1 A trillion anything will bog down a supercomputer, let alone a laptop. There won't be enough memory to hold so much data, either. Time index 25:33
> 
> 2 Polygons? What happened to "atoms"?



You're being a little ridiculous, unless you're on the autistic spectrum then I suppose it's not your fault. Anyways it was very clear he wasn't being serious/literal if you were paying any attention to the prior dialog. He was doing equivalents and exaggerations. Now for the "but something, side argument". I have no problem with skepticism but for fucks sake do it right or don't open your mouth people. The quality of commentary here is, as usual on the internet, completely lacking. Most of what I'm seeing here is the typical jump on the bandwagon bashing by people who have no fucking idea what they're going on about. Not that that's surprising.


----------



## Benetanegia (Aug 12, 2011)

Well, I assume that a trillion is 1,000,000,000,000 in this case, i.e. the short-scale tera, and not the long-scale one (an exa, 10^18).

So 1,000,000,000,000 atoms at XYZ + RGBA each, 1 byte per component (I'm assuming no normals, even though you would clearly want normals for many reasons):

1,000,000,000,000 * 7 * 1 byte = 7 terabytes of info onscreen. Wow, just wow 
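The same estimate in code (a naive upper bound of my own, ignoring any tree-based compression they might use):

```python
# Naive storage for 10^12 unique point samples at 7 bytes each
# (X, Y, Z + R, G, B, A; 1 byte per component, no normals).
atoms = 10**12
bytes_per_atom = 3 + 4  # position + colour
total_bytes = atoms * bytes_per_atom
print(total_bytes / 10**12)  # 7.0 terabytes (decimal TB)
```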



LAN_deRf_HA said:


> You're being a little ridiculous, unless you're on the autistic spectrum then I suppose it's not your fault. Anyways it was very clear he wasn't being serious/literal if you were paying any attention to the prior dialog. He was doing equivalents and exaggerations. Now for the "but something, side argument". I have no problem with skepticism but for fucks sake do it right or don't open your mouth people. The quality of commentary here is, as usual on the internet, completely lacking. Most of what I'm seeing here is the typical jump on the bandwagon bashing by people who have no fucking idea what they're going on about. Not that that's surprising.



Oh, enlighten us. Sorry, it really seems the *"people who have no fucking idea what they're going on about"* are clearly on the other side of this argument. Mr. Dell being the first one on that list, when he has no clue about what tessellation is, nor what a voxel is or how it's being used, nor what LOD is, ffs! The fact that he cannot even recognise a tech like Atomontage for what it is, when it's 100% the same thing they are doing, is especially worrying (a physics engine? excuse me??)...


----------



## qubit (Aug 12, 2011)

LAN_deRf_HA said:


> You're being a little ridiculous, unless you're on the autistic spectrum then I suppose it's not your fault. Anyways it was very clear he wasn't being serious/literal if you were paying any attention to the prior dialog. He was doing equivalents and exaggerations. Now for the "but something, side argument". I have no problem with skepticism but for fucks sake do it right or don't open your mouth people. The quality of commentary here is, as usual on the internet, completely lacking. Most of what I'm seeing here is the typical jump on the bandwagon bashing by people who have no fucking idea what they're going on about. Not that that's surprising.



Don't talk rubbish, man. And I'm not autistic, ok?  Your comment was very condescending and insulting to anyone who is. 

Look at that video again. He quite literally excused the performance of the demo because it was rendering "21 trillion polygons", no qualifiers, no other context. My two points about it stand.


----------



## MilkyWay (Aug 12, 2011)

qubit said:


> Don't talk rubbish, man. And I'm not autistic, ok?  Your comment was very condescending and insulting to anyone that is.
> 
> Look at that video again. He quite literally excused the performance of the demo, because it was rendering "21 trillion polygons", no qualifiers, no other context. My two points about it stand.



I'm on the "spectrum" and I found it a tad insulting.


----------



## The_Ish (Aug 12, 2011)

qubit said:


> Don't talk rubbish, man. And I'm not autistic, ok?  Your comment was very condescending and insulting to anyone that is.
> 
> Look at that video again. *He quite literally excused the performance of the demo, because it was rendering "21 trillion polygons"*, no qualifiers, no other context. My two points about it stand.



How do you suppose you render 21 trillion polygons?


----------



## Frick (Aug 12, 2011)

qubit has the opposite of a hardon here and he wants us all to be a part of it.


----------



## qubit (Aug 12, 2011)

The_Ish said:


> How do you suppose you render 21 trillion polygons?



You haven't read my post previous to that, have you?



Frick said:


> qubit has the opposite of a hardon here and he wants us all to be a part of it.



Muppet.


----------



## The_Ish (Aug 12, 2011)

We all know rendering 21 trillion polygons is not possible. In other words, it couldn't have been 21 trillion *polygons*. I know that much.


----------



## qubit (Aug 12, 2011)

The_Ish said:


> We all know rendering 21 trillion polygons is not possible. In other words, it couldn't have been 21 trillion *polygons*. I know that much.



I was simply pointing out Dell's absurd claims and unsurprisingly, you agree with me.

So what was your point then?


----------



## scaminatrix (Aug 12, 2011)

Come on peeps, it would be nice if this thread was kept open...

Let's just try and remember that we are *all* talking from theory. None of us is more accountable than any other.


----------



## Benetanegia (Aug 12, 2011)

The_Ish said:


> We all know rendering 21 trillion polygons is not possible. In other words, it couldn't have been 21 trillion *polygons*. I know that much.



Well, obviously Dell is talking about the equivalent of 21 trillion polys, which I guess actually means 21 trillion atoms in his nomenclature. In reality, for the same geometry/shape you need far more points (or "little tiny atoms") than you do with triangles, because a triangle (or a small group of tris) actually represents or approximates a surface, while many more points are required to approximate that same surface. 

Neither claim is realistic, far from it; in fact, it is false and misleading, and that's why it is a scam. They say their technology can render trillions of "polygons", but in reality they are only rendering a couple million (some Crysis 2 scenes have almost 3 million unique polys), repeated over and over and over and over again, at 1024x768, 15 fps, and they didn't even make the effort to rotate the instanced geometry. And the reason for that is simple: they can't; just rotating all those objects would trash performance to low single digits. The inevitable truth is that they cannot deliver 5% of what they are claiming. Their rendering technique is valid, and like I said many other people are using it (and arguably better, like Atomontage), but Euclideon is deliberately lying about what it can realistically do and about its viability as a game engine.


----------



## Frick (Aug 12, 2011)

qubit said:


> Muppet.



I know it's bogus (even though I think some parts of it probably could be used), I just think it's fun how you have to say the same things over and over and over again.


----------



## The_Ish (Aug 12, 2011)

There are countless inventions that were developed by just one man:
AC current (Nikola Tesla), the first known scuba gear (Da Vinci)... So why can't this man have figured this out? Time will tell, but I for one am excited and hopeful. Like I said, you don't walk away with two million dollars by shrugging your shoulders and saying "I guess we were wrong". Dell also said they had enough money to complete what I imagine is some kind of working version of it. It's easy to be a critic. At one point people thought the earth was flat, and when proven wrong, they struggled to accept it. I don't think it's impossible. He's obviously onto something. I seriously doubt the government would just hand him the money like that. That's really the extent of it.
Critic or not, time will tell who was right. And I think I'm right in saying most of us are hopeful at least. 

We could probably replace oil, but being a tin hat, I think the oil companies don't want that.
Now why would nVidia want this? Far-reaching as it is, it makes sense. No need for a GPU, no more nVidia GPUs. That's a lot of money. Just an example obviously, I know they said they would take advantage of the GPU eventually. When there is money to be made, people will do just about anything to keep it away from the mainstream.


----------



## qubit (Aug 12, 2011)

Frick said:


> I know it's bogus (even though I think some parts of it probable could be used), I just think it's fun how you have to say the same things over and over and over again.



Yeah, it's bogus all right and Bene seems to have the technical aspects of it down.  It does look like they're dressing up voxels as new tech to dupe investors into giving them money.

Repeating the same thing over and over? Yeah, kinda like their graphics, innit?  It depends on the context though. I'll say it if someone looks like they're getting suckered and they need to be educated and other times, to review the HardOCP video for example.

EDIT: I took out the slap smiley now I know you weren't just being mean.


----------



## Mussels (Aug 12, 2011)

When the best argument for it is "it's theoretically possible, based on random chance, for this guy to have invented this technology", it's kinda proving the point that it's fake.


Seriously, no one here has any arguments to prove it's true other than 'it ain't impossible for someone to invent it'.


----------



## Benetanegia (Aug 12, 2011)

The_Ish said:


> There are countless inventions that weren't developed by more than one man.
> AC current (Nikola Tesla), the first known scuba gear (Da Vinci).. So why can't this man have figured out this? Time will tell, but I for one is excited and hopeful. Like I said, you don't get away two million dollars by shrugging your shoulders and saying "I guess we were wrong". Dell also said they had enough money to complete what I imagine is some kind of working version of it. It's easy to be a critic. At one point people thought the earth was flat, and when proven wrong, they struggled to accept it. I don't think it's impossible. He's obviously onto something. I seriously doubt the government would just hand him the money like that. That's really the extend of it.
> Critic or not, time will tell who was right. And I think I'm right in saying most of us are hopeful at least.
> 
> ...



See? That's the problem: you can only rely on wishful thinking, and that is the only thing UD has on its side.

"Some other people managed this or that, so why can't this man have figured out this?"

Because it's not the same thing at all. For instance Tesla didn't discover AC, it was a very well known concept long before he decided it was the best way of carrying power and he was far from being alone. But most importantly his tech and claims didn't go against the laws of physics, quite the opposite.

Same for Da Vinci, he created some tech long before anyone else even considered it, but it didn't go against the laws of physics that were very well known by then. Quite the opposite again, the relatively vast knowledge of that time fully supported his designs.

But you simply can't compress or condense or compact (the 3 actually mean slightly different things) data for 21 trillion "anything" to fit current technology. And this claim is supported by the fact that no one, he included, is showing 21 trillion, nor billion, not even million of anything in the demo, and it's not because they are not artists, but because it's the biggest flaw of that kind of tech, whether it is actually using voxels or point cloud data (far less possible with the latter). 

Replication, or to use the proper term, instancing, is one of the strong points though, because the data is stored in "data trees" and in order to replicate a seemingly complicated and very detailed model, you only have to copy-paste the parent "tree" and all the branches automatically come with it. So it's false and unrealistic to claim what he does when showing what he is showing. And it's false and deliberately misleading to excuse that by saying "we are not artists", because that's simply a lie. It's as if, while trying to get a job as an assistant, I claim that I can write 10,000 words in just 60 seconds and, when asked for proof, I write "scam" and then ctrl+c and ctrl+v until I get a line, then copy-paste that line until I get a large paragraph, then 10 paragraphs, then 100 paragraphs and so on. If you really want to grasp at straws, I DID "write up" 10,000 words, but this achievement is useless and my claim *within the context is a blatant lie.*

EDIT: Oh and as a final touch. When confronted by the fact that I copy pasted the same word over and over I could simply say, "ey I'm not Shakespeare ok?" *but just imagine what writers could do with this ability*...
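The copy-paste-the-parent-tree point can be illustrated with a toy scene graph (my own sketch; this has nothing to do with Euclideon's actual code):

```python
# In a tree-structured scene, "instancing" a detailed model is just
# storing another reference to the same subtree: a million placements
# of the model add a million pointers but zero new geometry data.
class Node:
    def __init__(self, children=()):
        self.children = tuple(children)

def count_unique_nodes(root, seen=None):
    """Count distinct Node objects, following shared references only once."""
    seen = set() if seen is None else seen
    if id(root) in seen:
        return 0
    seen.add(id(root))
    return 1 + sum(count_unique_nodes(child, seen) for child in root.children)

# Build one detailed "model" subtree once...
leaf = Node()
branch = Node([leaf] * 8)
model = Node([branch] * 8)

# ...then place it a million times by reference.
scene = Node([model] * 1_000_000)
print(count_unique_nodes(scene))  # 4 -- unchanged no matter how many instances
```

Which is exactly why a demo full of repeated rocks and trees proves very little about rendering *unique* detail.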


----------



## The_Ish (Aug 12, 2011)

So if voxels aren't new, and this guy is working on something similar, doesn't that mean there has been parallel development? Contrary to what people have been saying so far.
edit/ Tesla did in fact, with the help of George Westinghouse, make AC the standard. I'm sure it was "discovered" before that. But it's the same with voxels; the tech in itself is old.
I'm not trying to prove or disprove, but to say there most likely will be parallel development for anything worth a fck is somewhat narrow-minded in my opinion.

http://www.google.com/patents?id=uY...urce=gbs_overview_r&cad=0#v=onepage&q&f=false

Mussels (below): Yes, *so far*.


----------



## Mussels (Aug 12, 2011)

The_Ish said:


> So if voxels aren't new, and this guy is working on something similar, doesn't that mean there has been parallel development? In contrary to what people have been saying so far.



Kinda, because no one's made it work. It's been deemed impossible every time.


----------



## Benetanegia (Aug 12, 2011)

The_Ish said:


> Mussels (below): Yes, *so far*.



They have not proven they are any closer to achieving it than the others, and in fact they are actually one step behind many others. Honestly, why this guy gets so much attention and (surprisingly) credit, when the others are not even known, is far beyond my comprehension.

Ok, I actually know why: because of the inflated numbers. While others are showing achievable (read: modest) results*, UD is claiming the impossible, and impossible it IS, but...

* but with real, usable environments, and because of that they are 10 times more impressive. It really is laughable the way Dell downplays and misrepresents Atomontage (for example), saying it is "limited, we are unlimited" and whatnot... it's limited for a good damn reason!


----------



## The_Ish (Aug 12, 2011)

Meh, I'm leaving this discussion. It's not going anywhere any time soon.


----------



## AphexDreamer (Nov 22, 2011)

I believe Notch was shown the code and was proven wrong. 

He updated his original post. 

http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam


----------



## qubit (Nov 22, 2011)

AphexDreamer said:


> I believe Notch was shown the code and was proven wrong.
> 
> He updated his original post.
> 
> http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam



Where's the update that Notch was proven wrong? UD is a scam for sure. I've explained it enough times on this thread already, lol.


----------



## erocker (Nov 22, 2011)

AphexDreamer said:


> I believe Notch was shown the code and was proven wrong.
> 
> He updated his original post.
> 
> http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam



Barely updated. Most of his thoughts still stand. UDT is still BS. Please let this die.


----------



## OneMoar (Nov 22, 2011)

erocker said:


> Barely updated. Most of his thoughts still stand. UDT is still BS. Please let this die.



lock it for the love of god lock it away ......


----------



## NinkobEi (Nov 22, 2011)

Vaporware. Lock it!


----------



## Irony (Nov 22, 2011)

I thought of this years ago, as soon as I learned how GPUs do their work. I don't see why this wasn't invented first.

However, for the simple reason that anything simple, new, efficient or better than the currently loved and known must die, it will not become anything for a good while, if ever.


----------



## ShogoXT (Nov 22, 2011)

Regardless of whether or not this is actually viable. When the hell did Notch become the expert on graphics engines!?!?!?

Seriously!?!??!


----------



## qubit (Nov 22, 2011)

ShogoXT said:


> Regardless of whether or not this is actually viable. When the hell did Notch become the expert on graphics engines!?!?!?
> 
> Seriously!?!??!



One does not have to be an expert on graphics engines to realize that UDT is BS.

The whole problem with UDT has been explained several times over earlier in the thread by myself and a few others, if you'd like to read up on it.


----------



## erocker (Nov 22, 2011)

OneMoar said:


> lock it for the love of god lock it away ......



But I love seeing the hope in people's posts that this may happen one day... even though it won't.


----------



## AphexDreamer (Dec 2, 2011)

Their 2011 update, naysayers. 










Found the vid from the discussion on Crydev.net 

http://www.crydev.net/viewtopic.php?f=126&t=60111&start=75

The video is pretty amazing if you ask me.


----------



## Frick (Dec 2, 2011)

If they actually release an SDK, it would be interesting to see what (if anything) this really does.


----------



## AphexDreamer (Dec 2, 2011)

Frick said:


> If they actually release an SDK it would be interesting to see what (if) this does really.



Some guy on youtube tries to explain it.

"Reading the article on GameInformer about this, it's actually quite simple. Only one atom is displayed per pixel, with a super efficient algorithm to find out what atom is shown on each pixel. Therefore, now matter how many atoms there are, the same number of atoms are being rendered. You could show a planet in this thing with zero lag."

I'd like to find that GameInformer article, but I'm really sleepy (3:30AM). 

It is nice to see an update.
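If the algorithm really is an octree-style descent per pixel (my guess; Euclideon has confirmed nothing), the quoted claim is at least internally consistent: per-frame work grows with resolution and tree depth, not with total atom count. A toy cost model:

```python
import math

# Rough cost model: one tree descent per pixel, where the descent depth
# is the number of octree levels needed to address every atom.
def lookups_per_frame(width, height, total_atoms):
    depth = math.ceil(math.log(total_atoms, 8))  # 8 children per octree node
    return width * height * depth

# 1000x more atoms only deepens the per-pixel search by a few levels:
print(lookups_per_frame(1024, 768, 10**9))   # 7864320  (depth 10)
print(lookups_per_frame(1024, 768, 10**12))  # 11010048 (depth 14)
```

Of course, as pointed out repeatedly in this thread, a cheap per-pixel search says nothing about storing, animating or shading all that data.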


----------



## erocker (Dec 2, 2011)

Nothing new. It isn't any more real today than it was a few months ago.


----------



## Frick (Dec 2, 2011)

erocker said:


> Nothing new. It isn't any more real today as it was a few months ago.



Totally.


----------



## qubit (Dec 2, 2011)

They've just added a few more complete-looking rendering animations, one with a gun and the colours don't look quite so samey. They've sped up some of the animations too, for some reason. Still nothing to suggest that they're not using tessellation.


----------



## pantherx12 (Dec 2, 2011)

qubit said:


> They've just added a few more complete-looking rendering animations, one with a gun and the colours don't look quite so samey. They've sped up some of the animations too, for some reason. Still nothing to suggest that they're not using tessellation.



There's quite a fair bit to suggest otherwise, man. It is definitely using voxels; it's the other stuff people are dubious about.

As in, it being in real time, it being a doable way to make a game, etc.


I'm still dubious but tessellation has nothing on the level of detail 

Also this came out a while ago but for people who missed it http://www.youtube.com/watch?v=JVB1ayT6Fdc

Interview with them.

Skip to 25 minutes in or so for real time controlled demo ( as in they're moving about willy nilly with a controller)


----------



## LAN_deRf_HA (Dec 2, 2011)

Anyone notice the Atomontage Engine? Unlike Unlimited Detail it's actually made progress, looks great, and they give way more detail on just what the engine is doing. It seems a little nuts how much attention Unlimited Detail has gotten when this pretty-much-unknown engine is radically better.

http://www.atomontage.com/?id=gallery


----------



## Benetanegia (Dec 2, 2011)

LAN_deRf_HA said:


> Anyone notice Atomontage Engine? Unlike unlimited detail it's actually made progress, looks great, and they give way more details on just what the engine is doing. Seems a little nuts all the attention unlimited detail has gotten when this pretty much unknown engine is radically better.
> 
> http://www.atomontage.com/?id=gallery



I've linked to Atomontage several times myself.

Atomontage is not ready for prime time either; well, maybe for strategy games where there aren't going to be close-ups. It looks blocky still, and it's not really the engine's fault, but hardware limitations. Again the problem is not rendering, but creating and storing the data, and the bandwidth requirements to load all that data. But in any case they are light years ahead of UD, and it's real.

Regarding the UD video, it's the exact same video we saw several months ago. I don't see any improvement nor anything new. What animation? I see none there. Is it really the proper link? I think I'm viewing a different video or something.


----------



## pantherx12 (Dec 2, 2011)

Benetanegia said:


> I've linked to Atomontage several times myself.
> 
> Atomontage is not ready for prime time either; well, maybe for strategy games where there aren't going to be close-ups. It looks blocky still, and it's not really the engine's fault, but hardware limitations. Again the problem is not rendering, but creating and storing the data, and the bandwidth requirements to load all that data. But in any case they are light years ahead of UD, and it's real.
> 
> Regarding the UD video, it's the exact same video we saw several months ago. I don't see any improvement nor anything new. What animation? I see none there. Is it really the proper link? I think I'm viewing a different video or something.




The one Aphex linked to is old; it was the second video posted. The one I linked to is the third video and has some extra bits (like the real-time section).

Even if this is BS it's still fun to watch the development.

Bruce Dell's voice is damn annoying though


----------



## qubit (Dec 2, 2011)

Benetanegia said:


> What animation? I see none there.



Assuming you're referring to my post, I'm talking about the movement through the virtual world has been sped up. The world itself was indeed not animated.


----------



## Benetanegia (Dec 2, 2011)

pantherx12 said:


> The one Aphex linked to is old; it was the second video posted. The one I linked to is the third video and has some extra bits (like the real-time section).
> 
> Even if this is BS it's still fun to watch the development.
> 
> Bruce Dell's voice is damn annoying though



Yeah, that one was posted several months ago too, and shows no progress either. No animations, basically. The thing being real time is of no consequence, because it's not their rendering method that is questioned. There are literally hundreds of voxel rendering engines out there (many made by students*) and all of them do the exact same thing: render one voxel/point per pixel. The rendering algorithm is actually far simpler than rasterization, from what I've read*. The problem is always how to move data around. It's the exact same problem that ray-tracing or ray-casting rendering has.

Until they release a demo so that everyone can use it, test it and scrutinize it, this is all pure BS. They could have 1 TB of data on that laptop for all we know, and use flags to know what needs to be rendered/loaded to memory and what not depending on where they are (rendering based on cells), which is something nearly all 3D engines used to do in the past, but it's impossible to do today because of the sheer amount of detail that games have today.

* Apparently nearly every single student learning how to program game engines starts a ray-casting project, believing it's the best thing ever. Of course they all end up realizing that no matter how efficient the method is for putting pixels on screen (theoretically, weighing only the input and output of the algorithm), the problem is not there, but in how to create and store that data, and how to move it from HDD to main memory to cache.

i.e. you can make a wall with 2 polys and 1 texture, but you would need several thousand if not millions of voxels/points to represent the same wall. A sphere is the same: you can represent/fake a sphere/circle with a very limited number of polys/lines and shading, but you would require an infinite number of points to do it without leaving holes.
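The wall example can be put into rough numbers. A back-of-envelope sketch in Python; the point density and per-point format are purely illustrative assumptions, not anything from a real engine:

```python
# Back-of-envelope storage comparison: a 4 m x 3 m wall stored as two
# textured triangles vs. a dense point cloud. Illustrative numbers only.

def polygon_wall_bytes(tex_w=1024, tex_h=1024, bytes_per_texel=4):
    """Two triangles: 6 vertices (x, y, z, u, v as 4-byte floats) + RGBA texture."""
    vertex_bytes = 6 * 5 * 4
    return vertex_bytes + tex_w * tex_h * bytes_per_texel

def point_cloud_wall_bytes(width_m=4.0, height_m=3.0,
                           points_per_mm=1.0, bytes_per_point=15):
    """One point per square millimetre: xyz floats (12 bytes) + RGB (3 bytes)."""
    points = (width_m * 1000 * points_per_mm) * (height_m * 1000 * points_per_mm)
    return int(points * bytes_per_point)

poly = polygon_wall_bytes()       # dominated by the texture
cloud = point_cloud_wall_bytes()  # 12 million points for the same wall
print(f"polygons: {poly / 1e6:.1f} MB, points: {cloud / 1e6:.1f} MB, "
      f"ratio: {cloud / poly:.0f}x")
```

Even with these generous assumptions the point-cloud wall needs tens of times more storage than the textured polygons, which is the data-volume problem described above.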



qubit said:


> Assuming you're referring to my post, I'm talking about the movement through the virtual world has been sped up. The world itself was indeed not animated.



Ah, since you mentioned guns, I thought you were watching a different video. I later realized it's the scenes from Bulletstorm that have guns on it.


----------



## qubit (Dec 2, 2011)

Benetanegia said:


> Ah, since you mentioned guns, I thought you were watching a different video. I later realized it's the scenes from Bulletstorm that have guns on it.



And there's me thinking they'd added something to it. :shadedshu I really couldn't be bothered to sit through the whole 7 minutes of that video, so I skipped through it and hadn't realized those were from Bulletstorm. It explains why the curves looked angular, like they do in every other game I've seen. Thanks for clarifying. 

So meh, nothing new here, let's move along.


----------



## pantherx12 (Dec 2, 2011)

They've pretty much stated they won't be posting any news for another year ( when they posted their last video)

So I guess we can let the thread die until then XD


----------



## scaminatrix (Dec 2, 2011)

So nothing new except for more shading. I'm pretty sure most of the video is reused material from the old video.



pantherx12 said:


> Bruce Dell's voice is damn annoying though



He sounds like Stewie Griffin after puberty. Very annoying.


----------



## AphexDreamer (Dec 2, 2011)

pantherx12 said:


> They've pretty much stated they won't be posting any news for another year ( when they posted their last video)
> 
> So I guess we can let the thread die until then XD



And when they release the SDK Qubit will still be saying how this is fake.

I mean he doesn't even watch the videos... he has it so set in his mind that it is fake. It is real, but whether or not you could develop a game with the technology is questionable. 

We shall see in due time how far they manage to get. The detail was pretty sweet though.


----------



## qubit (Dec 2, 2011)

AphexDreamer said:


> And when they release the SDK Qubit will still be saying how this is fake.
> 
> I mean he doesn't even watch the videos... he has it so set in his mind that it is fake. It is real, but whether or not you could develop a game with the technology is questionable.
> 
> We shall see in due time how far they manage to get. The detail was pretty sweet though.



Yup, it's fake and it's not only me saying it. There's lots of tells to demonstrate this and all this has been explained several times over near the start of this thread. I'll just repeat the most pertinent bit: Dell claims completely unlimited detail _with no qualifier at all_ and repeats this claim over and over (he even did so in that sham interview, which I did watch all of) and reckons it can run on a mobile phone. Unlimited is another way of saying infinity in regular language. Nothing in this universe can do infinity, hence he's talking BS.


----------



## AphexDreamer (Dec 2, 2011)

qubit said:


> Yup, it's fake and it's not only me saying it. There's lots of tells to demonstrate this and all this has been explained several times over near the start of this thread. I'll just repeat the most pertinent bit: Dell claims completely unlimited detail _with no qualifier at all_ and repeats this claim over and over (he even did so in that sham interview, which I did watch all of) and reckons it can run on a mobile phone. Unlimited is another way of saying infinity in regular language. Nothing in this universe can do infinity, hence he's talking BS.



The Unlimited part is just a name, and it makes sense that he chose that name for the technology. What you can do in the engine is now technically unlimited, but of course restrictions still apply, like RAM and hard drive space (not sure if his engine does anything procedurally), but not restrictions on the detail you can put in game, which is still limited by the art designers/developers anyway. So no, it never was truly infinite; restrictions always apply. Even the scanned images of objects have finite detail. 
In the video I believe he says the detail is 100,000 times that of a polygon-count game (not sure which one he is referencing), and he even converts it into how many polygons it would equate to (a rather large number); that is obviously a finite answer.


----------



## pantherx12 (Dec 2, 2011)

qubit said:


> Yup, it's fake and it's not only me saying it. There's lots of tells to demonstrate this and all this has been explained several times over near the start of this thread. I'll just repeat the most pertinent bit: Dell claims completely unlimited detail _with no qualifier at all_ and repeats this claim over and over (he even did so in that sham interview, which I did watch all of) and reckons it can run on a mobile phone. Unlimited is another way of saying infinity in regular language. Nothing in this universe can do infinity, hence he's talking BS.




Protip: infinity and unlimited are not numbers. 

I.e. what he's saying is "as much detail as you care to put into it".

English, sah, you need to brush up


----------



## qubit (Dec 2, 2011)

AphexDreamer said:


> *The Unlimited part is just a name* and it makes sense that he chose that name for the technology.



It could be just a name, but it isn't given the way he's used it without a qualifier. The fact that he used some numbers in the demos doesn't change anything.


----------



## pantherx12 (Dec 2, 2011)

qubit said:


> It could be just a name, but it isn't given the way he's used it without a qualifier. The fact that he used some numbers in the demos doesn't change anything.




Unlimited
" Not limited or restricted in terms of number, quantity, or extent."

That is ALL he is saying, he's saying there isn't a cap to the detail.

As in, in theory it will only display 1920x1080 voxels on an HD screen; the total number of voxels you could have is only limited by storage.

That is unlimited.

The software hasn't got a limit.

( All of this is assuming it's true of course, but seems you misunderstand what unlimited means)
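If the one-lookup-per-pixel claim holds, the arithmetic behind "limited only by storage" looks roughly like this (a sketch under the engine's own claimed assumptions, with illustrative numbers):

```python
# Sketch: per-frame cost tied to screen resolution, not scene size,
# assuming one hierarchical lookup per pixel (the claim, not a proven fact).
import math

def tree_depth(cells_per_axis):
    """Levels needed for a hierarchy addressing cells_per_axis cells per axis."""
    return math.ceil(math.log2(cells_per_axis))

def lookups_per_frame(width, height, depth):
    """One tree descent per pixel: pixel count times descent steps."""
    return width * height * depth

# Doubling world detail adds only one tree level; per-frame work barely moves.
for cells in (2**20, 2**21):
    d = tree_depth(cells)
    print(f"{cells:>9} cells/axis -> depth {d}, "
          f"1080p lookups/frame: {lookups_per_frame(1920, 1080, d):,}")
```

Under that assumption the render cost scales with resolution and log of world detail; the contested part (as posts above note) is storing and streaming the point data, which this arithmetic says nothing about.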


----------



## AphexDreamer (Dec 2, 2011)

qubit said:


> It could be just a name, but it isn't given the way he's used it without a qualifier. The fact that he used some numbers in the demos doesn't change anything.



It actually does, and in his interview he even says "Unlimited Geometric Power". 

PantherX posted it. 

In the interview he even says it's 100,000x more power than the current polygon system.

I think you take him out of context.




pantherx12 said:


> Unlimited
> " Not limited or restricted in terms of number, quantity, or extent."
> 
> That is ALL he is saying, he's saying there isn't a cap to the detail.
> ...



Thanks.


----------



## qubit (Dec 2, 2011)

"Unlimited geometric power" is still claiming infinity... :shadedshu

Maybe he said 100k in the interview, but that's still a ridiculous number and his whole push on it is that there's no limit to what this can do. So far they've been "working" the better part of a decade on it and still have only a few dodgy-looking demos to show for it.

I'll believe it when I see it.


----------



## AphexDreamer (Dec 2, 2011)

qubit said:


> I'll believe it when I see it.



Even though he shows it, you mean experience it.

So be it.

He gets hammered with questions in his interview, but since that is 40 min long and you couldn't sit through a 7 min video, I take it you won't even touch the link.

It even addresses Notch's claims. If it is fake, why? It isn't like he has a PayPal link with each of his videos. He claims to be well funded as well. 

Text Version I believe. 

http://www.ausgamers.com/features/read/3094648

Here is your DX11 part. 

Several of the things you covered - like the realistic looking trees, and ground, and rocks - can be achieved with tessellation under DirectX 11 - how is your approach better? 

Euclideon: Well, I'd like to proceed compassionately here. Tessellation is nice, I like tessellation; it was a proposed solution to the problems with low polygon counts, and it was designed by some clever people who tackled the problems the present polygon system brings in a very good way. But no, I don't think that tessellated height bumps are better than real geometry; if you put the tessellation picture next to Unlimited Detail there is a pretty big difference. [See picture below]


----------



## pantherx12 (Dec 2, 2011)

qubit said:


> "Unlimited geometric power" is still claiming infinity... :shadedshu
> 
> .



Did you just ignore the definitions of infinity and unlimited?

It is not what you think.

If I had unlimited money it wouldn't mean I had a wallet FILLED with money that I could never empty.

It could mean someone just gives me the money I require each time I need it.


I think too many people think of unlimited the way an unlimited money cheat works in a game, lol, or unlimited health.

I mean, it's literally in the word itself.

Un-limited.

Literally all it means is that there isn't a limit.

D:


Like you can have a road with an unlimited max speed; it doesn't mean cars travel over it at over 9000 miles per hour, does it?

Just means you can drive however fast you like.


----------



## nv40pimp (Dec 2, 2011)

pantherx12 said:


> Did you just ignore the definitions of infinity and unlimited?
> 
> It is not what you think.
> 
> ...



FIXED lol


----------



## qubit (Dec 2, 2011)

AphexDreamer said:


> He gets hammered with question in his interview, but since that is 40 min long and you couldn't see a 7 min video I take it you won't even touch the link.



I did watch the 40 minute interview, I told you. The short video, I just skipped through, without the sound. Perhaps I could force myself to sit through it, but it won't change anything, especially as there's no new rendering that Dell's demoed.



pantherx12 said:


> Did you just ignore the definitions of infinity and unlimited?



No, while your definition of unlimited is correct, it amounts to the same thing as infinity.

Think about it, my computer could go faster and faster if I simply upped the clock speed all the time, as I needed to. Any system can hit infinity if you just keep adding to it, but everything stops somewhere in the real world. Therefore, him claiming unlimited is just so much BS. If he'd claimed from the start say, that the system is 50 times more powerful than the best conventional rendering system, then that would be a claim worth taking seriously, with a demo to back it up, of course. 

btw, what was interesting was that in the interview he was using a rather high-powered laptop - something he wasn't really that keen to admit to the interviewer - and it wasn't rendering particularly fast, either. So, if that was slow, how is a mobile phone to render this at all?


----------



## AphexDreamer (Dec 2, 2011)

It's just a tech demo of a method, so if you fail to saturate it with unlimited detail, that's the content developer's fault, as the system is designed to be able to do it.

Which is why he just repeats things; it is easier to show what it can do than to come up with new content to take up a whole lot of space. 

Also, the interview shows the older versions running, but both the newer and older versions run in software only. 

They have gone far in just 2.5 years, and they seem pretty dedicated to proving wrong all those saying it isn't possible.


----------



## qubit (Dec 2, 2011)

Yup, let them prove it when it's finally "ready".


----------



## AphexDreamer (Dec 2, 2011)

qubit said:


> Yup, let them prove it when it's finally "ready".



Well, at least prove it to hardcore skeptics; others have seen enough proof. To each their own. 

(The guy has a video of it running in real time.) If you need to play it to believe it, you will just have to wait a bit longer.


----------



## Irony (Dec 2, 2011)

It seems very animated. It's hard to tell if he's actually done anything, or just mapped out a scene.

Also, I find it funny how many people want this to exist. Qubit puts up one post after this has been dead for days, and there's already like another 100 comments.


----------



## qubit (Dec 2, 2011)

Irony said:


> It seems very animated. It's hard to tell if he's actually done anything, or just mapped out a scene.



Yup.



Irony said:


> Also, I find it funny how many people want this to exist. Qubit puts up one post after this has been dead for days, and there's already like another 100 comments.



It wuznt me guv! It was AphexDreamer in post 330! says qubit ratting him out.  j/k

But yeah, there's something about this that just keeps people at it - just look at who's got the most posts in this thread.  I think it's that sense of almost, _almost_ there that does it. Like all the best scams, they tantalize you forever without ever quite getting there. Nearly a decade (or was it a whole decade? Forget now) and this is all they have to show for such an allegedly powerful - _UNLIMITED!_ - technology? Hmmm...


----------



## Frick (Dec 2, 2011)

I assume qubit doesn't like it when products have abstract names. Megaman, how is he mega? Here's a company named Rentals Unlimited; you should write a letter to them. They're not unlimited. SimCity 3000 Unlimited is also stupid; I know for a fact that you cannot build unlimited cities. Autodesk is a sinner as well.

I don't understand how you can be hung up on that. The guy does throw around the term a bit too much, but it sounds like your average sales talk to me.


----------



## Easy Rhino (Dec 2, 2011)

not this again. is he selling real estate on the moon now?


----------



## qubit (Dec 2, 2011)

It's more than just the name, Frick (which is bad enough). I've explained it all ages ago and so have others, if you want to go back over the thread.



Easy Rhino said:


> not this again. is he selling real estate on the moon now?



No, it's Mars, lol.


----------



## Frick (Dec 2, 2011)

qubit said:


> It's more than just the name, Frick (which is bad enough). I've explained it all ages ago and so have others, if you want to go back over the thread.



I've read that, and most of the posts go on saying it's a hoax. And you have talked about "unlimited" an awful lot:



qubit anno 2010 said:


> People are calling them out as frauds, because they're claiming to handle an infinite amount of detail/data. As we all know, nothing in this universe can do that, so the claim is bollocks.
> 
> If they'd simply billed it as a new, hyper-efficient way of dealing with a huge volume of data giving say, a 100-fold improvement in rendering speed, then I'd buy it and look forward to the official tech demos and description of the technology.
> 
> But they didn't.



I do believe it's a hoax. I really do, mostly because of the execution of the thing. Everything about it is bloody awful, actually. But we might get lucky. We won't for sure, but we might. Now he's telling us there's an actual SDK coming pretty soon, so let's get back together in May and see what happened.


----------



## pantherx12 (Dec 2, 2011)

Still don't understand how people can call it a scam. A fake, yeah, go ahead, that's your opinion, but who the hell is he scamming?

You have to be taking money/property from someone for it to be a scam, so far they've not hassled anyone for money.


----------



## qubit (Dec 2, 2011)

Frick said:


> I've read that, and most of the posts go on saying it's a hoax. And you have talked about "unlimited" an awful lot:



Yes, I confess! I confess!  



Frick said:


> I do believe it's a hoax. I really do, mostly because of the execution of the thing. Everything about it is bloody awful, actually. But we might get lucky. We won't for sure, but we might. Now he's telling us there's an actual SDK coming pretty soon, so let's get back together in May and see what happened.



You got a thanks just for inflating my ego with "qubit anno 2010" and digging out that quote, let alone the rest of the post. That was awesome, thank you. 

Yes, indeed. We might just get lucky, so let's just wait for them to prove it. The onus is on them, not us to "believe it".


----------



## Easy Rhino (Dec 2, 2011)

pantherx12 said:


> Still don't understand how people can call it a scam, a fake yeah go ahead that's your opinion, but who the hell is he scamming?
> 
> You have to be taking money/property from someone for it to be a scam, so far they've not hassled anyone for money.



the reason i personally think it is a scam is because i think it is fake and he is trying to lure potential investors.


----------



## AphexDreamer (Dec 2, 2011)

Easy Rhino said:


> the reason i personally think it is a scam is because i think it is fake and he is trying to lure potential investors.



Then he is going to need a lot more than youtube videos and an interview to do that.

And since he already got a grant and he isn't asking for money (he claims he has more money than he needs for the project, actually), I suppose he is doing it all for popularity then? Must not have been loved as a child.


----------



## Steevo (Dec 2, 2011)

I like the perfectly square objects and the exact same trees and the lack of movement. 




Still fake.


----------



## erocker (Dec 2, 2011)

Personally, I can't wait for Santa to bring me UDT!! Best Christmas evar!


----------



## Easy Rhino (Dec 2, 2011)

AphexDreamer said:


> Then he is going to need a lot more than youtube videos and an interview to do that.
> 
> And since he already got a grant and he isn't asking for money (Claims he has more money than he needs to do the project actually)  then I suppose he is doing it all for popularity then? Must not have been loved as a child.



beats me. wake me when we see something tangible. like a game that looks better than bf3 using this tech.


----------



## AphexDreamer (Dec 2, 2011)

Steevo said:


> I like the perfectly square objects and the exact same trees and the lack of movement.



Ok, call it fake, but really? Perfectly square objects? You guys are just saying anything now. 

The repeating stuff is because it would be really hard to come up with tons of new stuff to fill all that space; plus they say countless times they aren't artists or graphics designers.


----------



## Easy Rhino (Dec 2, 2011)

guys! i have come up with UNLIMITED!!!111!1 detail technology. now i have a screenshot. go easy, because i am not a graphic artist. anyone want to invest in my technology?



Spoiler












note: i am not trolling, but using this as an example of how ridiculous this whole thing is.


----------



## erocker (Dec 2, 2011)

That's literally hundreds of pixels. Amazing!

Sorry AD, there's just nothing new in regards to UDT. I don't understand why you felt the need to bring this thread up again. Perhaps we should shut it down until a reputable 3rd party comes out and says this isn't some joke. This topic's discussion has been saturated.


----------

