Perhaps you're all already aware of this, but it hadn't come to my attention until recently. Euclideon is an Australian company focused on developing innovative animation rendering software, and as of 2011 they claim to have mastered realtime 3D rendering to the point of calling it "Unlimited Detail." Their claim is that their rendering technique, once fully developed, will be able to display graphics with 100 times the detail currently possible under realtime rendering/hardware constraints. Their technique relies on 3D "atoms" (points), which is nothing new in animation, but they have supposedly developed algorithms that can handle billions of them in real time. Whether or not all of their claims are true, they still managed to receive a $2 million grant from the Australian government. Could this be the next generation of realtime 3D rendering? Take a look at their YouTube demonstration here:
I don't know, you're going to need some serious power to run that. I can't believe that a different way of looking at objects can suddenly make computers run seemingly faster and better. Polygons should be easier to render, as they're flat connected planes; an object in this new scheme is made up of millions of dots, which is probably harder to handle. If you've got yourself an Nvidia Plex, sure, this is possible. Also, that guy's voice is seriously bugging me.
Having worked a bit with animation, that was exactly my reaction. The computer would have to remember every single position of those millions of dots! However, rendering programs also have to calculate the surfaces between the vertices to form each polygon, for thousands of polygons. My guess is that removing the processing it takes to fill in the edges and faces between a polygon's vertices, across all of the polygons, speeds things up, leaving more processing room for millions of dots. (My assumption could be a stretch, but it kind of makes sense.) Anyway, I laughed at what you said about his voice. Sometimes you need a voice like that to sell a product ;-)
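To put that guess in rough numbers (a back-of-envelope sketch with made-up figures, not anything from Euclideon): a triangle mesh stores vertex positions plus connectivity, while a point cloud stores positions only, but needs far more samples to cover a surface densely.

```python
# Back-of-envelope storage comparison; the counts below are invented
# purely for illustration.

def mesh_bytes(n_tris, n_verts):
    # 12 bytes per vertex (3 x float32 position)
    # + 12 bytes per triangle (3 x int32 vertex indices)
    return 12 * n_verts + 12 * n_tris

def cloud_bytes(n_points):
    # 12 bytes per point (3 x float32 position), no connectivity at all
    return 12 * n_points

# A 10,000-triangle model vs. a million-point cloud of the same object:
print(mesh_bytes(10_000, 5_002))   # 180024 bytes, ~176 KB
print(cloud_bytes(1_000_000))      # 12000000 bytes, ~11.4 MB
```

So memory-wise the dots lose badly; the hoped-for win is skipping the per-triangle edge and fill work at draw time, as the post above speculates.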
Anything that advances GPU technology can be extrapolated into medical and physics fields. I'm all for using dots!
Absolutely, it would be universally applicable to any field that uses 3D modeling. Engineering was the first thing that came to mind, but medical and physics fields would make great use of it too. That's a funny way of putting it, but essentially yes, the goal is better quality even on lower-end hardware.
Just imagine how hard it would be to create a map if you have to spawn and place each grain of sand like he keeps going on about. >.<
Patience is key! I suppose no one has ever built a sand sculpture one grain of sand at a time... Euclideon is creating a 3D model conversion program that converts the most popular 3D model formats into the dot-models. If you wanted to, you could create your 3D models as usual, then convert them afterward.
Yes, they've been around for a while, but allegedly what this company has built works quite differently. I suppose I can't look into their methods further until they release more information.
This was posted on another forum about half a year ago and the general consensus was that the people involved had done some dodgy things in the past and that this was completely fake and just a way to get people to invest money into something that'd never happen. i.e. rendering like that is easy but as soon as you apply animation, it will grind to a halt.
If this is the case, that's really disappointing, as they managed to get over $2 million for the project (whatever the Australian equivalent of $2 million is). As to applying motion to the 3D models, that concern is much more understandable, given that the only thing moving in their realtime demos was the camera. Unless they release more realtime renders of their software, there is too much room for skepticism.
It's the opposite here: voxels and octrees are already in use in medical imaging (3D tomography in particular, so a mix of physics and medicine), where the main advantage is displaying 30 GB+ models by only loading the visible and relevant voxels into memory. The data structure makes it easy to ignore details smaller than the screen resolution, and occlusion comes for free (hidden stuff simply isn't loaded).

Another nice thing is that voxel data can be compressed with much the same approach as JPEG, using Fourier-type transforms (the DCT), which are efficient to (de)compress with high compression at a tunable loss, or simply by storing it as a sparse structure, which saves space by not storing "empty" voxels, nor their "child" voxels. Both can be combined, of course. Example: http://www.youtube.com/watch?v=ke6-kwuN0Rs

Unlimited Detail is boasting too much IMO: almost no results (except those scenes made of the same models repeated over and over...), and they're everywhere saying they'll revolutionize everything and make Crysis 3 run on a 90 MHz Pentium because their shit is SO OPTIMISED (OK, I exaggerated a bit there, but this is how they sound). This is the future of gaming: And this source is more credible than Unlimited Detail; it's a research paper from Nvidia and academics... FG
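A minimal sketch of the "don't store empty voxels" idea described above (my own toy code, nothing to do with either product): a sparse octree that only allocates child nodes along occupied paths, so empty space costs no memory at all.

```python
class OctreeNode:
    """A sparse octree node: absent children mean empty space."""
    def __init__(self):
        self.children = {}  # child index 0-7 -> OctreeNode

def insert(root, x, y, z, depth):
    """Insert a voxel at integer coordinates into a cube of side 2**depth,
    allocating nodes only along this voxel's path."""
    node = root
    for level in range(depth - 1, -1, -1):
        # pick the octant from one bit of each coordinate
        idx = (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)
        node = node.children.setdefault(idx, OctreeNode())
    return node

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

# Two voxels in a 256^3 volume touch only 17 nodes, versus the
# ~16.7 million cells a dense grid would have to store.
root = OctreeNode()
insert(root, 0, 0, 0, 8)
insert(root, 255, 255, 255, 8)
print(count_nodes(root))  # 17
```

The skipped subtrees are also exactly what makes the occlusion/level-of-detail tricks cheap: the renderer can simply stop descending where nothing is stored or where a node is smaller than a pixel.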
That demo was super sexy! My favorite part was the illumination. I have never seen computer-generated illumination like that before. That's just incredible!
Well, the advantage of the voxel octree is immediate here, as it can calculate the lighting at the same time it "explores" the octree. Of course it has to calculate more stuff, but it can be parallelized easily, and it benefits from the same advantages of occlusion and detail-vs-resolution culling. IIRC a single pass is required (unless you want to add post-processing effects like AA), although I don't remember if the reflections and light "bounces" were done in a single pass. I think that walking the voxels while doing the light calculation is called "ray tracing," but I'm not quite sure. An interesting read if you're geeky on the side: http://www.nvidia.com/docs/IO/88972/nvr-2010-001.pdf
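For what it's worth, that "explore voxels along a ray" idea can be sketched over a plain dense boolean grid (my own simplification using Amanatides-Woo style stepping; the paper above traverses a sparse octree instead, skipping whole empty subtrees per step). The walk stops at the first solid voxel, which is where the free occlusion comes from.

```python
def cast_ray(grid, origin, direction, max_steps=256):
    """Step voxel by voxel along a ray through a cubic boolean grid and
    return the first occupied voxel's coordinates, or None on a miss.
    Assumes the origin sits at a voxel center."""
    n = len(grid)
    pos = [int(c) for c in origin]
    step = [1 if d > 0 else -1 for d in direction]
    # ray-parameter distance to cross one whole cell on each axis
    t_delta = [abs(1.0 / d) if d != 0 else float("inf") for d in direction]
    # distance from a cell center to the first boundary on each axis
    t_max = [0.5 * td for td in t_delta]
    for _ in range(max_steps):
        if all(0 <= pos[a] < n for a in range(3)) and grid[pos[0]][pos[1]][pos[2]]:
            return tuple(pos)  # first hit: everything behind it is occluded
        axis = min(range(3), key=lambda a: t_max[a])  # cross nearest boundary
        pos[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None
```

Shooting one such ray per pixel, plus secondary rays toward the lights, is the basic ray-casting/ray-tracing setup the post describes, and each ray is independent, which is why it parallelizes so easily.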