You know, I see a lot of talk about unbiased GPU rendering. Most newer GPU-based renderers seem to be built on an unbiased rendering engine of one type or another. What I would like to know is: if unbiased rendering was so slow and now runs so fast with GPU acceleration, why don't we see more biased rendering engines being adapted to GPU computing too? I would guess that if you could adapt them, they would run blazing fast, and, just like in CPU rendering, much faster than the unbiased ones.
I made a few guesses as to why this is. It could be a number of things: the allure of the highly photorealistic output such renderers are capable of, or maybe unbiased engines are simply easier to adapt to GPU computing programming-wise, etc. I just don't know the exact reason, and I would like to.
One of the reasons I bring this up is that not everybody wants photorealistic renders all the time; many times what you want is an efficient render, or sometimes the look of simple ray-traced images. If somebody could show me a renderer that produces regular ray-traced images in a biased way very, very fast on a GPU, I would be interested in that, and I know many other people would be too. Even if GPU-based unbiased renderers run fast, they are still not instantaneous, and what animators want is not always photorealism. Sometimes it is, but in many instances what they want is a fairly good-looking render that is fast and efficient, and sometimes they want a non-photorealistic rendering of one kind or another (and there are several types).
Back in one of the posts, user tomb said this: "Either way, lux supports both biased and unbiased integrators as plugins so it's quite possible to integrate such techniques if anyone feels the itch." I wondered if he meant that future LuxRender versions could do both biased and unbiased rendering on the GPU. I would like to know if that is what he was referring to; some input on this would be appreciated.
I know that the V-Ray people created their new GPU-based renderer (V-Ray RT), but I noticed that the new renderer is unbiased whereas the older one is biased. Again, what I wonder is why most companies and open-source groups creating GPU-based renderers are choosing unbiased engines. Is there an obstacle to creating a biased renderer that is GPU-based? If so, what is the issue? Or is it not an obstacle but a decision made on rendering-quality merits? Any other reasons?
I want to make clear that it is not that I don't like unbiased renderers, because I do; it is just that I like some of the characteristics of biased renderers too, and personally I would like to have the capability of using both types. I know a lot of people out there like some of those characteristics as well. For example, one of the things I like about biased renderers is that the rendering time is finite, whereas an unbiased render sort of keeps going, refining the image until you are satisfied with the result.
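To make that "keeps refining" behavior concrete, here is a toy sketch (not any particular renderer's code) of what "unbiased" means in the Monte Carlo sense: the estimator's expected value equals the true answer at any sample count, so adding samples never changes the answer on average, it only reduces the noise. That is why an unbiased render has no natural finish line.

```python
import random

def mc_estimate(f, n_samples):
    """Unbiased Monte Carlo estimate of the integral of f over [0, 1].
    The expected value of this estimator equals the true integral for
    ANY sample count; more samples only reduce the noise (variance)."""
    total = 0.0
    for _ in range(n_samples):
        total += f(random.random())
    return total / n_samples

# Estimate the integral of x^2 over [0, 1]; the true value is 1/3.
random.seed(1)
coarse = mc_estimate(lambda x: x * x, 100)       # noisy, like an early pass
fine = mc_estimate(lambda x: x * x, 100_000)     # cleaner, like a long render
```

A biased renderer, by contrast, trades that guarantee for shortcuts (interpolation, irradiance caching, clamping) that converge to a slightly "wrong" but stable answer in bounded time.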
I'm not too clear on how this plays into animation, because in animation you render many, many frames, and you cannot render the second until the first is fully finished. So for unbiased rendering to be used in animation, I guess you would have to do a few test renders, wait until you get decent quality, and then somehow specify a finite time per rendered frame based on your tests. I would like to know more about that, because I don't have much experience along that line. I have rendered many short animations with biased renderers but never with an unbiased one. I have rendered stills with unbiased renderers, though only a small number compared with other methods.
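The test-render-then-budget approach described above is basically how it is done: you pick a per-frame halt condition (a sample count or a wall-clock limit) from your tests, which makes the unbiased render's time finite and predictable. Here is a minimal, hypothetical sketch of such a loop; `sample_pass` is a made-up stand-in for one full-image pass of a progressive renderer, not a real engine's API.

```python
import random
import time

def render_frame(sample_pass, max_samples=256, time_budget_s=None):
    """Progressively refine one frame, stopping at a fixed sample budget or
    at an optional wall-clock budget, whichever comes first. sample_pass()
    is a hypothetical stand-in for one full-image pass of a progressive
    renderer; real engines expose similar halt conditions."""
    start = time.monotonic()
    accum = 0.0
    passes = 0
    while passes < max_samples:
        if time_budget_s is not None and time.monotonic() - start >= time_budget_s:
            break
        accum += sample_pass()          # accumulate one pass worth of radiance
        passes += 1
    return accum / max(passes, 1), passes

# Toy stand-in: each "pass" returns a noisy measurement around 0.5.
random.seed(0)
frame, passes_used = render_frame(lambda: random.gauss(0.5, 0.1), max_samples=64)
```

With a fixed sample budget per frame, every frame in a sequence has the same noise characteristics, which matters for animation because frame-to-frame noise variation is very visible.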
Sometimes graphic artists like the "computer graphics look" of renderings, because for some illustration work it is very efficient, and its "clean" or "regular" look is beautiful for some types of images; it also sometimes helps convey an idea with ease precisely because it looks so clean and simple. You see this type of illustration in magazines, and you see animations rendered like this for TV documentaries, etc. Some of us prefer that type of CG look at times, and I recently wrote a book in which I put it to good use, precisely because of its "beautiful simplicity" and its efficiency.
For example, sometimes what I want is to take a 3D model and render it with ambient occlusion (AO) alone, or with AO combined with one or a few other light sources, but not a fully photorealistic render. Sometimes I would rather use full GI to render a still; even though it takes longer than AO, it looks better and different. For some images I want speed, and AO is more than enough; for others I want full GI. I use Blender, and in Blender I can get AO with the internal renderer, but for GI I have to use another renderer, as I do with YafaRay. Besides, there are other non-photorealistic uses of renderers today that have good applications, like cartoon rendering and many other types, and this is still evolving at the moment.
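Part of why AO is so fast is that it is an entirely local visibility test: at each shading point you fire rays over the hemisphere and count how many escape, with no light transport at all. Here is a toy sketch of that idea for a single point, with a made-up one-sphere scene; the geometry, positions, and ray count are all illustrative assumptions, not any renderer's actual implementation.

```python
import math
import random

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if the ray origin + t*direction (t > 0, unit direction)
    intersects the sphere. Standard quadratic ray-sphere test."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4  # small epsilon avoids self-intersection

def ambient_occlusion(point, n_rays=512):
    """Fraction of hemisphere rays (around the +z normal) that escape the
    toy scene: 1.0 means fully open, lower means darker / more occluded."""
    hits = 0
    for _ in range(n_rays):
        # Uniform direction on the upper hemisphere (a real renderer
        # would typically use cosine-weighted sampling instead).
        z = random.random()
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        if ray_hits_sphere(point, d, center=(0.0, 0.0, 1.5), radius=1.0):
            hits += 1
    return 1.0 - hits / n_rays

random.seed(2)
under_sphere = ambient_occlusion((0.0, 0.0, 0.0))  # directly below the occluder
far_away = ambient_occlusion((10.0, 0.0, 0.0))     # occluder far off to the side
```

Because each point's AO value depends only on local ray casts, this kind of workload is also embarrassingly parallel, which suggests it should map to a GPU at least as naturally as full path tracing does.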
So will we also see other types of GPU-based rendering that are not photorealistic? Will there be biased GPU renderers in the future? Will there be simpler GPU-based renderers that let you use AO only when you want it, instead of full photon-mapping-based rendering all the time, or no AO at all and just simpler ray tracing (keeping in mind that there are different degrees of ray-tracing quality and features alone)? What are the obstacles to this, if any? I would like the more knowledgeable of you out there to give the readers of this forum, and me, more input on all this.