LuxRender Render settings - LuxRender Wiki

LuxRender Render settings


This page contains an overview of the inner workings of the rendering engine and expands on the use of various variables. Although it may be interesting and useful for advanced users, the information presented here is not strictly necessary to generate nice looking renderings - usually using presets results in a sound selection of settings.


Introduction to LuxRender's rendering process

The goal of the rendering process is to create an image that is very close to how a scene would look in the real world. In order to achieve this, the way light behaves should be replicated. Luckily, light has been studied rather well and there is even a known formula that describes the behaviour of light pretty accurately. Calculating the solution to this formula is what LuxRender does.

The way LuxRender approaches the process is by painstakingly calculating the illumination values at huge numbers of points on the camera surface - one by one. The principle of finding the illumination value for a certain point can be depicted like this:

schematic view of the rendering process: 1 = camera surface, 2 = camera, 3 = scene geometry, 4 = light source, 5 = path


The first step in the process is to define points on the camera surface for which the light intensity should be calculated.

Once a point is chosen, a surface integrator constructs a ray between a light source and the camera surface, taking the characteristics of the camera into account. The ray may be a direct straight line, but more typically a ray is reflected by multiple surfaces before hitting the camera plane.

The calculation of the reflection direction of a ray on the surface of an object is complicated somewhat by the fact that light is typically scattered in various directions when reflecting on a surface. Instead of splitting up the light beam into multiple directions, a single direction is chosen at random, weighted by the surface material's reflection properties. Based on the light source brightness and material properties, the integrator calculates the resulting light intensity on the camera surface.
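As a minimal sketch of choosing a single scatter direction with a material-weighted probability (an illustration, not LuxRender's actual code): for an ideal diffuse surface, directions are typically drawn cosine-weighted around the surface normal, so directions near the normal, which carry more energy, are chosen more often.

```python
import math
import random

def cosine_weighted_direction(u1, u2):
    """Pick a single scatter direction on the hemisphere around the
    surface normal (the z-axis), weighted by the cosine of the angle
    to the normal, as is usual for diffuse surfaces."""
    r = math.sqrt(u1)          # radius on the unit disc
    phi = 2.0 * math.pi * u2   # angle on the unit disc
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))  # project up onto the hemisphere
    return (x, y, z)

# Sanity check: every sampled direction is a unit vector in the
# upper hemisphere (z >= 0).
random.seed(1)
for _ in range(1000):
    d = cosine_weighted_direction(random.random(), random.random())
    length = math.sqrt(sum(c * c for c in d))
    assert abs(length - 1.0) < 1e-9 and d[2] >= 0.0
```

The two inputs u1 and u2 are exactly the kind of "random" numbers the sampler (described below) is responsible for providing.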

After the surface integrator has done its work, the volume integrator will calculate the effect of participating media (such as smoke) and adjust the calculated light intensity for this effect. This results in the final light intensity on the desired point of the camera surface, to which we will refer as sample.

After a number of samples has been calculated, an image needs to be generated. The filter decides to which pixels a calculated sample contributes. The tone mapping process converts calculated light intensities to colour values of pixels.
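The filter step above can be sketched as follows (a simplified illustration, not LuxRender's code): each calculated sample is "splatted" onto every pixel within the filter's support, here a simple box filter.

```python
# Splat one sample onto the pixels it contributes to, using a box
# filter of the given width. `film` maps (x, y) pixel coordinates to
# accumulated values.
def splat_sample(film, width, height, sx, sy, value, filter_width=1.0):
    """Add `value` to every pixel within `filter_width` of the
    sample position (sx, sy), clamped to the image bounds."""
    x0 = max(0, int(sx - filter_width))
    x1 = min(width - 1, int(sx + filter_width))
    y0 = max(0, int(sy - filter_width))
    y1 = min(height - 1, int(sy + filter_width))
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            # Box filter: constant weight everywhere inside the support.
            film[(x, y)] = film.get((x, y), 0.0) + value
    return film
```

A wider filter makes each sample contribute to more pixels, trading sharpness for smoothness; real filters (Gaussian, Mitchell) additionally weight the contribution by distance.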

Renderer

The renderer is a "container" of sorts for the system outlined above. Each renderer contains a different set of surface integrators. The different renderers accommodate differences in how the surface integrators go about their work, as follows:

sampler

The sampler renderer is the "classic" LuxRender, it contains all the surface integrators present in versions prior to 0.8. It is a standard, CPU-based raytracer.

  • available surface integrators:
    • Direct
    • Path
    • Bidirectional
    • ExPhotonMap
    • Instant Global Illumination
    • Distributed Path

hybrid sampler

Hybrid sampler is a modified form of "sampler" that supports GPU acceleration via OpenCL and the LuxRays library. Hybrid sampler will use your computer's graphics card to calculate each ray's flight through the scene, freeing your CPU to handle things such as the filter and sampler. The available surface integrator, path, has the same settings as its counterpart in the regular "sampler" renderer, with the exception of light strategies: path supports the "all", "auto", and "one" strategies.

  • available surface integrators:
    • Path

sppm

SPPM is an experimental stochastic progressive photon mapping integrator. It performs a series of photon mapping passes, refining the image progressively. See the SPPM page for more info.

  • available surface integrators:
    • SPPM

SLG renderer

The SLG renderer packs SmallLuxGPU directly into LuxRender with automatic scene translation. This mode allows for pure GPU rendering with a limited subset of lights and materials. See the GPU page for more information.

Sampler

A critical part of the above process is how to pick the location of a sample and which direction to select when reflecting a ray. These, and many other similar decisions, are based on "random" numbers. The sampler is responsible for generating all the numbers needed for a sample. A good sampler will distribute the numbers evenly (both within each sample and between samples), while avoiding predictable patterns.

LuxRender's samplers can be divided into two categories: "dumb" and "intelligent". The "dumb" ones generate samples (that is, pick locations, directions and so on) without looking at the resulting sample values; there is no feedback. The "intelligent" ones generate samples based on the results of previous samples.

As "dumb" samplers don't analyse the values of calculated samples, they are a bit faster than the intelligent ones. However, except for very simple scenes this advantage is lost, as the rendering itself will be less efficient. This is especially noticeable with many light sources or with specular materials and caustics. On the other hand, due to their non-adaptive nature, "dumb" samplers usually behave in a more predictable manner.

Therefore, for most regular scenes using an "intelligent" sampler is recommended. The "dumb" samplers are recommended for animations (where their predictable behavior is needed), really simple scenes, or quick previews.

metropolis

The metropolis sampler is an "intelligent" sampler that uses the Metropolis-Hastings algorithm and implements Metropolis Light Transport (MLT). The metropolis sampler tries to "seek the light", which makes it a good choice in almost all situations. It does this by making small random changes to an initial reference sample and checking whether the new sample is more interesting, i.e. provides more light. If it is not, the sample is discarded and a new sample is taken. If the new sample is a nice bright one, metropolis adopts it as the new reference sample, then explores the surrounding area using very small path mutations. This process of changing a sample is called a small mutation. This behavior can also lead to fireflies (overly bright spots in the image).

This process of mutation allows metropolis to efficiently locate and explore paths which are important. However, in order to avoid the sampler getting stuck on some small but very bright area, it will once in a while generate a completely random sample and force this to become the new reference sample. This is called a "large mutation".

The "maxconsecrejects" (Maximum Consecutive Rejects) parameter controls when a new path mutation is forced. The default value is 512, so if 512 consecutive samples are discarded, the sampler generates a new path mutation (time to look somewhere else). The "lmprob" (Large Mutation Probability) parameter determines the chance of generating a large path mutation (a completely random sample from somewhere else in the image) rather than a small one (something nearby). Before a sample is added to the film, the metropolis sampler decides if the sample should be accepted as the new base for mutations or rejected (the previous sample is then used instead). Lowering the maxconsecrejects parameter introduces bias and mutes light sources and caustics; for caustics, higher values are better. Raising the lmprob value also introduces bias: setting it to 1 turns metropolis into a dumb random sampler. Lower values are less biased and will produce a more realistic result, but be careful not to bring it too low, or it may introduce undesirable effects.

LuxRender's metropolis sampler is based on Kelemen's paper "A Simple and Robust Mutation Strategy for the Metropolis Light Transport".
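The accept/reject loop described above can be sketched as follows. This is an illustrative one-dimensional model, not LuxRender's implementation; `radiance(s)` stands in for the brightness of the sample at image position `s`.

```python
import random

def metropolis_step(current, consecutive_rejects,
                    radiance, lm_prob=0.4, max_consec_rejects=512):
    """One mutation step: propose a sample, then accept or reject it."""
    # Force a large mutation after too many consecutive rejections;
    # otherwise choose large vs. small mutation with probability lm_prob.
    if consecutive_rejects >= max_consec_rejects or random.random() < lm_prob:
        proposal = random.random()                # large: anywhere on the image
    else:
        proposal = (current + random.uniform(-0.01, 0.01)) % 1.0  # small: nearby
    # Metropolis-Hastings acceptance: always accept brighter proposals,
    # accept dimmer ones with probability new/old.
    a = min(1.0, radiance(proposal) / max(radiance(current), 1e-12))
    if random.random() < a:
        return proposal, 0                        # accepted: new reference sample
    return current, consecutive_rejects + 1       # rejected: keep the old one
```

With `lm_prob=1.0` every step is a large mutation, which is why that setting degenerates into a plain random sampler.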

sobol

A progressive quasi-random "dumb" sampler. It distributes samples in an even but random fashion, using a Sobol sequence. Sobol is the best choice when you want a "dumb" sampler, for example for simple scenes. It also offers excellent performance with GPU rendering, where metropolis can be slow. Unlike the lowdiscrepancy sampler, sobol offers a continuous even distribution of samples, regardless of how many samples per pixel are used, and can be stopped at any time. It is based on Blender Cycles' sobol sampler.

The sobol sampler does not have any parameters besides the noise-aware options.
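To see the "even but random" property, note that the first dimension of a Sobol sequence is the base-2 radical-inverse (van der Corput) sequence, which is easy to sketch: the points stay evenly spread no matter where you stop, which is why the sampler can be interrupted at any time.

```python
def radical_inverse_base2(n):
    """Mirror the bits of n around the binary point:
    1 -> 0.5, 2 -> 0.25, 3 -> 0.75, 4 -> 0.125, ..."""
    result, f = 0.0, 0.5
    while n:
        if n & 1:
            result += f
        n >>= 1
        f *= 0.5
    return result

points = [radical_inverse_base2(i) for i in range(8)]
# -> [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Each new point lands in the largest remaining gap, so any prefix of the sequence covers the interval evenly. The full Sobol sampler extends this idea to many dimensions.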

lowdiscrepancy

The lowdiscrepancy sampler is a quasi-random "dumb" sampler. It uses (0,2) quasi-random sequences for all parts of the engine, which means it works in power-of-2 batches of samples. This can make it awkward to control, and can reduce quality when multiple sequences are used. The sobol sampler does not have these limitations and should generally be used instead.

Within the lowdiscrepancy sampler there are various pixel samplers, which control the order in which the pixels are sampled. The default is "vegas", but "hilbert" is very popular and is the default for animation settings in LuxBlend. Valid values for pixelsampler are hilbert, linear, vegas, lowdiscrepancy, tile, and random. Vegas, lowdiscrepancy, and random all take samples from random points around the image, then select new random points to sample. Hilbert, linear, and tile all take the complete number of samples in one area before moving on to the next section; in other words, your image will not be fully covered right away. The 'lowdiscrepancy' pixel sampler is recommended for progressive rendering; 'hilbert', 'linear', and 'tile' are recommended for tiled rendering.

Parameters

  • Pixelsampler
    • The linear pixelsampler completes one row of pixels at a time, taking [pixelsamples] samples for each pixel before moving on. Then the next row is sampled, and so on.
    • The tile pixelsampler renders a 32x32 pixel square using linear before moving on to the next square to the right, when the end of the row is reached it moves down to the next row of 32x32 pixel squares. Should perform slightly better than "linear" due to cache coherency.
    • The hilbert pixel sampler renders smaller squares of pixels and crawls around the image. It starts in the upper left and moves right, then down; once it is near the bottom it swings right again, then back up; once at the top it moves right again, then down, finishing at the very bottom. It's kind of a zigzag pattern that resembles the flight path of a drunken butterfly. Should perform slightly better than "tile" due to cache coherency.
  • Pixelsamples
    • The "pixelsamples" parameter controls how many samples the sampler will compute at once for a given pixel: the number of samples per pixel, per pass. The default value is 4. It must be a power of 2 (i.e. 2, 4, 8, 16...); otherwise it is rounded up to the nearest power of 2, so a pixelsamples setting of 42 will internally become 64. The higher the value, the better the stratification and the quality, but the time per pass increases correspondingly.
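The rounding rule for pixelsamples can be sketched in a few lines (an illustration of the rule described above, not LuxRender's actual code):

```python
def round_up_pow2(n):
    """Round a pixelsamples value up to the next power of two,
    e.g. 42 -> 64, while powers of two pass through unchanged."""
    p = 1
    while p < n:
        p <<= 1
    return p

# round_up_pow2(42) -> 64
# round_up_pow2(16) -> 16
```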

random

The random sampler is the simplest sampler. It generates completely (pseudo-)random sample positions, resulting in low convergence speed. This sampler is only intended for testing and analysis purposes by developers.

Parameters

  • Pixelsampler
    • Which pixel sampler to use. Default is "vegas".
  • Pixelsamples
    • Number of samples per pixel, per pass. Default is 4. Higher values improve quality but increase the time per pass correspondingly.

erpt

The ERPT (Energy Redistribution Path Tracing) sampler is similar to the metropolis sampler and is based on an Energy Redistribution scheme. It mutates samples which show good contribution, but instead of randomly walking over samples returned, it keeps a pool of image space samples. These image space samples, called chains, are mutated a number of times before the pool is updated.

noise-aware and user-driven sampling

LuxRender's samplers (except for erpt) have an additional feature to help them "focus fire" in order to refine a render. This function works by adding an additional channel to the framebuffer that acts as a "heat map" of where the sampler should add rays. There are two ways this map can be defined: noise-aware sampling and a user-defined map (either a pre-loaded one, or one painted with the refine area tool).

noise-aware sampling

The first method for generating the sampling map is to let LuxRender attempt to generate one itself based on the noisy regions of the image. This can be enabled with the "noise aware" option on the sampler. At a predetermined interval, LuxRender will evaluate the perceptual noise level in the rendered image. The map that is generated here will cause LuxRender to focus its samples on the areas it sees as being more noisy.

Note that noise-aware sampling may not always be effective with the bidirectional integrator, as it has no control over bidir's "light tracing" side. Noise that comes from variance in the light path (such as caustics or areas that are only lit indirectly) may not clear even if focused on.

user driven sampling

LuxRender also allows the user to define a sampling map themselves. You can pre-load one by supplying an OpenEXR file (of the same dimensions as the render) for the "user sampling map" parameter. Additionally, the LuxRender GUI has a tool for painting a map over your render as it runs, found in the "refine area" tab. User-driven sampling does not require the "use noise aware" option to be enabled, but if both are enabled, a combination of the noise-aware map and the user-set map are used. See Refine Brush for more info on the refine-area brush.

Note that neither map replaces the normal behavior of the sampler entirely. For example, the metropolis sampler with noise-aware enabled will "steer" using a combination of a pixel's brightness and noise level.
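The steering described above can be sketched as weighted pixel selection (a hypothetical model, not LuxRender's code): the combined map acts as a probability distribution, so noisier or user-marked pixels are drawn more often, but any pixel with nonzero weight can still be chosen.

```python
import random

def pick_pixel(noise_map, user_map):
    """Draw a pixel index with probability proportional to the
    element-wise product of the noise-aware map and the user map."""
    weights = [n * u for n, u in zip(noise_map, user_map)]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r < 0:
            return i
    return len(weights) - 1
```

The element-wise product as the combination rule is an assumption for illustration; the wiki only states that a combination of the two maps is used.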

Surface Integrator

Surface integrators are central to the rendering process; they construct paths between light sources and the camera surface and calculate the incoming light intensity. The choice of the best integrator depends on the type of scene that needs to be rendered - for example, interiors usually benefit from a different integrator than exteriors.

bidirectional

The bidirectional integrator works by tracing rays both from the light towards the camera ("light path") and from the camera towards the light ("eye path"), hence the name. After it has generated a path in each direction, it will form new paths by trying all possible connections between the two original paths. In other words, it looks for a place where an eye path hit something within a line of sight of a light path hit point. This means that it is able to overcome the major problem with regular path tracing: finding the light sources.

The bidirectional integrator is unbiased, and considers all types of light interactions. It is suitable for interior renderings and other scenes with "difficult" lighting.

The bidirectional integrator is a good default choice if you are unsure which integrator is appropriate.

Bidirectional integrator schematic
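The connection step described above can be sketched as follows (assumed pseudo-logic, not LuxRender's code): every eye-path vertex is tested against every light-path vertex with a shadow-ray visibility check, so one eye path of length E and one light path of length L yield up to E * L candidate complete paths.

```python
def connect_paths(eye_vertices, light_vertices, visible):
    """Form complete paths by connecting each eye-path vertex to each
    light-path vertex that it has a clear line of sight to."""
    contributions = []
    for e in eye_vertices:
        for l in light_vertices:
            if visible(e, l):                  # shadow-ray visibility test
                contributions.append((e, l))   # a complete eye<->light path
    return contributions
```

This is why a single pair of paths can contribute many samples, and why bidirectional handles "difficult" lighting that plain path tracing struggles to find.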

Parameters

  • Eye Depth
    • Maximum number of bounces of the eye path. The higher the value, the more complicated indirect lighting will be considered, however at a cost in speed (how much depends on the sampler used). If this value is set very low, the result may be biased.
  • Light Depth
    • Maximum number of bounces of the light path. As with the eye depth, the higher the value, the more complicated indirect lighting will be considered, however at a cost in speed (how much depends on the sampler used). If this value is set very low, the result may be biased. If your scene contains mostly direct lighting, you may be able to get a small speed boost without any detrimental effects by setting this lower than the eye depth. In most other cases, eye depth and light depth should be equal.
  • Light Ray Count
    • Number of light rays traced per sampled lamp. This control acts as a multiplier on the effect of the light path strategy (see below). Once a lamp gets ready to fire, it will launch a number of light rays equal to the value set here. For example, if you set it to 3 and are using the "all" light path strategy, each lamp will start 3 light paths. Higher numbers of light rays shift the contribution away from the eye paths, which can significantly speed up situations where light tracing is more effective, such as caustics and indirect light. The light rays cannot solve reflections or refractions, however. These require eye rays, so setting this value too high can cause excessive noise in surfaces requiring many eye rays, such as rough metals. Also, all things being equal, eye paths are more useful since they allow adaptive sampling and are always cast in a useful direction. As a result, raising the light ray count above 1 can do more harm than good in some cases. You should experiment with several small values (such as 1, 3, and 5) while rendering for a fixed amount of time to determine the best value for your scene.
  • Russian Roulette
    • Determines the Russian Roulette strategy. The Russian Roulette technique is used to avoid spending time considering "unimportant" paths. This is done by terminating a path at an earlier stage. The default is efficiency, which tries to optimize the overall efficiency by considering past sample values. The alternative strategy is probability which will determine at each bounce if the path should be continued based on a fixed probability. This will significantly speed up rendering but may introduce more noise in the result. If set to none paths will be terminated based only on the bounces parameter. This is very slow and not recommended (most paths are terminated quickly, only a few need to run all the way).
  • Light path strategy
    • This option determines the strategy that will be used to decide which lamps start a light path. It is similar to the normal light strategy option used for shadow rays. (The bidirectional integrator uses that option as well, to configure the eye path. Most of the time, you will want to use the same strategy for both.)
    • Auto
      • Uses the "all" strategy if there are fewer than 5 lights in the scene, otherwise it will use the "one" strategy
    • One
      • Chooses a single light at random to start the light path.
    • All
      • All lights will start a light path.
    • Importance
      • Chooses a single lamp, based on the "importance" value set by the user. By giving a light an importance value of 0, you can prevent it from starting light paths. For dim lights that have little effect on scene illumination - such as indicator LEDs on a device in a well-lit room - this can improve performance significantly.
    • Power
      • This strategy functions similarly to Importance, but it also takes into account the power of the light. The final probability of a light being chosen is its power multiplied by its importance.
    • All Power
      • Starts a number of light paths equal to the number of lamps, and distributes them according to power and importance.
    • Auto Power
      • Uses the "all power" strategy if there are fewer than 5 lights in the scene, otherwise it will use the "power" strategy. A power/importance version of the basic "auto" strategy.
    • Log Power
      • Modified version of Power that uses the logarithm of a lamp's power.
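How the light ray count multiplies the strategy's effect can be sketched as a simple counting model (an illustration of the description above, not LuxRender's code):

```python
def light_paths_started(num_lights, strategy, light_ray_count):
    """Number of light paths launched per sampling round, given the
    light path strategy and the light ray count multiplier."""
    if strategy == "auto":
        # "all" for fewer than 5 lights, otherwise "one", per the wiki.
        strategy = "all" if num_lights < 5 else "one"
    lamps_fired = num_lights if strategy == "all" else 1
    return lamps_fired * light_ray_count

# Three lamps, "all" strategy, light ray count 3 -> 9 light paths.
```

The importance/power strategies change *which* lamp fires rather than how many paths start, so they are not modelled here.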

path

The path integrator uses standard path tracing. It will shoot rays from the eye (camera) into the scene, and will continue reflecting or refracting the ray off objects until it finds a light or the search is terminated. Like the bidirectional integrator, path considers all kinds of reflections, not just specular ones.

The path integrator is unbiased, and suitable for exterior renderings and reference renders. It has trouble dealing with complex lighting found in many interiors, and as a result is usually slower than bidirectional for interior renderings, and even some exterior renderings.

Parameters

  • Bounces
    • Maximum number of bounces that a ray is reflected or refracted before the path is terminated. The higher the value, the more complicated indirect lighting will be considered, however at a cost in speed (how much depends on the sampler used). If this value is set very low, the result may be biased.
  • Include Environment
    • If this flag is enabled, environment light sources will be visible via specular reflections. Default is on.
  • Direct light sampling
    • If this flag is enabled, the path integrator will perform direct light sampling at each intersection. If this flag is disabled, it will only use brute force path tracing. Disabling this flag is generally only useful on scenes that only use an HDRI for illumination.
  • Strategy
    • Which strategy the integrator should use when sampling light sources. Selecting one will cause it to pick a single light source at each vertex, while all will make it sample all the light sources in the scene at each vertex. The default is auto, which will pick the optimal strategy depending on the number of lights in the scene.
  • Russian Roulette
    • Determines the Russian Roulette strategy. The Russian Roulette technique is used to avoid spending time considering "unimportant" paths. This is done by terminating a path at an earlier stage. The default is efficiency, which tries to optimize the overall efficiency by considering past sample values. The alternative strategy is probability which will determine at each bounce if the path should be continued based on a fixed probability. This will significantly speed up rendering but may introduce more noise in the result. If set to none paths will be terminated based only on the bounces parameter.
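The "probability" Russian Roulette strategy described above can be sketched as follows (an illustration, not LuxRender's implementation): at each bounce the path survives with a fixed probability, and a surviving path's weight is divided by that probability so the estimate stays correct on average.

```python
import random

def russian_roulette(weight, continue_prob=0.8):
    """Terminate the path with probability (1 - continue_prob);
    otherwise boost the surviving path's weight to compensate."""
    if random.random() >= continue_prob:
        return None                    # path terminated early
    return weight / continue_prob      # unbiased compensation
```

The "efficiency" strategy works the same way, except that the continuation probability is derived from past sample values instead of being a fixed constant.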

ex photon map

A spectral photon mapping integrator. In the first pass, it will cast rays from the light source and generate a photon map. In a second pass, it will render the map with direct lighting or path tracing.

It's a good speed/quality rendering method, and is recommended if bidirectional is too slow for a particular job (such as animations, where the long, unpredictable render times of bidirectional are problematic). For more information, see Intro to ExPhotonMap.

Note that ExPhotonMap does not use an irradiance cache for final gathering, and as such may be slower than some other photon-map based renderers.

Parameters

  • Direct photons
    • The target number of direct light photons.
    • These are deposited where a light ray hits a surface directly after leaving the light source. You will need a lot of these, but the map generally fills pretty quickly. The default of 1 million is fine in most cases.
  • Indirect photons
    • The target number of photons for soft indirect light.
    • These are deposited from light rays that bounced off another surface prior to landing. This map is the one that your global illumination solution will be computed from, for the most part. For most scenes 500,000-1,000,000 is recommended, although some scenes don't need that many and others need more. Light rays that bounced from a very glossy surface are considered reflection caustics, and are dealt with by the caustic photon map.
  • Caustic photons
    • Target number of photons for reflection and refraction caustics
    • These are left by light rays that passed through a glass material or bounced off a highly glossy surface prior to landing. In most cases, these are not common paths and hence they will fill slowly. Accurate caustics can still take over 1,000,000 caustic photons though, so be careful with the glass materials. If your scene contains no glass and mostly dull materials, this map will not be able to fill and LuxRender should give up on it after 15 seconds or so. You can avoid this and shut off caustics entirely by setting this value to 0. Note that if you set 0 caustic photons and you do have glass in your scene, it will cast a solid shadow.
  • Radiance photons
    • Target number of final gather photons
    • A special photon map storing the outgoing radiance (surface brightness) of a point, used to shortcut final gather calculation.
  • Number of Photons Used/Max Photon Distance
    • These two work together, to help focus the search for photons when a ray from the camera hits something. Number of photons used is the maximum number of photons it will take to estimate the light contribution from that ray. Photons will only be taken into account if they are closer to the hit than the value set for Max photon distance. You can usually leave these settings at their default value.
  • Max depths
    • These are the maximum recursion depths for the photon pass and the rendering pass. They work pretty much the same as for any other integrator.
  • Final Gathering/Gather Angle/Samples
    • Final gathering will make LuxRender perform a more thorough (though also more biased) rendering pass. Instead of using the photon map directly, it will at each camera ray hit point shoot a set of secondary rays. It will then use the radiance photon map at these secondary hit points to estimate how much light reaches the camera ray's hit point, and use this to determine the overall light contribution for that camera ray.
    • "Gather angle" deals with the problem that some photon contributions at those secondary hit points could in fact be impossible: the photons could be bouncing off in a completely different direction than towards the camera hit point. In order to filter out those photons they will be rejected if the angle between their direction and the direction to the camera hit point is too large. The "gather angle" parameter defines this value. 5-10 degrees is a good starting point. Wider angles can speed up rendering by allowing fewer final gather samples, but increase bias: it will make some surfaces strangely (and incorrectly) bright. Too low values may reject too many photons leading to uneven lighting.
    • The last final gathering parameter, samples, is simply the number of secondary rays that will be fired at each intersection. The default is 32, which works well most of the time, although many scenes can get by with less, which will increase render speed substantially. If you increase the number of gather samples substantially you could decrease the gather angle somewhat in order to get a less blurry estimation. Gather angle vs samples is something of a balance between accuracy and speed, it can be adjusted how you like.
  • Rendering mode
    • This is the mode to use during the eye pass. It's usually best to stick with direct and let the photon maps handle caustics and GI, but you can use path tracing too if you so desire.
  • Russian Roulette
    • Determines the Russian Roulette strategy. The Russian Roulette technique is used to avoid spending time considering "unimportant" paths. This is done by terminating a path at an earlier stage. The default is efficiency, which tries to optimize the overall efficiency by considering past sample values. The alternative strategy is probability which will determine at each bounce if the path should be continued based on a fixed probability. This will significantly speed up rendering but may introduce more noise in the result. If set to none paths will be terminated based only on the bounces parameter.
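The gather-angle rejection described above can be sketched with a dot-product test (an assumed form for illustration, not LuxRender's code): a photon is kept only if the angle between its direction and the direction toward the camera hit point is within the gather angle.

```python
import math

def accept_photon(photon_dir, gather_dir, gather_angle_deg=10.0):
    """Keep a photon only if its direction is within gather_angle_deg
    of the direction back toward the camera hit point. Both inputs
    are unit vectors."""
    dot = sum(a * b for a, b in zip(photon_dir, gather_dir))
    # cos is monotonically decreasing, so a larger dot product
    # means a smaller angle between the two directions.
    return dot >= math.cos(math.radians(gather_angle_deg))
```

Widening the angle accepts more photons (faster, but more bias); narrowing it rejects more (less bias, but risks the uneven lighting mentioned above).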

directlighting

The directlighting integrator only covers light that shines on a surface directly (or via mirror and glass surfaces) - diffuse or glossy reflection between surfaces is ignored. Hence, the resulting image will not be very realistic.

This integrator constitutes the "classic" raytracing algorithm (Whitted) and is very fast, but only suitable for quick previews.

Parameters

  • Max-depth
    • Maximum number of specular reflections or refractions to perform before terminating a path. Default is 8. Higher values significantly slow down the integrator, but will show more inter-reflections/refractions.

sppm

Experimental stochastic progressive photon mapping integrator. It will perform a series of passes where it first shoots rays from the camera, stores their positions (called "hitpoints"), then fires rays from the lights to see which hitpoints are illuminated. This process is repeated to progressively refine the image. For more information, see the SPPM page.

Parameters

  • Max eye depth
    • Maximum recursion depth for eye-ray/hitpoint tracing. These rays will continue until they hit a diffuse surface (and are thus able to store a hitpoint) or they reach this depth.
  • Max photon depth
    • Maximum recursion depth for light/photon tracing.
  • Start radius
    • The search radius used during the first pass. The radius will be smaller on subsequent passes.
  • Alpha
    • The rate at which the radius shrinks. On each pass, the previous radius is multiplied by this value. Using alpha=1.0 will disable radius reduction, although this is not recommended.
  • Include Environment
    • If this flag is enabled, environment light sources will be visible via specular reflections. Default is on.
  • Direct light sampling
    • If this flag is enabled, a shadow-ray test will be performed when a hitpoint is stored. If this flag is disabled, the test will not be performed and LuxRender will rely entirely on the photon pass for all lighting. Disabling this option is generally only useful on scenes where all lights are enclosed by geometry.
  • Store glossy
    • Force use of the photon map for glossy surfaces. This can improve performance, but can often leave weird artifacts.
  • Wavelength stratification passes
    • Number of initial passes using photons of pre-defined wavelength. This will help SPPM give even colors to start out with, at the cost of a very small bias.
  • Lookup accelerator
    • Structure used to store the hitpoints during the photon pass. Hybrid hash grid offers the best performance, but also the highest memory consumption. Parallel hash grid offers fairly good performance with less memory usage. It also has a parameter, parallel hash grid spare, to adjust this; higher values are faster but use more memory.
  • Pixel sampler
    • Sampling pattern used during the eye pass. The default is a hilbert curve, which works best for most scenes. Tiles, scanning down in strips (linear), and random sampling are also available.
  • Photon sampler
    • Controls whether to use a simple permuted Halton sequence or the experimental adaptive Markov chain Monte Carlo sampling for photons. The latter is a work in progress and may not function correctly. The default is halton.
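The radius schedule described above (start radius, multiplied by alpha after each pass) can be sketched directly; this follows the wiki's simplified description rather than any particular SPPM paper's update rule.

```python
def radius_after_pass(start_radius, alpha, passes):
    """Photon search radius after a number of SPPM passes, with the
    previous radius multiplied by alpha each pass. alpha = 1.0
    disables the reduction entirely."""
    r = start_radius
    for _ in range(passes):
        r *= alpha
    return r

# radius_after_pass(1.0, 0.7, 2) -> 0.49
```

A shrinking radius is what makes the method *progressive*: early passes gather photons broadly for a quick, blurry estimate, and later passes sharpen it.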

distributed path

The distributed path tracer is an extension of the regular path tracer. Instead of selecting a single reflection direction, it will select multiple directions and spawn additional rays along each direction. The number of rays to spawn is configurable per material type (diffuse, specular and glossy). It also features noise rejection techniques, such as discarding very bright sample values (which could lead to very bright pixels). However, due to the number of parameters, it can be quite difficult to adjust properly.

The distributed integrator is mainly meant to be used with animations, where noise control and predictable rendering times are essential. While it is unbiased in theory, typical parameter settings will cause it to be fairly biased.

For an in-depth description of its parameters, see this page: http://www.luxrender.net/wiki/Distributed_Path

instant global illumination (igi)

Experimental "Instant Global Illumination" integrator. It will automatically place "virtual point lights" at places it thinks should have indirect illumination. It is somewhat like an automatic version of using extra lamps to fake global illumination, then rendering with classic raytracing.

Light Strategy

The light sampling strategy determines how LuxRender decides which lights should be checked with "shadow rays" (a test ray fired at a lamp to see whether the point the current ray hit is illuminated by that lamp). The bidirectional integrator uses this option for the eye path, and has a separate control to set the strategy for the light path.

  • Auto
    • Uses the "one" strategy if there are more than 5 lights in the scene, otherwise it will use the "all" strategy
  • One
    • Chooses a single light at random to send a shadow ray to.
  • All
    • A shadow ray will be sent to each lamp.
  • Importance
    • Chooses a single lamp, based on the "importance" value set by the user. By giving a light an importance value of 0, you can prevent it from being sampled except when a ray from the camera hits it directly. For dim lights that have little effect on scene illumination - such as indicator LEDs on a device in a well-lit room - this can improve performance significantly.
  • Power
    • This strategy functions similarly to Importance, but it also takes into account the power of the light. The final probability of a light being chosen is its power multiplied by its importance.
  • All Power
    • Starts a number of shadow rays equal to the number of lamps, and distributes them according to power and importance
  • Auto Power
    • Uses the "power" strategy if there are more than 5 lights in the scene, otherwise it will use the "all power" strategy. A power/importance version of the basic "auto" strategy.
  • Log Power
    • Modified version of Power that uses the logarithm of a lamp's power.
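The power and importance strategies above amount to a weighted random pick over the lamps. A sketch of how such a pick and its unbiasing weight could look (the tuple layout and function name are illustrative, not LuxRender's internals):

```python
import random

def pick_light_power(lights, rng=random):
    """Pick one light with probability proportional to power * importance.

    `lights` is a list of (name, power, importance) tuples. The second
    return value is 1/pdf: the chosen light's contribution must be
    scaled by it so the estimate stays unbiased.
    """
    weights = [power * imp for _, power, imp in lights]
    total = sum(weights)
    if total == 0.0:
        return None, 0.0              # no light can be sampled
    u = rng.random() * total
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if u <= acc:
            return light, total / w   # 1 / pdf
    return lights[-1], total / weights[-1]
```

A light given an importance of 0 gets weight 0 and is never picked, which matches the behaviour described for the Importance strategy.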

Volume Integrator

The volume integrator handles calculation of light paths through volumes. The best choice will depend on the contents of your scene.

multi

The multi volume integrator allows a ray to scatter as many times as it needs to until the ray is terminated by the surface integrator. This behavior can be slow, but is necessary for heavy-scattering effects such as SSS. Multi offers the best quality, but can result in unreasonably poor performance if a large portion of your image contains scattering volumes (such as fog).

single

The single volume integrator allows only single scattering for volume calculations. This means a ray will scatter once in a given volume, and then no more. This is a useful shortcut for atmospheric effects, since these are normally lightly-scattering volumes that cover the entire scene, and can be very slow to calculate with multi.

emission

Emission is the simplest volume integrator: it calculates only absorption and emission, not scattering. If you are using the "homogeneous" or "heterogeneous" media, you will need to use a different volume integrator to see their scattering.
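The absorption-and-emission calculation performed by the emission integrator follows the Beer-Lambert law. A sketch for a homogeneous (constant-coefficient) ray segment, with illustrative function names:

```python
import math

def transmittance(sigma_a, distance):
    """Beer-Lambert attenuation: the fraction of light surviving
    `distance` units of a medium with absorption coefficient
    `sigma_a` (per unit length)."""
    return math.exp(-sigma_a * distance)

def emission_along_ray(emission, sigma_a, distance):
    """Radiance added by a constant emitter of strength `emission`
    over the segment, accounting for self-absorption:
    L = e * (1 - exp(-sigma_a * d)) / sigma_a  for sigma_a > 0."""
    if sigma_a == 0.0:
        return emission * distance
    return emission * (1.0 - math.exp(-sigma_a * distance)) / sigma_a
```

Scattering integrators (single, multi) add in-scattered light on top of this; emission stops here, which is why it is the cheapest option.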

Filter

While calculating light samples, LuxRender treats the camera surface as a continuous surface - it does not yet take the number of pixels in the final image into account. Once a certain number of samples has been calculated, each sample must be assigned to the pixels of the rendering it contributes to. This step is executed by the filter.

Typically, a sample contributes to multiple pixels, with most of its contribution being added to the pixel on which it is located and smaller amounts going to neighbouring pixels. Filters differ in the exact distribution of a sample's light contribution, and each filter has a setting that defines the size of the total area over which the sample's contribution is spread.

comparison of filters: sinc, Mitchell, box and Gaussian


Choosing the right filter influences the sharpness and smoothness of the rendering, although the difference between the various filters is subtle and differences in rendering time are negligible. Using the Mitchell filter with default settings is generally a good and safe choice. However, since it has negative lobes (the filter contains negative values), it may produce artifacts if the scene contains small but strong reflections or light sources. In that case the Gaussian filter may be a better choice.
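The negative lobes mentioned above can be seen directly in the Mitchell-Netravali filter function. A sketch of the 1D weight with the commonly recommended B = C = 1/3 parameters (LuxRender's actual defaults and implementation may differ):

```python
def mitchell_1d(x, B=1.0/3.0, C=1.0/3.0):
    """Mitchell-Netravali filter weight at distance x (support |x| < 2).
    For 1 < |x| < 2 the weight dips below zero: the 'negative lobes'."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
    return 0.0
```

Because the weight is negative for 1 < |x| < 2, a very bright sample subtracts light from pixels at that distance - the origin of the dark ring artifacts shown below.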

Mitchell

Mitchell filter


The dark edge around the specular reflection of the area light in the scene is caused by the negative lobes of the Mitchell filter.


Gaussian

Gaussian filter


Though not as sharp as the Mitchell or sinc filters, the Gaussian filter is free of the artifacts that those two filters produce around bright specular areas of an image.


box

box filter


The box filter gives quite noisy and unsharp results and is therefore not recommended for general use.

sinc

sinc filter


The pair of dark and bright rings around the specular reflection of the area light in the scene is caused by the negative and positive lobes of the sinc filter.


triangle

triangle filter


Accelerator

The accelerator is used to figure out which objects do not need to be taken into account for the calculation of a ray. It is a way of "compiling" the scene into a format that can be rendered faster.

QBVH

This accelerator is a modified bounding volume hierarchy that has four children per node instead of two and uses SSE instructions to traverse the tree. It uses much less memory than a kd-tree while providing equivalent or better speed. In LuxRender, QBVH is much better SSE-optimized than the kd-tree, and as a result will be faster in almost all cases.

If you aren't sure which accelerator to use, QBVH is probably the best choice.

SQBVH

Variant of QBVH with spatial-split support. SQBVH offers better performance, but uses more memory and takes longer to build. It has the same parameters as QBVH.

kd-tree

A 3-dimensional kd-tree. The first split (red) cuts the root cell (white) into two subcells, each of which is then split (green) into two subcells. Finally, each of those four is split (blue) into two subcells. Since there is no more splitting, the final eight are called leaf cells. The yellow spheres represent the tree vertices.


Also known as "tabreckdtree", this accelerator is fairly fast, but is more memory-hungry than QBVH and not as well SSE-optimized.

A kd-tree uses only splitting planes that are perpendicular to one of the coordinate system axes. This differs from BSP trees, in which arbitrary splitting planes can be used. In the typical definition, every node of a kd-tree, from the root to the leaves, stores a point, so each splitting plane must pass through one of the points in the tree. This also differs from BSP trees, where typically only the leaves contain points (or other geometric primitives). In an alternative definition, sometimes called a kd-trie, points are stored only in the leaf nodes, although each splitting plane still passes through one of the points.
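The classic point-storing definition can be sketched as a toy median-split builder (illustrative only; LuxRender's tabreckdtree partitions geometric primitives, not bare points, and uses a cost heuristic rather than plain medians):

```python
def build_kdtree(points, depth=0):
    """Build a kd-tree over 3D points with axis-aligned median splits.
    Each internal node stores a point lying on its splitting plane,
    matching the classic definition. Returns nested dicts."""
    if not points:
        return None
    axis = depth % 3                      # cycle through x, y, z
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],             # lies on the splitting plane
        "axis": axis,
        "left":  build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }
```

Traversal then descends only into the child cells a ray actually crosses, which is what lets the accelerator skip most of the scene per ray.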


none

It is possible to not use an accelerator, and simply brute-force the scene. This is not recommended in actual production use.

Speeding Up LuxRender

One of the biggest mistakes new users make with any global illumination renderer is trying to rely too much on indirect light. While Lux can solve most indirect lighting situations eventually, direct light is always faster. A good habit to get into is to use the direct lighting integrator when setting up your scene lighting. As a general rule of thumb, if your scene lighting looks good without global illumination, it will look great and render quickly when you switch to a GI-capable integrator (such as bidirectional).


There are also some steps you can take to optimize the scene to make it a bit faster:

First, keep reflectance (i.e. brightness) in diffuse components of your materials below 0.8 or so. This will allow a ray to "use up" its energy so Lux can be done with it sooner, and it will also help your scene to de-noise faster. On the same note, avoid using specular colors higher than .25 (or much lower, 0.02-0.05 is a good range for most everyday objects). Reflection color on metal materials should be kept below .8 as well. If this makes your scene too dim, simply adjust the tonemapping to expose it more.
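The effect of diffuse reflectance on render time can be estimated with a geometric series: if each bounce survives Russian roulette with probability roughly equal to the material's albedo, the expected path length is 1/(1 - albedo). An illustrative model, not a LuxRender function:

```python
def expected_bounces(albedo):
    """Mean path length when each bounce survives with probability
    `albedo` (geometric series 1 + a + a^2 + ... = 1 / (1 - a)).
    A rough model of why bright diffuse materials render slowly."""
    assert 0.0 <= albedo < 1.0
    return 1.0 / (1.0 - albedo)
```

With albedo 0.8 the mean path is about 5 bounces; at 0.95 it is about 20 - one reason very bright materials take much longer to clean up.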

Second, limit the number of faces on objects that are used as meshlights. Each face in a mesh light is a light itself that must be sampled, so keep them as simple as possible. If you have a densely-tesselated, but dim meshlight, you can also use the "power" or "importance" light strategies to avoid sampling the large number of faces. This will remove the performance hit of a dense meshlight, but can produce strange results if the light significantly contributes to the scene illumination. This trick works best for dimly-glowing objects, such as indicator LEDs, bioluminescent creatures, and so forth.

Third, homogeneous volumes are much slower than "clear" type volumes or no volumetrics at all, so SSS and atmospheric effects should be used sparingly unless you are ready for a very long render. Also, consider using the "single" volume integrator when using atmospheric scattering.

Fourth, procedural textures and microdisplacement add calculations every time a ray intersects them, so don't get carried away with them.

Finally, if you aren't sure what settings to use, use the Metropolis sampler, bidirectional path tracing, the Mitchell filter, and the QBVH accelerator. This should give a clean, artifact-free image with reasonable speed.

If you are doing test renders, lowering resolution, rendering only a portion of the frame, or using a simpler surface integrator (such as direct) can give useful results with much less waiting.

Russian Roulette Explained

By default, LuxRender uses a technique called Russian Roulette (RR). The Russian Roulette technique is a way to reduce the average depth of a ray (number of bounces) in an unbiased way. Usually the main contributions happen at the first few bounces of a ray, so going all the way to 20 bounces will usually not contribute significantly, but will take 4x the time of just 5 bounces.

The Russian Roulette technique comes in two modes: probability and efficiency. The former uses a fixed probability to terminate a path at each bounce. The default, however, is efficiency, which takes into account how much light there is to be gained by going one step further. This usually reduces noise much better than the probability mode, but it makes termination material dependent: if a material is entirely white, there is a 100% probability of the path continuing, since the material reflects all of the light from that bounce and the extra bounce will contribute fully.

So, if you're using Russian Roulette in efficiency mode, which is the default, then the darker the material, the shorter the average depth, and thus the quicker it can start on a new sample.

In addition, the lower average depth helps to prevent fireflies, since low-probability events that are accepted must be scaled up to compensate for all the other low-probability events that weren't accepted.
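Efficiency-mode termination, as described above, can be sketched as a survival test on the path throughput, with the 1/p compensation that keeps the estimator unbiased (illustrative Python, not LuxRender's code):

```python
import random

def russian_roulette(throughput, rng=random):
    """Efficiency-mode Russian roulette, sketched: continue a path with
    probability equal to how much light the next bounce could still
    carry (the path throughput), and divide by that probability on
    survival so the estimator stays unbiased."""
    p = min(1.0, throughput)
    if p <= 0.0 or rng.random() >= p:
        return None                # terminate the path
    return throughput / p          # survivors are scaled up by 1/p
```

A perfectly white material gives p = 1 and the path always continues; a dark material terminates early most of the time, which is the behaviour the text describes.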