LuxRender Render Settings

From LuxRender Wiki

This page contains an overview of the inner workings of the rendering engine and expands on the use of various variables. Although it may be interesting and useful for advanced users, the information presented here is not strictly necessary to generate good-looking renderings - usually using presets results in a sound selection of settings.


Introduction to LuxRender's rendering process

The goal of the rendering process is to create an image that is very close to how a scene would look in the real world. In order to achieve this, the way light behaves should be replicated. Luckily, light has been studied rather well, and there is even a known formula - the rendering equation - that describes the behaviour of light quite accurately. Calculating a solution to this equation is what LuxRender does.

The way LuxRender approaches the process is by painstakingly calculating the illumination values at huge numbers of points on the camera surface - one by one. The principle of finding the illumination value for a certain point can be depicted like this:

schematic view of the rendering process: 1 = camera surface, 2 = camera, 3 = scene geometry, 4 = light source, 5 = path

The first step in the process is to define points on the camera surface for which the light intensity should be calculated. This task is handled by the sampler.

Once a point is chosen, a surface integrator constructs a ray between a light source and the camera surface, taking the characteristics of the camera into account. The ray may be a direct straight line, but more typically a ray is reflected by multiple surfaces before hitting the camera plane.

The calculation of the reflection direction of a ray on the surface of an object is complicated somewhat by the fact that light is typically scattered in various directions when reflecting off a surface. Instead of splitting up the light beam into multiple directions, a single direction is chosen probabilistically, weighted by the surface material's reflection properties. Based on the light source brightness and material properties, the integrator calculates the resulting light intensity on the camera surface.

After the surface integrator has done its work, the volume integrator will calculate the effect of participating media (such as smoke) and adjust the calculated light intensity for this effect. This results in the final light intensity on the desired point of the camera surface, to which we will refer as sample.
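For a homogeneous medium, the volume integrator's adjustment amounts to multiplying the radiance by a transmittance term that falls off exponentially with distance (the Beer-Lambert law). The following Python sketch illustrates the principle only; the function name and signature are illustrative, not LuxRender's actual API:

```python
import math

def attenuate(radiance, sigma_t, distance):
    """Attenuate radiance travelling `distance` through a homogeneous
    participating medium with extinction coefficient `sigma_t`,
    following the Beer-Lambert law."""
    transmittance = math.exp(-sigma_t * distance)
    return radiance * transmittance
```

With `sigma_t = 0` (no medium) the radiance passes through unchanged; denser media or longer paths attenuate it more strongly.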

After a number of samples has been calculated, an image needs to be generated. The filter decides to which pixels a calculated sample contributes. The tone mapping process converts calculated light intensities to colour values of pixels.
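The overall pipeline - sampler, integrator, then filtering onto pixels - can be sketched as a simple loop. This Python sketch uses a purely random sampler and a trivial 1x1 box filter, with `radiance_at` standing in for the surface and volume integrators; all names are illustrative and do not reflect LuxRender's internal API:

```python
import random

def render(width, height, samples_per_pixel, radiance_at):
    """Minimal sketch of the sampler -> integrator -> filter pipeline.
    `radiance_at(x, y)` stands in for the integrators; each sample is
    splatted onto a single pixel (a 1x1 box filter)."""
    film = [[0.0] * width for _ in range(height)]
    counts = [[0] * width for _ in range(height)]
    for _ in range(width * height * samples_per_pixel):
        # "dumb" random sampler: pick a location on the film plane
        x, y = random.uniform(0, width), random.uniform(0, height)
        value = radiance_at(x, y)                       # integrators
        px, py = min(int(x), width - 1), min(int(y), height - 1)
        film[py][px] += value                           # filter splat
        counts[py][px] += 1
    return [[film[j][i] / counts[j][i] if counts[j][i] else 0.0
             for i in range(width)] for j in range(height)]
```

A real renderer would additionally tone map the accumulated film values into displayable colours, and a real filter would spread each sample over several pixels.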


Samplers

As mentioned above, the sampler is responsible for deciding the locations on the camera surface where samples are to be calculated. In order to avoid aliasing and other artifacts, the number of samples is typically a lot higher than the number of pixels of the final image. A good sampler will distribute the sample locations evenly, while avoiding predictable patterns.

LuxRender's samplers can be divided into two categories: the "dumb" ones that generate sample locations without looking at the resulting sample values, and the "intelligent" ones that base the position of new sampling locations on the values of the samples that have already been calculated.

As "dumb" samplers don't analyse the values of calculated samples, they are a bit faster than the intelligent ones. However, on complex scenes this advantage gets lost as the rendering itself will be less efficient, in particular if there are many light sources or specular materials and caustics. Therefore, except for animations, really simple scenes, or quick previews, using an "intelligent" sampler is recommended.


The lowdiscrepancy sampler is a "dumb" sampler that uses (0,2)-sequence quasi-random samples for all parts of the engine. It is currently (v0.5) the fastest dumb sampler and the best option for most users who want to use a dumb sampler. Most of the configured presets in v0.5 exporters use this sampler.

The lowdiscrepancy sampler can use various pixel samplers, which control the way your image is sampled during rendering. The 'lowdiscrepancy' pixel sampler is recommended for progressive rendering, while 'tile' is recommended for tiled rendering.
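The idea behind low-discrepancy sequences can be illustrated with the van der Corput radical inverse, the building block of such sequences: the digits of the sample index are mirrored about the radix point, which fills the unit interval far more evenly than random numbers do. This is a sketch of the principle only, not LuxRender's actual (0,2)-sequence implementation:

```python
def radical_inverse(n, base=2):
    """Van der Corput radical inverse: mirror the digits of `n` about
    the radix point, yielding a low-discrepancy sequence in [0, 1)."""
    inv, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv
```

Successive indices produce values such as 0.5, 0.25, 0.75, 0.125, ... - each new point lands in the largest gap left by the previous ones, which is exactly the even-yet-unpredictable coverage a good sampler needs.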


The halton sampler is a "dumb" sampler that uses Halton-Zaremba quasi-random sequences. It provides a better sample pattern than the 'lowdiscrepancy' sampler, but is considerably slower. Using it with the 'lowdiscrepancy' pixel sampler is recommended for progressive rendering, while 'tile' is recommended for tiled rendering.
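A plain Halton sequence (the basis on which the Halton-Zaremba variant improves) generates each dimension of a sample from the radical inverse in a different prime base. The sketch below shows the classic construction; LuxRender's actual implementation differs in its digit scrambling:

```python
def halton(index, base):
    """`index`-th element of the Halton sequence in the given prime base."""
    result, f = 0.0, 1.0
    while index:
        index, digit = divmod(index, base)
        f /= base
        result += digit * f
    return result

def halton_2d(index):
    """A 2D film-plane sample: bases 2 and 3 give well-distributed,
    uncorrelated coordinates."""
    return halton(index, 2), halton(index, 3)
```

Using a distinct prime base per dimension avoids the correlation artifacts that would appear if the same sequence were reused for both image axes.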


The random sampler is the simplest sampler. It generates random (or actually pseudo-random) sample positions, resulting in very low convergence speed. This sampler is only intended for testing and analysis purposes by developers.


The metropolis sampler is an intelligent sampler that uses the Metropolis-Hastings algorithm and implements Metropolis Light Transport (MLT). It generates pseudo-random sample values at each iteration and, based upon feedback data (the brightness of calculated samples), decides whether it is worthwhile to explore the previous sample's surroundings. If so, it does this by mutating (offsetting) the sample data of the previous iteration.

This is the recommended sampler for most scenarios.
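One iteration of the Metropolis-Hastings scheme described above can be sketched as follows. The sketch occasionally takes a "large step" (a completely fresh sample, to keep exploring the whole image) and otherwise proposes a small mutation, accepting the proposal with a probability given by the brightness ratio. All names and the `mutate`/`brightness` callbacks are illustrative, not LuxRender's internal API:

```python
import random

def metropolis_step(current, brightness, mutate, large_step_prob=0.3):
    """One Metropolis-Hastings iteration over a sample vector.
    `brightness(s)` is the luminance of the path traced from sample `s`;
    bright regions are kept and explored further via `mutate`."""
    if random.random() < large_step_prob:
        proposed = [random.random() for _ in current]  # fresh sample
    else:
        proposed = mutate(current)                     # small offset
    b_cur, b_new = brightness(current), brightness(proposed)
    accept = 1.0 if b_cur == 0 else min(1.0, b_new / b_cur)
    return proposed if random.random() < accept else current
```

Because proposals into brighter regions are always accepted and proposals into darker ones only sometimes, the sampler automatically concentrates work where the image has detail, which is why it copes so well with caustics and difficult lighting.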


The ERPT (Energy Redistribution Path Tracing) sampler is similar to the metropolis sampler and is based on an energy redistribution scheme. It mutates samples that show good contribution, but instead of randomly walking over the returned samples, it keeps a pool of image-space samples. These image-space samples - called chains - are mutated a number of times before the pool is updated.

Surface Integrators

Surface integrators are the core of the program; they construct paths between light sources and the camera surface and calculate the incoming light intensity. Although all surface integrators should end up producing the same rendered image, there are big differences in rendering speed. The choice of the best integrator depends on the type of scene that needs to be rendered - for example, interiors would benefit from a different integrator than exteriors.


The directlighting integrator only covers light that shines on a surface directly - any reflections between surfaces are ignored. Hence, the result of the rendering will not be very realistic. This integrator is very fast, but only suitable for previews.
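Direct lighting amounts to summing, over all light sources, the light that reaches the surface point unobstructed - no further bounces are traced. A minimal Python sketch for point lights and a Lambertian (diffuse) surface, with an illustrative `visible` shadow-ray callback (none of these names are LuxRender's API):

```python
import math

def direct_lighting(normal, point, lights, albedo, visible):
    """Direct illumination only: sum each light's contribution, ignoring
    any reflection between surfaces. `lights` is a list of
    (position, intensity) point lights; `visible` is a shadow-ray test."""
    total = 0.0
    for light_pos, intensity in lights:
        d = [l - p for l, p in zip(light_pos, point)]
        dist2 = sum(c * c for c in d)
        wi = [c / math.sqrt(dist2) for c in d]           # direction to light
        cos_theta = max(0.0, sum(n * w for n, w in zip(normal, wi)))
        if cos_theta > 0 and visible(point, light_pos):  # shadow ray
            total += intensity * (albedo / math.pi) * cos_theta / dist2
    return total
```

Because only these direct terms are evaluated, shadowed regions come out completely black - there is no bounced light to fill them in, which is why the result looks unrealistic but renders quickly.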


The path integrator uses standard path tracing and is suitable for exterior renderings and reference renders.
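The core loop of a unidirectional path tracer follows a single scattering direction per bounce - as described in the introduction - and terminates long paths probabilistically via Russian roulette so that deep paths stay unbiased without being traced forever. In this skeleton, `scene.intersect(ray)` and `hit.sample_brdf()` are assumed helper interfaces, not LuxRender's actual classes:

```python
import random

def trace_path(ray, scene, max_depth=16, rr_start=3):
    """Skeleton of a unidirectional path tracer: follow one scattering
    direction per bounce, terminating long paths by Russian roulette.
    `scene.intersect` and `hit.sample_brdf` are assumed helpers."""
    radiance, throughput = 0.0, 1.0
    for depth in range(max_depth):
        hit = scene.intersect(ray)
        if hit is None:
            break
        radiance += throughput * hit.emitted
        ray, brdf_weight = hit.sample_brdf()   # one direction, not a split
        throughput *= brdf_weight
        if depth >= rr_start:                  # Russian roulette
            q = max(0.05, 1.0 - throughput)
            if random.random() < q:
                break
            throughput /= 1.0 - q
    return radiance
```

Dividing the surviving throughput by the continuation probability keeps the estimator unbiased: paths are killed early on average, but those that survive are weighted up to compensate.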

Bidirectional integrator schematic

ParticleTracing integrator schematic
Filters

While calculating light samples, LuxRender treats the camera surface as a continuous surface - it does not yet take the number of pixels of the final image into account. Once a certain number of samples has been calculated, it must be decided for each sample to which pixel of the rendering it contributes. This step is executed by the filter.

Typically, a sample contributes to multiple pixels, with most of its contribution being added to the pixel on which it is located and smaller amounts going to neighbouring pixels. Filters differ in the exact distribution of a sample's light contribution; furthermore, each filter has a setting that defines the size of the total area over which the contribution is spread.

comparison of filters: Sinc, Mitchell, Box and Gaussian

Choosing the right filter influences the sharpness and smoothness of the rendering, although the difference between various filters is subtle. Differences in rendering time are negligible. Using the Mitchell filter with default settings is generally a good and safe choice.
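The Mitchell (Mitchell-Netravali) filter mentioned above weights a sample's contribution by its distance to each pixel; its slight negative lobes are what give it its characteristic sharpening. A sketch of the standard 1D weight function with the usual default parameters B = C = 1/3 (the 2D weight is the product of the weights along each axis):

```python
def mitchell_1d(x, b=1.0 / 3.0, c=1.0 / 3.0):
    """Mitchell-Netravali filter weight at distance `x` from the sample,
    with support |x| < 2. B = C = 1/3 is the common default."""
    x = abs(x)
    if x < 1:
        return ((12 - 9 * b - 6 * c) * x ** 3
                + (-18 + 12 * b + 6 * c) * x ** 2
                + (6 - 2 * b)) / 6
    if x < 2:
        return ((-b - 6 * c) * x ** 3 + (6 * b + 30 * c) * x ** 2
                + (-12 * b - 48 * c) * x + (8 * b + 24 * c)) / 6
    return 0.0
```

The weight peaks at the sample position, dips slightly below zero a bit further out (the sharpening lobe), and reaches exactly zero at the edge of its support.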


box filter

The box filter gives quite noisy and unsharp results and is therefore not recommended for general use.


Gaussian filter


Mitchell filter



triangle filter


Accelerators

The accelerator figures out which objects do not need to be taken into account for the calculation of a ray, so that each ray only has to be tested against a small fraction of the scene's geometry.


A 3-dimensional kd-tree. The first split (red) cuts the root cell (white) into two subcells, each of which is then split (green) into two subcells. Finally, each of those four is split (blue) into two subcells. Since there is no more splitting, the final eight are called leaf cells. The yellow spheres represent the tree vertices.

A kd-tree uses only splitting planes that are perpendicular to one of the coordinate system axes. This differs from BSP trees, in which arbitrary splitting planes can be used. In addition, in the typical definition every node of a kd-tree, from the root to the leaves, stores a point.[1] This differs from BSP trees, in which leaves are typically the only nodes that contain points (or other geometric primitives). As a consequence, each splitting plane must go through one of the points in the kd-tree. kd-tries are a variant that store data only in leaf nodes. It is worth noting that in an alternative definition of kd-tree the points are stored in its leaf nodes only, although each splitting plane still goes through one of the points.
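The classic construction described above - where every node stores a point and the splitting plane passes through it - can be sketched by recursively splitting on the x, y, and z axes in turn at the median point. This is an illustrative sketch of the data structure, not LuxRender's accelerator code:

```python
def build_kdtree(points, depth=0):
    """Build a kd-tree over 3D points: cycle through the x, y and z axes,
    storing the median point at each node so that every splitting plane
    passes through one of the points (the classic definition)."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }
```

Splitting at the median keeps the tree balanced, so point and ray queries only descend through a logarithmic number of cells instead of testing every primitive.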


A 3-dimensional regular grid.