From LuxRender Wiki
Rendering an Animation with LuxRender
Strictly speaking, an animation is nothing more than a series of still images, so one could simply render individual images as usual and string them together into an animation. In an actual animation pipeline, however, this alone is not feasible, and a few more concerns come into play.
First of all, you don't have one image to render, you have hundreds. So render time needs to be kept down. At first glance, it seems that if LuxRender takes all night to render a still at 1920x1080, it would take 8 days straight to render 1 second of 1080p24 footage. Not quite. When using a physically based renderer for animation, there is a point to remember: while there are hundreds of images to produce, each one will only be shown for a fraction of a second. So as long as you are careful, you can sneak by with a fraction of the samples.
A still render is meant to be stared at, whether it is a rendering of a new building for a construction proposal, a product visualization in a brochure, a fantasy illustration for an album cover, or just a cool desktop wallpaper of a kitchen scene. Any single imperfection can be noticed, so it is often necessary to render to thousands of samples per pixel to ensure ALL the noise is gone. LuxRender's metropolis sampler can help here: by "steering" the render into the light, it clears the artifacts with fewer samples.
This is not the case with animations. The individual frames are supposed to blend together; that is the entire goal. No one is going to notice a fine grain on the image. All digital cameras produce some grain too, and no one has been bothered so far, as long as it isn't horribly strong. In fact, if you are careful, you can render an animation frame with only a fraction of the samples per pixel you would use for a still render. As long as the noise stays even, you can get away with having quite a lot of it. It is important that the noise be an even, high-frequency grain: low-frequency blotches from photon tracing algorithms and "fireflies" (isolated over-bright pixels) are both problems, so one must take care to avoid them.
LuxRender has its own queuing and network rendering system that can be used to manage frames for an animation without any extra software. Of course, if you prefer to have your exporter and queue manager control the render, most LuxRender exporters can do this as well. Check your exporter manual for information on using it for batch renders. The "Managing Frames and Render Nodes" section assumes you are using LuxRender's built-in functionality.
Note: You should familiarize yourself with LuxRender's render settings before continuing.
The metropolis sampler will probably not help you. While it improves convergence speed in most cases, it does not distribute ray samples evenly across the image. The result is uneven noise and clumps of fireflies when used with low samples per pixel (spp). Fireflies will seem to "flash" as the animation plays, and the uneven grain will "swim" across the image. Both are distracting, so it is better to use the sobol sampler instead.
The sobol sampler uses a progressive QMC pattern and will give even sampling for any number of samples per pixel. It is less effective at rendering difficult light paths, so some effort should be taken to keep light conditions simple.
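In the scene file, the sampler is chosen with the Sampler directive. A minimal sketch (directive names per LuxRender's .lxs scene-file format; check your version's documentation):

```
# Even, progressive QMC sampling - a good fit for animation frames
Sampler "sobol"

# Avoid for animations: uneven noise and firefly clumps at low spp
# Sampler "metropolis" "float largemutationprob" [0.4]
```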
Since LuxRender is a progressive renderer, some stop condition needs to be used when batch rendering. In most cases, the halt-threshold setting should be used for this. Halt-threshold stops the render once a certain quality level is met, defined in terms of the percentage of pixels that have converged and are no longer updating. For example, a threshold of .001 represents 99.9% convergence, and is a good starting point for tests. Some tools express this quality on a log scale for ease of use: .001 = 3.0, .00001 = 5.0, and so on. While some simpler images may be clean at quality 3.0 (.001), more complex scenes often need higher settings such as 5.0-6.0 (.00001-.000001). Enabling noise-aware sampling will let Lux adaptively concentrate samples on the regions that are still failing the convergence test.
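As a sketch, the halt condition lives in the Film block of the scene file; the exact parameter name (haltthreshold below) should be verified against your LuxRender version's documentation:

```
Film "fleximage"
    "integer xresolution" [1920] "integer yresolution" [1080]
    # Stop this frame once 99.9% of pixels have converged (quality 3.0)
    "float haltthreshold" [0.001]
```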
The standard recommendation of bidirectional or exphotonmap applies here as well. Exphotonmap is considerably faster at producing noise-free images, so you may find it useful for getting clean frames in a reasonable amount of time. However, it can also leave behind low-frequency noise, which shows up as flickering that can be quite obvious and distracting in an animation.
If your shot has very simple lighting, you may find the path integrator with a very low maxdepth setting (such as 2 or 3) to be very useful. The distributed path integrator can offer extra control over sampling and depth, and can be quite useful for quick, simple images.
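For simple lighting, the integrator and its depth limit are set like this (a sketch in .lxs syntax; the maxdepth value here is just an illustrative choice):

```
# Shallow bounce limit keeps simple shots fast and clean
SurfaceIntegrator "path" "integer maxdepth" [3]

# Standard recommendation for harder lighting situations:
# SurfaceIntegrator "bidirectional"
```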
It's best to use the linear tonemapper for animations. The other tonemappers auto-adjust to the brightness of each frame, rather like an auto-exposure that recalculates for every frame of the video; the result is a flickering look to the animation. Linear has a fixed exposure, and avoids this problem.
The outlier rejection feature in LuxRender can keep the occasional firefly from flashing by in your animation by preventing it from being added to the film. For animations, you should enable it with a fairly high strength (5-10).
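Both the fixed-exposure tonemapper and outlier rejection are configured on the film. A sketch, assuming fleximage film parameter names from LuxRender's scene-file format (the exposure values are arbitrary examples; verify the names against your version's docs):

```
Film "fleximage"
    # Fixed exposure: no per-frame auto-adjustment, so no flicker
    "string tonemapkernel" ["linear"]
    "float linear_sensitivity" [100.0]
    "float linear_exposure" [0.02]
    "float linear_fstop" [2.8]
    # Reject statistical outliers (fireflies) before they reach the film
    "integer outlierrejection_k" [8]
```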
It's possible to reduce apparent noise by making the noise pattern match between frames, which you can do by disabling randomized seed values for LuxRender's rendering threads. To enable this mode, start LuxRender with the --fixedseed flag. Your exporter may have an option to call Lux with this flag for you.
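For example, when launching a frame from the command line (the file name here is hypothetical):

```
# Deterministic per-thread seeds, so the grain pattern
# stays put from frame to frame instead of "swimming"
luxconsole --fixedseed frame0001.lxs
```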
Managing Frames and Render Nodes
Using the LuxRender GUI
The LuxRender GUI includes a rendering queue feature. You can add all the frames in your animation to this queue, and LuxRender will run through them all in order. You can add network nodes, and they will be used to assist with each frame. All the nodes will work on the same frame together, since LuxRender currently does not support distributing individual queue items. When each frame reaches the stopping point, LuxRender will pull in the samples from all network nodes one final time, and write the end result to disk, then move on to the next frame.
If you are using the sobol sampler and trying to get things done in a few big passes, this can make things tricky if you have render machines of varying performance. If all the nodes are about the same speed, you can just have them do one pass each and combine the results.
But what if, for example, your master machine can do 2 passes in the time it takes your slave node to do 1? In this case, you can set haltspp to stop after 2 passes; the final pass from the network node will be added in at the end, giving you the equivalent of 3 passes' worth of samples.
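The haltspp condition is also a film parameter; a sketch (the value 2 matches the example above, and the parameter name should be checked against your LuxRender version's documentation):

```
# Master stops after 2 passes' worth of samples per pixel;
# the slave's final contribution is merged in before writing the frame
Film "fleximage" "integer haltspp" [2]
```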
Using LuxConsole
On non-Windows systems, it is possible to pass multiple files to LuxConsole by telling it to render "*.lxs" or similar. You can use this to achieve functionality similar to the GUI queue. LuxConsole will also work with network nodes, which you can specify with the -u (useserver) option.
Let's say, for example, you have 4 render nodes, at the IP addresses 192.168.0.105, 192.168.0.106, 192.168.0.107, and 192.168.0.108. You have loaded all the scene files for the animation into the folder /Shared/render_out. You want 192.168.0.105 to be the master. You can start slaves on the other machines, then SSH into 192.168.0.105 and execute this command:
./luxconsole /Shared/render_out/*.lxs -u 192.168.0.106 -u 192.168.0.107 -u 192.168.0.108