The way I usually use the shift camera is: I place my camera around the eye level of a person, facing perfectly horizontally (i.e. it's pointing at the horizon, not up or down).
Then I adjust the ShiftY parameter, usually adding a few meters to that value, as if the person's eyes were raised to the height of the middle floor of the building.
The end result is that I can see the whole building as if my eyes were at the level of the middle floor, but with a parallel vertical projection.
watts = the amount of electrical power you send into the lamp,
efficacy = how efficiently that electrical power is converted into photons, i.e. light, in lumens per watt (most types of lamps only convert a small part of the power into light; the rest becomes heat).
Tuning lamps in luxrender is only done to tune the intensities relative to each other;
you must take into account that the tonemapping of the displayed image automatically adapts to the light in your scene (like the pupil in your eye).
If you have only 1 lamp, increasing or decreasing watts/efficacy will not show any difference, as the tonemapper will adapt to normalize the level.
As such, you change these intensities only when you have more than 1 lamp and want differences in intensity between lamps.
To tune the overall power of 1 or more lamps in your image, after the tonemapping/normalization, you need to tweak the post_scale and burn parameters of the reinhard (default) tonemapping. You can do this in luxblend in the film tab, which is quite awkward, or you can do it in real time while rendering in the GUI (sliders on the left), which is what I would recommend.
The wattage should be the power drawn by the light bulb; the efficacy is in lm/W. So the product of the two gives you the luminous power of the lamp in lumens.
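As a quick worked example of that product (illustrative round numbers, not measured values):

```python
# Luminous power = electrical power x luminous efficacy.
watts = 60.0       # electrical power drawn by the bulb, in W
efficacy = 15.0    # luminous efficacy in lm/W, roughly an incandescent bulb
lumens = watts * efficacy
print(lumens)      # -> 900.0 lm of luminous power
```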
The crash with the win32 version could indeed be due to not having enough memory: a 32-bit process can only use about 1.5 GB due to address-space limitations, which is tricky when dealing with large-format renders, as lux uses quite a bit of memory for images internally (they are 32 bits per channel).
You can use the 'shinymetal' material to make custom metals
It's because your values are not right...
You're displacing 2.4 meters (all units are in meters in lux),
and due to the small contrast/difference (0.0 and 0.1), your displacement map displaces your whole surface 2.4 meters upwards, with only a small displacement difference...
Set your dismap value to something like 0.2 or 0.02, and your tex1 to 0.0 and tex2 to 1.0.
rotation is to rotate the sky around, which in normal circumstances you won't need to use.
gain is to change the power/strength of the sky/sun light in case you want to tweak it in relation to other lightsources.
relsize is to change the size of the sun, which you also shouldn't have to tweak...
You will need a recent CVS build or weekly windows build (both the GUI and console downloads, from the weekly builds forum on this forum),
and also provide the path to 'luxconsole' (the command-line binary in the console download).
This is the location of the luxconsole.exe which is in the console download.
You configure these on your system panel of luxblend.
I think the issue here is that you're selecting objects and configuring a material on them in luxblend, but your objects don't have a blender material assigned, so you're not actually configuring anything...
You need to assign a new material in blender, using blender's material UI, and give it a name, otherwise luxblend won't pick it up.
You don't change the material's properties in blender (you do that in luxblend), but you have to assign and name the materials first in blender's UI.
Here's the simplest procedure:
Start a render with 'write/resume FLM' button activated in luxblend,
and lux will periodically write the FLM image as you mentioned.
Close lux, and when you want to resume rendering, just start lux again with the same scene file you used before (i.e. not the FLM file, as that is not a scene file); lux will find the FLM file from before and resume rendering from it.
As jeanphi mentioned, this feature was broken in v0.5 release for windows platforms, so you need to use a newer weekly build from the weekly builds forum here.
0 = completely rough (don't use, use 'matte' material instead)
100 = very rough
250 = fairly rough
500 = medium
1000 = high
10000 = shiny
100000 = very shiny
1. Use a .png image instead of .tga and see if there's any change.
2. Play with the 'gamma' parameter in luxblend on the channel where you add the alpha mask: make it higher and see if the problem disappears (try 1.3, 1.7, 2.2 or so).
If you find anything else that needs changing, or you want some feature or whatever (exporter or engine),
you can create mantis tasks so we don't forget and they get picked up by available devs...
It's a bit better than a [REQ] post, which might never be answered...
(Register first to be able to submit bugs/tasks.)
Yeah, we don't have any physical night-time sky model in the engine (few engines do...).
I suggest an HDR map would be good (with stars etc...).
Model a moon as an area light (mesh emitter) and give it an appropriate emissive colour (do some research).
That should do the trick with some tweaks.
Probably an issue with your normals.
An exit portal is an object in your scene which helps the renderer by showing it the optimal route to find a light source.
Exit portals are only of use in situations where you have your camera inside an object (like a room/interior) whose
primary light sources are outside of the object (like a sky).
Rendering with exit portals simply speeds up the render, as it gives the engine some extra information to more efficiently find an exit, like a window or hole in the wall, to sample light through.
Regarding glass, here are some tips:
- for building exteriors, use glass, of course
- for interiors with light sources outside (e.g. daylight, sunsky, HDR etc.), using window glass is tricky, as it will slow down your render:
all your paths will need to be refracted twice before being illuminated by a light source.
There's a solution to this: an AGM (architectural glass material) like maxwell has.
It's basically a non-refracting transmitter/reflector.
Lux has it too, it's just not in the exporter yet (the 'kitchensink' material supports it);
I'll add it in alpha 12.
So I suggest you don't put glass in your windows yet.
If you need to, combine it with a portal, but make sure your portal is NOT inside your window-glass object,
e.g. put it outside the building in the window frame, nearly touching the window glass.
- for interiors lit by interior light sources (e.g. dark outside), use glass, as this will give you a nice reflection of the interior.
Under the cam tab, environment: None gives you a black background.
Infinite gives a non-black background; it can be either a plain color or an image (the image will be wrapped on a sphere surrounding the scene). It can be useful for having clouds in a sky, or for rendering a few objects indoors without having to model a complete room.
Sky is a physically modeled sky (the color changes according to sun position and atmosphere turbidity).
You can also add a sunlight component to add physically modeled sun lighting (the color will depend on sun position).
Render with the lowdiscrepancy sampler in debug mode (-d switch on the command line) with only 1 thread per render (-t 1 on the command line), all random seeds will then be the same. The 1 thread thing is not really a penalty since instead of using more threads per image you can compute more images in parallel.
Arch glass is to be used like normal glass, ie with thick windows. The only difference with normal glass is that light rays are not bent upon entering the glass and lux knows that so light rays just go straight through without being blocked.
The parameters you mention are for the metropolis sampler, not path/bidirectional, ie the algorithm used to determine where to go at a vertex. The metropolis sampler takes the values from the previous sample, and slightly modifies them, then depending on the result (is the new sample more interesting than the previous one) it keeps the new values or goes back to the previous ones.
If you have a very bright firefly, it will be hard to find new values that contribute as much to the image, so to avoid staying too long at the same spot, new values are forcibly accepted after a while (this is controlled by the max rejects parameter). This however introduces bias (usually unnoticeable though, except if you set the value really low); for a final rendering, you can raise the value.
It can also happen that due to the nature of the small modifications, you can't go from one part of the image to another one and get stuck around the same spot. To prevent this, once in a while, a set of brand new value is used. The proportion of such brand new values compared to mutations of previous values is tuned with LM prob (a value of 0.4 means 40% of samples will use new values and 60% will be mutations of the previous values).
When taking new values, lux tries to uniformly distribute them on the image by subdividing the image in a grid and putting 1 sample in each cell of the grid. The grid size is configured with stratawidth.
The algorithm needs an estimate of the overall image brightness. To achieve this, it throws a random bunch of samples at the beginning and averages the result. The number of random samples used is controlled by init samples.
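The loop described above can be sketched in a few lines. This is a toy 1-D illustration of the accept/mutate logic only; the function names, the toy "image" f, and all constants are made up for illustration, not lux's actual code:

```python
import random

def f(x):
    """Toy 'path brightness': mostly flat, with one bright firefly region."""
    return 101.0 if 0.49 < x < 0.51 else 1.0

LM_PROB     = 0.4   # chance of a brand-new (large) mutation, cf. LM prob
MAX_REJECTS = 512   # force-accept after this many rejections, cf. max rejects

def mutate(x):
    # small mutation: slightly perturb the previous sample's values
    return min(1.0, max(0.0, x + random.uniform(-0.01, 0.01)))

def metropolis(n_samples):
    x = random.random()          # start from a random path
    fx = f(x)
    rejects = 0
    out = []
    for _ in range(n_samples):
        # either brand-new values, or a small mutation of the current ones
        y = random.random() if random.random() < LM_PROB else mutate(x)
        fy = f(y)
        # accept in proportion to how bright the new path is vs. the current one
        if random.random() < min(1.0, fy / fx) or rejects >= MAX_REJECTS:
            x, fx, rejects = y, fy, 0
        else:
            rejects += 1
        out.append(x)
    return out
```

Running this, the samples end up clustered around the bright region, which is exactly the "explores bright areas more thoroughly" behaviour described above.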
It's a trick to instruct lux not to bend the light ray when entering or leaving a glass object. This allows sunlight to flow more easily through windows when using the path integrator. When using the bidirectional integrator this trick is much less useful, and it is not yet fully exploited by that integrator.
Let's illustrate with the path integrator as it's simpler. With a sample you trace a path and at each vertex of the path you sample 1 or more light sources, if the light is not blocked it contributes to the illumination. What I do is count the number of contributions obtained with 1 sample, this is more interesting than just counting the number of samples that gather no light because with bidirectional, almost all samples gather some light, so the previous computation was useless.
Moreover, while bidirectional processes less samples/second than path, it can gather much more contributions all over the image plane, so the new metrics allow comparisons between path and bidirectional.
If it tells you 450% efficiency it means that on average you get 4.5 contributions with a single sample. I agree it can be a bit surprising though.
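The 450% figure can be reproduced with trivial arithmetic (the counter values here are invented for illustration; the real counters live inside lux):

```python
contributions = 9000   # light-carrying contributions splatted to the film
samples = 2000         # camera samples traced
efficiency = 100.0 * contributions / samples
print(efficiency)      # -> 450.0, i.e. 4.5 contributions per sample
```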
The actual parameters relate to radiance, that is W·m⁻²·sr⁻¹. So to get the total power of a light, you have to multiply the setting by its area.
In your example, if your 2 lights have the same color and the same gain, but the large one is 10 times larger than the other, its total power is already 10 times higher.
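A sketch of that multiplication, assuming a perfectly diffuse (Lambertian) emitter, for which emitted power = π × radiance × area:

```python
import math

radiance = 10.0    # the exporter setting, in W / (m^2 * sr)
area_small = 0.1   # m^2
area_large = 1.0   # m^2, 10x larger

# For a Lambertian emitter, emitted power = pi * radiance * area
power_small = math.pi * radiance * area_small
power_large = math.pi * radiance * area_large
print(power_large / power_small)   # -> 10.0, as in the example above
```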
Here is how to tweak the ior to get correct results:
- use completely normal objects with normals pointing in the natural direction
- let's say your material has ior A, and the bubble has ior B
- configure the bubble to have ior B/A by entering the value by hand and you'll have a correct behaviour
- take care that the bubble is completely inside the material otherwise you'll have strange results on the part outside the material
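A worked instance of the B/A rule above, assuming an air bubble inside water (the numbers are illustrative):

```python
ior_material = 1.33   # A: the surrounding material (water)
ior_bubble   = 1.00   # B: the bubble contents (air)

# Since lux assumes the outer medium has ior 1, enter B/A for the bubble:
ior_to_enter = ior_bubble / ior_material
print(round(ior_to_enter, 3))   # -> 0.752
```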
I think your outer wall is a single plane, try to model it as 2 planes back to back so it has some depth, this should fix your problem. In order to avoid self shadowing, intersections too close are discarded so when meshes intersect light can leak, but if the wall has some depth this won't happen.
It's important to model your scene according to lux's conventions, i.e. 1 blender unit = 1 meter, and then use a realistic lens-radius (e.g. 0.01 (1 cm) or 0.02 (2 cm));
currently the DOF is far too strong (your lens would have to be a meter wide or so?), which makes the building look like a small model being photographed.
Forgot to mention: with unbiased rendering, for some scenes, if you don't want to wait 40 hours or so
for the last grain to clear up, you can use an image-denoising program.
It's not very effective for denoising a render which is very undersampled, as it will create artifacts/blotchy appearance,
but very effective for renders which are 'nearly' noisefree and have a bit of grain left.
It's a common practice used by users of unbiased engines like lux, indigo, maxwell, etc...
You can do it with 'neatimage' or the GPL/free 'greycstoration', google for them.
Note from Poncho: GREYCstoration is now in the LuxRender GUI.
Path tracing is usually only better for exterior, skylight-lit scenes.
bidir is better for any type of interior scene.
in fact, bidir/MLT is the most adaptive, and will always give you good results, that's why it's marked 'recommended' in the presets, eg try that first
I think the reason you have all the fireflies is that you rendered without MLT, which is an area that needs more improvement.
When rendering without MLT, you need to be careful how you configure the rest of the params to avoid fireflies.
If you use the 'recommended' preset, this shouldn't happen...
The fireflies in your render are caustics with a high contribution, because they have a very complex path (lots of specular bounces); but these paths are found with a very low probability, giving only a few occurrences, i.e. fireflies.
MLT solves this by exploring them more thoroughly.
Note about fireflies: Although these may seem to be errors, they are actually valid samples, not render error.
They can be compared to winning lottery tickets, they are paths which are found only once every few million/billion samples, which happen to be 'perfect'.
Perfect means they bounce around your scene and happen to hit the best surfaces & lights from 'perfect' angles.
Due to the unbiased nature of the algorithms, they cannot be discarded, as they are valid samples/paths, but can be a nuisance visually in your image.
When rendering with MLT, these are, as I explained before, mutated (changed slightly) to 'discover' surrounding, similar 'perfect' paths, thereby evening out the high values with more high values.
In short, place your current frame to frame 1,
select fan, press I and select 'rot', then advance current frame to frame 2,
and rotate object slightly (say 20-40 degrees around Z), and again press I and select 'rot',
then set current frame back to 1 and render.
Bidirectional is on average ~40 to 50% of the speed of path (as it needs to construct 2 paths during each loop).
However it can return more samples per loop as it connects some bounces to the camera/image,
that's why you get much higher efficiency.
Higher efficiency = more samples on your image.
These connections are only for diffuse surfaces (as these are view independent), so if you have a fair amount of diffuse surfaces (or layered diffuse/specular like glossy) in your scene you'll get the most benefit.
Regarding lowdiscrepancy/MLT, i guess the key difference is:
'lowdiscrepancy' is a "dumb" or "brute-force" sampler. If your image has parts which are easy to clear and other areas which are more complex (indirectly lit etc...), the sampler will just continue to sample everything, which in some cases forces you to wait an extremely long time before a difficult area has converged/cleared.
It's well stratified though (even distribution of samples) so it can converge/clear MUCH faster if properly configured on simple scenes. (simple = diffuse surfaces with good direct lighting/visibility)
MLT, although too complex to explain here in short, is an 'intelligent' or 'adaptive' sampler, which tries to even out the work, and adapts to those difficult areas, thereby giving you a more 'even' convergence. Since it 'focuses' on bright samples on the imageplane, whenever a firefly (eg a bright pixel) emerges, MLT will focus on that area, to try to even it out, thereby removing fireflies more effectively.
MLT will be much more efficient than lowdiscrepancy in scenes with complex indirect light and phenomena such as caustics, as these are difficult to find, long and complicated paths (trough glass objects, etc...).
The downside of MLT is that for simple scenes (directly lit diffuse scenes) all the overhead of its adaptiveness can be much less efficient than 'lowdiscrepancy'; however it takes the guesswork away and always guarantees good results.
Nevertheless it's very adaptive and as such makes a good default for all scenes (although not always optimal), which is why the preset is 'recommended'
an instance is a representation of an object.
let's say you want to export 3 monkeys,
if you create 1 monkey object, and duplicate it with ALT-D,
you create an instance.
do that twice and you'll have 3 monkeys,
but when you export you only export 1 monkey object,
but 3 references in world space.
with instancing you can literally clone a detailed object thousands of times,
without having a huge scene file which would be impossible to load/parse.
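The bookkeeping can be sketched like this (a simplified data model for illustration, not luxblend's real exporter code):

```python
# The heavy mesh data is stored and exported once...
monkey_mesh = {"name": "monkey", "vertex_count": 50000}

# ...and each instance is just a reference plus a world-space transform.
instances = [
    {"object": "monkey", "translate": (0.0, 0.0, 0.0)},
    {"object": "monkey", "translate": (2.0, 0.0, 0.0)},
    {"object": "monkey", "translate": (4.0, 0.0, 0.0)},
]

meshes_exported = 1
references_exported = len(instances)
print(meshes_exported, references_exported)   # 1 mesh, 3 references
```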
Lord Crc wrote: While there can be several causes, the primary one is a side effect of the unbiased sampling LuxRender does, called importance sampling. It's usually compounded by a technique LuxRender uses to reduce overall noise (variance), called Russian roulette.
Importance sampling greatly increases the efficiency, meaning less noise for the same number of samples compared to "plain" sampling. However, the price is fireflies. Instead of picking samples completely at random, they're picked according to how likely they are to contribute significantly to the average value. This allows us to use relatively few samples to get a good idea of what the average value is. However, whenever we pick "unimportant" values, we need to compensate for the fact that we pick fewer of them (compared to the important ones). This means we need to give the "unimportant" values more weight (otherwise the result wouldn't be unbiased).
Sometimes it just so happens that as a path bounces around we continually pick (by chance) an "unimportant" direction at each step of the path. If the path then finally reaches a light source then it will be boosted (compensated) several times, thus becoming much brighter than the other samples.
This was a somewhat simplistic explanation, but it should give you some idea of why it happens.
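A toy 1-D version of the compensation mechanism described above (a made-up integrand and pdf, not lux's actual sampling code): the pdf deliberately undersamples part of the domain, so the 1/pdf weight occasionally produces a huge "firefly" value, while the average stays unbiased.

```python
import random

def firefly_estimates(n):
    """Estimate the integral of f(x) = 1 over [0, 1] (true value: 1.0)."""
    vals = []
    for _ in range(n):
        u = random.random()
        # Sample x from pdf p(x) = 2(1-x), which rarely picks x near 1
        x = 1.0 - (1.0 - u) ** 0.5
        pdf = 2.0 * (1.0 - x)
        # Compensate with weight 1/pdf; near x = 1 this explodes: a firefly
        vals.append(1.0 / pdf)
    return vals
```

Most values are modest, but a rare sample lands where the pdf is tiny and gets a huge weight, just like a path that keeps picking "unimportant" directions and then hits a light. Averaging many samples still converges to 1.0, which is the unbiasedness the post describes.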
The reason why metropolis is usually much better at handling fireflies is that it "seeks the light". So when we get a firefly (which is very very bright), the metropolis sampler will get very excited and try lots of variations (mutations) of the firefly sample. However since the firefly was a "freak accident" the mutated samples will mostly result in a value close to the true average. The firefly sample is then "dragged down" by the rest of the samples and the average of the samples quickly approaches the true average (which is what we're trying to find).
Another, less frequent, source of fireflies is bugs. For instance if a material reflects more light than it receives, or there's just something wrong with the code so that sometimes a sample gets multiplied with the wrong value etc.
Hope this helps.
I presume you're using LuxBlend ?
In which case, you need the latest CVS version and you'll find Save/Load/Download options in the '>' menus in the Material and Texture editors.
For direct download you'll need to know the material or texture ID number, which is displayed on the DB when you choose "LuxBlend 0.6" in the Exporter options in your User Control Panel.
(or you can look at the last part of the URL for the ID number, but it's not so obvious).
It's a photographic measurement: Exposure Value.
It should allow you to accurately set up linear tonemapping with ISO/shutter speed/aperture controls.
This is a multiplying factor applied to the raw HDR image data before the actual tonemapping takes place.
This is a multiplying factor applied to the tonemapped LDR image data.
This is the only "real" control of Reinhard, and it is a sort of combination of brightness/contrast. It kinda works in a relative sense to "pre-scale", where LOW burn values, or burn values similar to pre-scale produce more bright, high contrast "burned" images.
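A hedged sketch of how the three controls interact; lux's exact Reinhard formula may differ, this only shows the roles of pre-scale, post-scale and burn:

```python
def reinhard(L, prescale=1.0, postscale=1.0, burn=6.0):
    y = prescale * L                                    # scales raw HDR data first
    mapped = y * (1.0 + y / (burn * burn)) / (1.0 + y)  # compressive Reinhard-style curve
    return postscale * mapped                           # scales the tonemapped LDR result

# Lower burn lets bright values through less compressed, i.e. more "burned":
print(reinhard(5.0, burn=1.0) > reinhard(5.0, burn=10.0))   # -> True
```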
I assume that Reinhard is something like Levels in Photoshop. - Yeah, something like that.
In my experience lux slows down when using more threads than available cores.
This can be fixed by using a lower sharpness strength in the Mitchell filter configuration; just slide the slider a bit to the left, but not too much or you'll get a little blurring, as with a Gaussian filter.
The ior relates to the speed of light in a medium. Depending on the speed ratio between 2 media, the light is bent. Since lux currently doesn't keep track of the outer medium (where the normal is), it is assumed to be 1, so the inner medium must be defined with an ior that is the ratio of the true inner ior to the true outer ior.
As an example, if you want to model air bubbles in a glass of water, you can either model them with the normals pointing inside the bubbles and the ior of water, or you can model them with the normals pointed outside the bubbles and the inverse of the ior of water.
This can be because your sphere is huge, but most probably because you haven't assigned a UV map to the sphere and you're using UV coordinates for the mapping. Try to use a spherical mapping (see that little "uv" box next to 2Dmap?).
First of all, you must understand that all units in luxrender are meters: 1 blender unit = 1 meter in lux.
This is a general rule to obey if you want to control lux precisely in a general way and exchange data with our matdb.
Second, the preview scene sphere in luxblend is 1 meter wide.
If your sphere in your scene is 100 meters wide, the texture will appear very small. This applies to all non-mapped textures, e.g. 3D procedurals, which are the best mapping-less textures to work with in most cases.
If you measure the width in your scene of the sphere, and it's for example 50 units in diameter,
you can set the sphere in luxblend's preview to be 50 units wide too, with the 'width' parameter next to it.
This will allow you to see the texture in the same size as the object you'll apply it to.
Alternatively, you can rescale/rotate/move a 3D procedural texture using the 'texture transformation' option in the lower area of the luxblend material editor (collapse button above the subdivision options); increase/decrease the 'size' param.
Currently you have 2 options to have a fake background:
- use an environment map lighting
- composite a background in postpro (in which case you need to output the alpha channel and it'd be better to render with the premultiply alpha option turned on)
It is saved in the same folder where the .lxs file is exported to. It will have the same name as the .lxs file (except end with .png or similar).
In the lower right corner of LuxBlend, there's a button called "def". If you "uncheck" it, it will instead ask you where to export the files when you press Export. The filename of the image will then be the same as the lxs file (except with .png or similar).
Also you can see the location where the image is saved in the log window.
Not yet. You can however copy the rendering to the clipboard (View->Copy) or save the flm (raw rendering buffers data).
I'll try to explain how MLT works in a simple way:
MLT uses the luminance (i.e. intensity) of the contribution of paths, i.e. the pixels that appear on your film.
It holds a current, i.e. accepted, value, for which it also stores the data needed to construct the path (it starts out with a random one).
New paths are constructed as mutations of the accepted path most of the time, but it might also use a fresh path (this is governed by the lmprob (large mutation probability) parameter).
The non-mutated, i.e. fresh, paths are thus called large mutations.
EDIT: the lmprob parameter thus tunes the amount of mutation inversely (the 'strength' value in luxblend's simplified configuration settings is the same thing, but not inverted). If you set the strength low, you'll have plain random path tracing; if you set it very high, you'll have lots of mutations, which is not always the best bet either. Best leave it at the default, or for these kinds of scenes increase the strength a bit (or lower lmprob in the advanced config, which is the same thing).
Every time a new path is traced, whether it's a small mutation or a large mutation, its luminance (i.e. the brightness of its contribution to the image) is compared to that of the current accepted path, and depending on how much 'brighter' it is, it might be accepted as the new current path (this last choice can be overridden by the force-accept parameter to metrosampler).
MLT thus loops over (eg mutates around) bright pixels in your image, the brighter the more mutation/exploration they will get.
Therefore it's an ideal algorithm for exploring bright caustics.