LuxRender v1.0 GUI proposal
1. GUI TOOLKIT
We will use wxWidgets.
- FLTK v1.0 is too limited
- FLTK v2.0 is not mature enough
- Qt has licensing difficulties on Win32
- GTK+ is good, but wxWidgets already supports it on Linux/Unix and has better native underlying toolkits on the other platforms.
Using wxWidgets means we get the native Win32 GUI toolkit on Windows platforms.
The underlying toolkit for Linux systems should be GTK+, which is supported by wxWidgets.
On Mac OS X wxWidgets support seems to be included by default,
and it even supports both Carbon and non-Carbon underlying toolkits.
Care must be taken to keep the layout and build as simple as possible,
so people on Linux/Unix and Mac can build easily.
wxWidgets supports OpenGL integration.
We can reuse our existing OpenGL image viewing/panning code (by zcott) in the new GUI.
There seems to be documentation about migrating OpenGL apps from FLTK to wxWidgets (google for it).
It's IMPORTANT that our GUI communicates with and controls the engine ONLY via the API.
We cannot have any wxWidgets types or includes in the engine outside of renderer/.
Whenever a new API call is necessary and you don't know how to implement it, please ask me (or any other engine developer).
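To illustrate the separation rule above, here is a minimal sketch of what the GUI/engine boundary could look like: the GUI holds a plain C++ interface with no wxWidgets types, and the engine implements it behind the API. All names here are hypothetical, for illustration only, and not the actual engine API.

```cpp
#include <vector>

// Engine-facing interface: pure virtual, no GUI toolkit types allowed.
// The GUI links only against this header, never against engine internals.
class EngineApi {
public:
    virtual ~EngineApi() {}
    virtual void start() = 0;
    virtual void pause() = 0;
    virtual void addThread() = 0;
    virtual int threadCount() const = 0;
    // The GUI fetches a framebuffer copy as raw bytes, so no wx types
    // (wxImage, wxBitmap, ...) ever cross the boundary.
    virtual std::vector<unsigned char> framebuffer() const = 0;
};

// A trivial stand-in implementation, just to show the shape of the contract.
class MockEngine : public EngineApi {
    int threads_ = 1;
    bool running_ = false;
public:
    void start() override { running_ = true; }
    void pause() override { running_ = false; }
    void addThread() override { ++threads_; }
    int threadCount() const override { return threads_; }
    std::vector<unsigned char> framebuffer() const override {
        return std::vector<unsigned char>(16, 0); // dummy 4x4 grayscale buffer
    }
    bool running() const { return running_; }
};
```

The point of the sketch is the direction of dependency: the GUI code depends on EngineApi, and the concrete engine class lives inside renderer/ where wxWidgets is forbidden.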
2. GENERAL LAYOUT / VIEWPORTS
wxWidgets provides a grouping/tab widget known as wxAuiNotebook.
This is a very interesting tab/grouping system which allows the user to split and drag different tabs/panes
across the GUI (a bit like the window panes in Blender).
MAIN RENDER WINDOW IMAGE:
The GUI will have 3 default tabs (referred to as viewports from now on):
- 'render' : displays the contents of the render buffer and allows control of the engine
- 'log' : displays all console output of the luxgui/engine in a log window
- 'output' : same concept as 'render', but will become an integrated tonemapper like 'violet'.
Phase 1, what we start building now, is the main GUI window with
the 'render' and 'log' viewports.
Phase 2, once we're more or less happy with phase 1, will continue the work and implement an integrated tonemapper/postprocessor.
However, some concepts/code from the general viewport concept will be shared between 'render' and 'output',
so it's important that we think about this now and write reusable code.
Each viewport has a bar on top.
This control bar, or 'controlbar' as I will call it from now on,
controls the viewport in question.
A controlbar has specific buttons/controls on the left, unique to the viewport in question.
It has a common set of viewport manipulation controls on the right, which are visible/usable in all viewports.
Please note that 'log' is not a true viewport, so it doesn't need a controlbar.
(currently only 'render' and 'output' have one)
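The left-unique/right-common split described above can be sketched as a small class hierarchy, which is also the shape we would want for reusable controlbar code. The control names below are illustrative placeholders, not settled UI identifiers.

```cpp
#include <string>
#include <vector>

// Base controlbar: the right-hand viewport controls are common to all
// viewports, while each viewport contributes its own left-hand controls.
class ControlBar {
public:
    virtual ~ControlBar() {}
    // Unique, viewport-specific controls (left side of the bar).
    virtual std::vector<std::string> leftControls() const = 0;
    // Common viewport-manipulation controls (right side of the bar),
    // identical in every viewport that has a controlbar.
    std::vector<std::string> rightControls() const {
        return { "update-now", "auto-update-toggle", "update-interval",
                 "write-now", "auto-write-toggle", "write-interval",
                 "zoom-actual-pixels", "fit-to-screen" };
    }
};

// The 'render' viewport's bar adds its engine/thread controls on the left.
class RenderControlBar : public ControlBar {
public:
    std::vector<std::string> leftControls() const override {
        return { "play", "pause", "restart", "add-thread",
                 "network-slave-control" };
    }
};
```

In phase 2 an OutputControlBar would derive from the same base and inherit the common right-hand controls unchanged.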
Examine this mockup, which shows the idea of viewports/controlbars:
I have dragged the output viewport to share the screen horizontally,
to show both 'render' and 'output' aligned.
As you can see, the controls on the right of the controlbar are the same for both ports.
Both viewports (the render buffer and the output tonemapping buffer) have our OpenGL
code in them to allow zooming/panning.
The 2 buttons on the right of the viewport controls perform the same functions as the
middle and right mouse buttons (zoom to actual pixels / fit to screen).
To their left is a button with an eye icon;
this is a simple push button which updates the buffer.
Next to it is the button with the clock;
this is a toggle button (on by default) which activates or deactivates automatic
updating of the viewport at an interval in seconds, which can be adjusted with the slider next to it.
(The default value of 12 seconds is the value for ldr_display_freq specified in the scene file.)
A user can disable updating if he doesn't need it.
The GUI should also do this automatically when the window is minimized, to avoid wasting CPU cycles.
Next to the display update controls is an identical set of 2 buttons with a slider.
This works the same as above, but controls writing the film buffer to file.
One can disable it and hit the button with the small image-in-bin icon to save manually,
or enable it and select the write frequency.
Both frequencies should have selectable values between a minimum of 5 seconds and a maximum of 480 seconds.
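The interval rules above (5-480 second range, 12 second display default from ldr_display_freq) can be pinned down in a few lines, so both the display and write sliders share one clamping helper. A minimal sketch, with constant names of my own choosing:

```cpp
#include <algorithm>

// Slider bounds shared by the display-update and film-write intervals.
const int kMinIntervalSec = 5;
const int kMaxIntervalSec = 480;
// Display default, taken from ldr_display_freq in the scene file.
const int kDefaultDisplaySec = 12;

// Clamp a requested interval into the allowed slider range.
int clampInterval(int seconds) {
    return std::max(kMinIntervalSec, std::min(seconds, kMaxIntervalSec));
}
```

Keeping the bounds in one place means a future 'output' viewport inherits the same limits without duplicating them.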
The only difference in concept between these controls in the render and output viewports is:
- 'render' will save the fleximage contents to our resumable file format.
- 'output' will allow choosing the LDR/EXR file and will write to that instead.
For now, we don't need to worry about actually making the 'output' port work;
just keep all this in mind so we write reusable controlbar code.
It's important to consider what both windows show.
The 'render' window is only meant to show you what the engine is doing.
It does not need any tonemapping parameters;
it just fetches a tonemapped image from the engine with default Reinhard 1/1 parameters.
Its contents are not meant to be saved to a TGA file and used as artwork.
It does not need any controls to save to image formats either;
it just allows saving to our resumable fleximage file format.
During phase 2, the implementation of 'output', we will add functionality for the user
to either tonemap/save to LDR from the render buffer, or work off one of these fleximage files
from a previous 'render' session.
Now on to the left side of the controlbars.
These hold functionality unique to the viewport in question.
I have included some ideas for the 'render' port.
It's what we currently have: our play/pause/restart buttons and thread control (add thread etc.).
I've added a second group called 'network slave control', which works the same way but for network slaves.
The idea is that a subwindow will pop up and present the user with options to enter/select network slaves.
(This doesn't need to be thought out/implemented just yet; it will depend on dade's work.)
As you can see, I have not provided any details yet regarding controls for 'output',
as I suggest we do phase 1 first, to keep things simple.
The 'output' port will have different tabs and UI controls for tonemapping, filtering, light mixing etc.
The reason we leave this for now is that we still need to implement these options completely in the fleximage code.
REGARDING DESIGN AND ICONS ETC
Don't worry too much about that now.
We need to make sure the basic concepts are implemented correctly and well organized visually.
We can use mock icons for now and spend some additional effort to make nice ones once we have an initial playable version.
(I can get some help for this.)
The mockups above are also very coarse; a lot of work can be done to make things fit and scale better.
Don't treat them as a final visual arrangement.
I think it's a good start.
I have attached a copy of my wxFormBuilder v3.0.56 RC8 project, which you can open if you download/install it.
This is a simple/quick mockup in the wxFormBuilder designer.
It's not meant to be used directly:
it's not well categorized and named from a code point of view.
It's just meant to serve as an example from which to build a correctly organized replica.
Once we've reached phase 2 I will have more detailed mockups/projects for the 'output' UI.
It's important that we write the code in a reusable way;
we might want other types of viewports with OpenGL navigation and controls for other concepts in the future.
I suggest Ratow analyzes this document and tries to replicate this or a similar concept.
We can 'tune' the idea along the way.
However, it's important that we start work on this ASAP,
and don't turn this topic into a never-ending spaghetti of ideas and discussion that complicates development even more,
or we will not have a usable new GUI anytime soon, plus lots of features which can't be used properly.
Comments and critique can be given when a first version is realised.