Using Amazon Elastic Compute Cloud (EC2) To Add Rendering Capacity
From LuxRender Wiki
This tutorial is designed to introduce you to Amazon Web Services Elastic Compute Cloud (EC2) and how you can use it to provide additional rendering capacity when needed.
This tutorial is meant to be a detailed and incremental lesson on how to launch an EC2 instance running a Linux Ubuntu AMI, install and run LuxRender on it, and render files. This tutorial assumes that you have a good understanding of LuxRender, but don't know anything about EC2 or working in a command-line environment. We will start off with the basics of configuring an instance and getting it running, and then move on to using multiple instances networked together.
Go to http://aws.amazon.com/ and sign up for an account. You will need a credit card and a telephone number. Amazon Web Services has a free usage tier that allows you to learn how to set up and manage instances, and to play around with the EC2 service, at no cost. To learn more about free tier eligibility, go to http://aws.amazon.com/free/
Use the EC2 Getting Started Guide to familiarize yourself with the services that Amazon has to offer.
From inside your EC2 Dashboard click on the Launch Instance button. A window will open offering two options: the Classic Wizard or the Quick Launch Wizard. Choose Classic Wizard.
Select the Ubuntu Server 12.04 LTS Machine Image.
The next window prompts you to select an Instance Type from a dropdown menu. The t1.micro instance is "free tier eligible" if you are a new AWS user.
The next window will allow you to set up Advanced Instance Options; leave these at the default and click Continue.
The next window will allow you to give the instance a name. This can be left at the default for now. Click Continue.
The next window will prompt you to either choose an existing Key Pair that you have already set-up, or more likely you will need to create a Key Pair. Give your Key Pair a name and click Create & Download your Key Pair.
The next window will allow you to configure a firewall. This configuration will be saved so that you only need to set it up once. We will need to set up two security rules. First select SSH from the drop-down menu. If you want to limit SSH access to the instance to specific IP addresses, enter the address or range; otherwise leave this at the default 0.0.0.0/0 to allow access from any computer that has access to your Key Pair. The next rule is only necessary if you want to network multiple instances together to render the same file. Select Custom TCP rule from the drop-down menu and then enter 18018 in the Port range field. Leave the Source at the default 0.0.0.0/0. Click Continue.
The final window gives a summary of the parameters of the instance(s) that you are launching. Click the Launch button.
Once your instance has finished launching you will be able to see it in your management dashboard. Selecting the instance will display the information necessary for logging into it.
Connecting To Your Instance Using Terminal
Next we will need to connect to the instance so that we can set it up.
If you are on a Unix machine, Mac OSX or Linux, start a terminal session.
In a new terminal window connect to the instance with the following command: ssh -i keypairpath/keypair.pem ubuntu@publicDNS
The public DNS for the instance is found in the EC2 Instance Dashboard for the instance that is currently selected.
If you are logging into the instance for the first time you will be asked if you want to continue. Type "yes".
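Putting those pieces together, a typical first connection looks like the sketch below. The key path and public DNS name are placeholders — substitute your own. (Note that ssh will refuse a private key file with loose permissions, so restrict it first.)

```shell
# Placeholder values: replace the key path and the public DNS with your own.
chmod 400 ~/keys/mykeypair.pem   # ssh rejects keys readable by other users
ssh -i ~/keys/mykeypair.pem ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com
```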
Next we need to install two library dependencies that are not packaged with the Ubuntu Server operating system but are required to run LuxRender.
Type (or copy and paste): sudo apt-get install libglu1-mesa
After the package lists are read, you will be asked if you want to continue. Type: 'y' and the files will install.
Now for the second library, type: sudo apt-get install libsm-dev
Again type: y to confirm you want to install the packages.
The next two applications are nice to have, but not necessary. They are zip and unzip. If you wish to install these, type: sudo apt-get install unzip zip
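If you prefer, the three install steps above can be combined into a single command (package names as given in this tutorial; zip and unzip are optional):

```shell
# Install the LuxRender runtime dependencies plus the optional zip tools.
# The -y flag answers "yes" to the install prompt automatically.
sudo apt-get install -y libglu1-mesa libsm-dev unzip zip
```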
Next we need to install LuxRender. In your web browser, go to the LuxRender install page and right-click on the Linux No OpenCL 64-bit version icon and select "Copy Link" (This may differ slightly depending on your operating system and your web browser).
Return to the terminal window and type: wget copied_link_from_LuxRender_download_page
The ls command prints a list of all files and directories in the current directory.
You will now be able to see the tar file that you downloaded. We need to unpack it now. Type: tar -xf file_name
You will now be able to see the directory that was created when the tar was unpacked.
We will now remove the tar file, and then rename the directory for LuxRender.
Type: rm tar_file_name
Type: mv current/lux/directory/file_name new_name
rm is short for remove, and mv is short for move. The mv command can be used to move a directory or file from one location to another; if no path is specified, it simply renames the directory or file.
Typing: ls again will show that the tar has been deleted and the original LuxRender directory has been renamed.
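The download-unpack-rename sequence can be sketched as follows. The tarball and directory names below are made up for illustration — use the actual file name that wget saved. (The first three lines only fake the downloaded archive so the sketch is self-contained; on the instance, the archive comes from the wget step above.)

```shell
# Simulate the downloaded archive so this sketch runs anywhere;
# in practice the tarball comes from the wget step above.
mkdir lux-v1.0RC1-x86_64
tar -cf lux.tar.gz lux-v1.0RC1-x86_64
rm -r lux-v1.0RC1-x86_64

tar -xf lux.tar.gz              # unpack the archive
rm lux.tar.gz                   # delete the archive to save space
mv lux-v1.0RC1-x86_64 lux       # rename the long directory name to "lux"
ls                              # the "lux" directory is now listed
```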
Your instance is now configured for rendering. At this point it is a good idea to take a snapshot of the instance so that you can launch more instances set up exactly as this one is now. This will save you the trouble of installing the dependencies and LuxRender on instances that you launch in the future.
Select the instance in your Instances Dashboard, navigate to the Instance Actions dropdown menu, and select Create Image (EBS AMI). EBS AMI is short for Elastic Block Store Amazon Machine Image. You will be prompted to give the image a name and description. Naming the image with the date as well as the version of LuxRender you installed is helpful. It will take several minutes for the image to be created, during which time you will be kicked out of the instance. You will be able to connect again once the snapshot has been created.
Connecting To Your Instance Using PuTTY
1. Start PuTTY.
2. Under 'Session', enter 'root@<hostname>' or 'root@<ip_address>'. Typing 'root@' and then pasting the hostname after it helps prevent typos. (Note: on the Ubuntu AMI used earlier in this tutorial, log in as 'ubuntu' rather than 'root'.)
3. In the left-hand panel click 'Connection', then 'SSH', and select 'Auth'. The PuTTY Configuration dialog box appears. Click Browse, and select the PuTTY private key file you generated and named 'id_rsa-gsg-keypair.ppk' if you followed the instructions in the 'Appendix: Putty' section.
4. Click 'Open' to connect to your Amazon EC2 instance.
5. You may get a PuTTY Security Alert about a host key not being cached in the registry. Click Yes.
6. You should be presented with a terminal screen welcoming you to an Ubuntu session and ending with a command prompt that looks something like 'root@ip-<ip address> #'. See the screenshot image below.
NOTE: I have had it occur that the instance starts without Ubuntu loading! Instead of the above you will get a command prompt that looks something like 'root@ip- #' (no IP address) and no Ubuntu welcome message. If that occurs, you may have to reboot the instance. Click on 'Instances' in the left-hand column, select your new instance, right-click, and select Reboot. After a few minutes restart your PuTTY session and confirm a correct Ubuntu start message and prompt.
Uploading Files to Your Instance for Rendering
Before uploading your LuxRender Scene File, be sure to copy any image textures that you are using into the folder that contains the .lxs file, as LuxRender will automatically look in this directory for any images that are required to render the scene. Then zip up the folder using whatever type of compression method you prefer.
There are many ways to get files into and back out of your instance, but using the scp command (Secure CoPy, which is a component of ssh) is the easiest and doesn't require uploading the files to a web server, or installing an ftp server on your instance.
In a new Terminal window type: scp -i path_to_keypair/keypair.pem source_file_location/file ubuntu@public_DNS:/path_to_copy_file_to
Once the file has finished uploading from your local computer to the instance, you can close the scp terminal window, switch back to the Terminal window in which you are logged into your instance, and type the ls command inside the directory you copied the file to. You should now be able to see the zip file.
Now unzip the file by typing: unzip file.zip
Type: ls to see the directory of the unzipped file
If you wish to, you can remove the zip file by typing: rm file.zip
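As a concrete example, uploading a zipped scene to the ubuntu user's home directory might look like this (the key path, file paths, and DNS name are all placeholders):

```shell
# Run this from your LOCAL machine, not from inside the instance.
scp -i ~/keys/mykeypair.pem ~/renders/myscene.zip \
    ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:/home/ubuntu/
```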
Rendering a File
Once you have uploaded a scene file to the instance, you can start rendering it.
Type: pwd
This will print the working directory, showing where you are in the file system. Now change directories into the LuxRender directory using the "cd" command.
Type: cd LuxRender
Typing ls inside of the LuxRender directory will print a list of all the directories and files inside of the current directory.
Now we are going to render the lux scene file that we have uploaded to our instance. While still in the LuxRender directory, type: ./luxconsole path_to_Lux_Scene_File
The ./ prefix executes the named file from the current directory.
Your file should now be rendering, and the terminal window should have an output identical to the Log Tab in the LuxRender GUI.
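As a hypothetical example, if the uploaded zip unpacked into /home/ubuntu/myscene, starting the render would look like this (directory and file names are placeholders):

```shell
cd ~/LuxRender                               # the renamed LuxRender directory
./luxconsole /home/ubuntu/myscene/myscene.lxs
```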
Downloading a Rendered File from the Instance
You can transfer the rendered image back to your machine with the same scp command that we used to upload the scene file to the instance, but this time it is used in reverse.
In a new terminal window type: scp -i path_to_keypair/keypair.pem ubuntu@public_dns:/path_to_source_file/filename.png /path_to_copy_to_on_local_machine/
You can transfer any file in this way: png, tiff, flm, etc.
Once the file transfer has finished, you can close the Terminal window.
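For example, pulling a finished PNG back down might look like this (again, the key path, remote path, and DNS name are placeholders):

```shell
# Run from your LOCAL machine; paths and the DNS name are placeholders.
scp -i ~/keys/mykeypair.pem \
    ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:/home/ubuntu/myscene/myscene.png \
    ~/renders/
```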
Instance Type Rendering Value
The different instance types that Amazon offers in EC2 have drastically different amounts of computational power. As there is also a wide range of costs for the different instance types, I decided to render the two example scenes that shipped with LuxRender 0.8, LuxTime_by_freejack and SchoolCorridor_by_BYOB, each for one hour. The results from the two scenes were averaged together and plotted below. Most of the high-CPU instance types, with the exception of the cc2.8xlarge, give relatively good value in terms of computational power per dollar. However, if you are rendering a scene that requires more RAM than is available with a high-CPU instance type, you can use a high-memory instance type, but at a lower value in samples per pixel per dollar.
Though not a perfectly fair comparison, this data was extrapolated to give users a sense of how much rendering power to expect.
| kC/s | Date | Executable type | LuxRender version | Hardware description | OS description | Author |
|------|------|-----------------|-------------------|----------------------|----------------|--------|
| 1410 | 2012/5 | Luxrender | 1.0 RC1 | cc2.8xlarge | Ubuntu 12.04 Server | Jack Eden |
| 1170.00 | 2012/03/26 | Luxrender (OpenCL) | 0.8 official release | Intel i7-3930k, 4.7 GHz | Sabayon Linux 8.0 (Gentoo kernel 3.2-r12) | Nofew |
| 1130.00 | 2012/01/11 | luxconsole (no OpenCL) | 0.8 official release | Super Micro (6016GT-TF-FM209) 12 core with dual NVIDIA M2090 Tesla | CentOS Linux 6.1 x86-64 | rovitotv |
| 1053 | 2012/5 | Luxrender | 1.0 RC1 | cc1.4xlarge | Ubuntu 12.04 Server | Jack Eden |
| 771.29 | 2011/07/05 | luxrender.exe | 0.8 official release | i7 2600k, 4.6 GHz | Windows Vista 64 | callistratis |
| 750 | 2012/5 | Luxrender | 1.0 RC1 | m2.4xlarge | Ubuntu 12.04 Server | Jack Eden |
| 731.20 | 2011/06/27 | luxrender.exe | 0.8 official release | i7-970, 3.2 GHz | Windows 7 64 | Abel |
| 662.03 | 2011/06/13 | luxconsole (no OpenCL) | 0.8 official release | Phenom X6 1090T, 4.0 GHz | Gentoo Linux x86-64 | SATtva |
| 595.83 | 2011/06/13 | luxconsole (no OpenCL) | 0.8 official release | Intel Core i7 860, 3.62 GHz | Ubuntu Linux 11.04 x86-64 | LadeHeria |
| 537 | 2012/5 | Luxrender | 1.0 RC1 | c1.xlarge | Ubuntu 12.04 Server | Jack Eden |
| 527.88 | 2011/06/13 | luxconsole (no OpenCL) | 0.8 official release | Phenom X6 1090T, 3.2 GHz | Ubuntu 10.04 64 | Abel |
| 507.57 | 2011/06/13 | LuxRender.app | 0.8 official release | Xeon W3530, 2.8 GHz | OS X 10.6.7 | JtheNinja |
| 507.12 | 2012/01/29 | luxconsole | 0.8 official release | 2.66 GHz Quad-Core Intel Xeon | Mac OS X Lion 10.7.2 (11C74) | Decamino |
| 504.70 | 2011/06/13 | luxrender (no OpenCL) | 0.8 official release | Phenom X6 1090T, 3.2 GHz | Ubuntu 10.04 64 | Abel |
| 486.29 | 2011/06/19 | luxrender.exe | 0.8 official release | Xeon x3470, 2.93 GHz | Windows 7 64 | Abel |
| 449.16 | 2011/07/21 | luxrender.exe | 0.8 official release | Intel Core i5 2500k, 3.3 GHz | Windows 7 64 | twilight76 |
| 430.58 | 2011/06/13 | luxrender.exe | 0.8 official release | Intel Core i7 920, 2.67 GHz | Windows 7 64 | moure |
| 395.81 | 2011/06/27 | luxrender | 0.8 official release | Phenom II X6 1055t, 2.8 GHz | Ubuntu 11.04 64bit | B.Y.O.B. |
| 385 | 2012/5 | Luxrender | 1.0 RC1 | m2.2xlarge | Ubuntu 12.04 Server | Jack Eden |
| 330.02 | 2011/08/06 | LuxRender.app | 0.8 official release | Intel Core2 Quad Q9550 @ 2.83 GHz | OSX 10.7.0 | Eros |
| 294.88 | 2011/07/09 | luxrender.exe | 0.8 official release | Intel Core2 Quad Q9550 @ 2.83 GHz | Scientific Linux 6 64bit | Eros |
| 293 | 2012/5 | Luxrender | 1.0 RC1 | m1.xlarge | Ubuntu 12.04 Server | Jack Eden |
| 269.79 | 2011/06/23 | luxrender | 0.8 official release | Intel Core i7-720QM, 1.60 GHz | Ubuntu Linux 11.04 x64 | gumtree |
| 257.73 | 2011/06/23 | luxrender.exe | 0.8 official release | Intel Core i7-720QM, 1.60 GHz | Windows 7 64 | gumtree |
| 194 | 2012/5 | Luxrender | 1.0 RC1 | m2.xlarge | Ubuntu 12.04 Server | Jack Eden |
| 144.71 | 2011/06/22 | LuxRender.app | 0.8 official release | Core2Duo 2.53 GHz | OS X 10.6.7 | Eros |
| 140.24 | 2011/06/22 | luxrender.exe | 0.8 official release | Core2Duo 3.0 GHz (E8400) | Windows 7 64 | edna |
| 136 | 2012/5 | Luxrender | 1.0 RC1 | c1.medium | Ubuntu 12.04 Server | Jack Eden |
| 112 | 2012/5 | Luxrender | 1.0 RC1 | m1.large | Ubuntu 12.04 Server | Jack Eden |
| 72.70 | 2011/06/13 | luxrender.exe | 0.8 official release | Core2Duo 2.4 GHz (P8600) | Windows XP | Abel |
| 68.53 | 2011/07/13 | luxrender (no OpenCL) | 0.8 official release | Pentium SU4100, 1.3 GHz | Ubuntu 11.04 32bit | B.Y.O.B. |
| 64 | 2012/5 | Luxrender | 1.0 RC1 | m1.medium | Ubuntu 12.04 Server | Jack Eden |
| 33 | 2012/5 | Luxrender | 1.0 RC1 | m1.small | Ubuntu 12.04 Server | Jack Eden |
| 15 | 2012/5 | Luxrender | 1.0 RC1 | t1.micro | Ubuntu 12.04 Server | Jack Eden |
Reducing Rendering Costs with Spot Instances
You can bid on Amazon EC2's unused capacity by using Spot Instances. Spot Instances are generally cheaper than on-demand instances. Spot prices fluctuate with demand, so the only way to see the current spot price of an instance type is to log into your EC2 console, go to the Spot Request Page, and then click on Pricing History.
Requesting a Spot Instance works the same way as launching an on-demand instance, except that you specify a maximum spot price for the number of instances you wish to launch. Even if your initial bid is higher than the current spot price, your spot instance request is usually not fulfilled immediately. Spot Instances can take from several minutes to an hour or two to become available, depending on capacity.
Once your Spot Instance launches, your instance will stay active as long as your bid remains higher than the current spot price. If the spot price rises above your maximum bid price, your instance will be terminated, and all files stored locally on the instance will be lost.
There are two ways to gain persistent storage so that you will not lose your rendered files in the event your instance is terminated. These are Amazon's S3 service and EBS Volumes.
Persistent Storage Using S3
Amazon's S3 (Simple Storage Service) is an easy way to add persistent storage to your instance. One other option is creating and connecting an Elastic Block Store, or EBS, to the instance. An advantage of S3 over EBS is that files can be uploaded, downloaded, or previewed in the AWS Console in your web browser. This allows you to easily monitor the progress of a render from a web interface, as you will be able to quickly look at the png files that luxconsole writes at the interval specified in your scene file. Another advantage of S3 over EBS is that the same S3 bucket can be simultaneously mounted to multiple instances, unlike an EBS volume, which can only be mounted to a single instance at a time. Of course you must be sure not to read/write to the same file from two different instances.
Although cost effective, be sure to familiarize yourself with the costs associated with using S3.
In order to connect an S3 bucket to an instance, some packages must be installed and a password file created. It is best to perform these tasks on a clean instance on which you have already installed the earlier dependencies, because you will want to take a snapshot of the instance when you are done, so that any instances you spin up in the future will have the S3 dependencies installed on them.
Prior to using Amazon EC2, I personally had never worked in a command-line environment. One of the purposes of this tutorial is to familiarize other artists who may be unfamiliar with the command line with commands to get things done.
First we are going to make sure apt-get is up to date. Type: sudo apt-get update
Next we are going to download and install the required libraries and dependencies for s3. Type: sudo apt-get install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support make
The package files will be read and then you will be asked if you wish to continue. Type: y
Now download the s3fs code from Google Code with: wget http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz
Now unpack the file with: tar xvzf s3fs-1.61.tar.gz
Now we are going to change directories into the directory that was created by unpacking the tar. Type: cd s3fs-1.61/
Then type: ./configure
Then type: make
Then type: make install
Now we are going to create a file in the /etc/ directory. Type: sudo touch /etc/passwd-s3fs
Now change directories to the /etc/ directory, which is a top-level directory. Typing: cd .. moves you up one directory level, so from the home directory /home/ubuntu you can type: cd ../.. to move up two levels at once, to the top (root) of the file system.
Once at the top directory level, type: cd etc to move into the etc directory. (You can also jump there directly from anywhere with: cd /etc)
If you type: ls a list of all files in the directory will be displayed. You should see the file passwd-s3fs; this is the file that was created with the touch command. We are going to edit this file with the nano editor. Type: sudo nano passwd-s3fs
You need to copy and paste your access key and secret access key into this file. First you need to get your access and secret access keys. When you are logged into the AWS management console, near the upper-right hand corner of the screen you should see My Account/Console as a drop-down menu. Inside of this menu is a Security Credentials line, click on it. This will take you to a page that will show you your access and secret access keys.
Inside the passwd-s3fs file, paste the access and secret access keys in the format: access_key:secret_access_key making sure there are no spaces or carriage returns. Then save the file and exit with ^X
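The credentials-file setup can be done in one pass, as sketched below. To keep the sketch runnable anywhere it writes to a scratch directory; on the instance the file would be /etc/passwd-s3fs, with each command prefixed with sudo. The key values shown are fakes:

```shell
# Demonstration in a scratch directory; on the instance use /etc/passwd-s3fs
# (prefix the commands with sudo there). The credentials below are fakes.
mkdir -p scratch-etc
echo 'AKIAEXAMPLEKEY:abc123ExampleSecretKey' > scratch-etc/passwd-s3fs
chmod 640 scratch-etc/passwd-s3fs   # restrict access: owner rw, group read-only
ls -l scratch-etc/passwd-s3fs
```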
We need to make the passwd-s3fs file private. Type: sudo chmod 640 /etc/passwd-s3fs
Now you need to make a directory to which the S3 Bucket can be mounted. I named mine s3 and placed it in the /mnt/ directory. Type: sudo mkdir /mnt/s3
An S3 bucket can be mounted to any empty directory, e.g. you could create a directory in /home/ubuntu named mount and thus the mount point would be /home/ubuntu/mount. Note that the bucket_name will never be used in any file hierarchy once the bucket is mounted.
Your instance is now ready to mount an S3 Bucket. If you have not yet created a Bucket, go to the S3 tab in your AWS console, and create a Bucket. When you have a Bucket created, you can mount it to your instance with the command: sudo s3fs bucket_name /mnt/s3
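Putting the mount steps together (the bucket name and mount point below are placeholders — substitute your own):

```shell
sudo mkdir -p /mnt/s3                    # create the mount point (only needed once)
sudo s3fs my-unique-bucket-name /mnt/s3  # mount the bucket
df -h /mnt/s3                            # verify that the bucket is mounted
```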
If you have created your mount-point directory in a different place or with a different name, you will need to change the above command to suit.
There are very specific S3 Bucket naming conventions. Each Bucket in the global S3 system must have a unique bucket_name, so two users of the Amazon S3 system could not each have a Bucket with the same bucket_name. See the Amazon S3 Best Practices article to learn more about this.
If your S3 Bucket mounted successfully, now is a good time to unmount the bucket and take a snapshot of the instance so that you can spin up instances in the future that are pre-configured to mount S3 buckets.
You can unmount a bucket by typing: sudo umount /mnt/s3
Again, if you have created your mount-point directory in a different place or with a different name, you will need to change the above command to suit.
You should now be able to cd into your mount directory and see any files you have placed in your S3 bucket through the AWS S3 web interface.
Note: the author of this tutorial has yet to learn how to mount a bucket as a non-root user. In order to change directories into the s3 bucket, you must be the root user. Become the root user by typing: sudo bash. You can exit out of the root user by typing: exit. When calling luxconsole and pointing luxconsole to render a scene file in an s3 bucket, you must either be the root user, or the sudo command must be used in order for luxconsole to be able to read from and write back to the bucket, e.g.: sudo luxconsole /mnt/s3/lux_scene_file.lxs. If another user of this wiki can figure out how to allow non-root users to mount and read-write to buckets, please add these detailed instructions to this wiki page.
You can now render files in your S3 Bucket by directing luxconsole to them.
You can now use spot instances that have an S3 bucket mounted to them, and in the case your instance is terminated due to the spot price increasing above your bid price, you will not lose your rendered work.
Transferring Large Files Out of S3
In the author's experience, downloading large files through the Amazon Management Console S3 web interface has historically been very slow. If you are experiencing slow download speeds, one method for moving large files from your S3 bucket to your local machine is to zip up the files from an EC2 instance while the bucket is mounted, use the copy (cp) command to copy the file or zipped files from the Bucket into a local directory on the instance, and then use the scp command, explained above, to move the files from the instance to your local machine. Note that the move (mv) command should be used with caution, or avoided altogether: the files will be completely moved from the S3 bucket to the instance, and if the instance is inadvertently terminated, or, if using spot instances, terminated due to an increase in the spot price, your data will be lost.
Persistent Storage Using EBS Volumes
An alternative to using S3 is to create and attach an EBS volume to an instance. If your render files are located inside an EBS volume and the instance is inadvertently terminated, your files will persist. However, to access them, you must spin up a new instance and connect the EBS volume to the newly spun-up instance.
To use EBS Volumes, you must first create a new volume.
Inside of the Amazon EC2 console, click on Volumes under ELASTIC BLOCK STORE. Then click on Create Volume. A hovering window will pop up and allow you to input the size of the volume, its availability zone, as well as any snapshots you wish to use that have pre-loaded data in them. We want to create a blank volume that we can fill with render files, so choose No Snapshot.
After you have spun up your instance, take note of the instance name/number that Amazon assigns to it. Inside of the Volumes window in the Console, right-click on the EBS volume that you want to mount to your instance and choose Attach Volume and select the instance from the drop down list.
Once you ssh into the instance, cd into the /dev/ directory to confirm that you can see the attached volume, xvdf. A new, blank volume must be formatted before it can be used. However, if the volume contains data that you need to access, do not format it, as formatting will erase all data on the volume; if you just need to mount the volume, skip this step.
Format the volume with the command: sudo mkfs -t ext3 /dev/xvdf
Next you need to create a directory to mount the volume to. Type: sudo mkdir /mnt/data-store
Note: You can name the mount point whatever you want. Instead of data-store, you could name it LuxScenes.
And finally mount the volume by typing the command: sudo mount /dev/xvdf /mnt/data-store
Any files that you place inside of the directories located within the mount directory will still exist on the EBS volume in the event that the instance is terminated.
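The attach-format-mount sequence, condensed (device name and mount point as in the steps above; only format a brand-new, empty volume):

```shell
# DANGER: mkfs erases everything on the volume.
# Only run it on a new, empty volume.
sudo mkfs -t ext3 /dev/xvdf            # format the blank volume
sudo mkdir /mnt/data-store             # create the mount point
sudo mount /dev/xvdf /mnt/data-store   # mount the volume
df -h /mnt/data-store                  # confirm the volume is mounted
```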
Network Rendering with Amazon EC2
Although it is possible to network EC2 instances to your local machine to act as server nodes for rendering, this practice is highly discouraged, as the internet connection to your home or office is usually not stable enough to complete long renders for more than a few hours. Besides this, there will usually be a speed bottleneck between your EC2 instances and your local machine, as the FILM files that are passed from the server nodes to the master are quite large.
If you need the expanded computing power that can be gained by networking multiple machines together, it is recommended that you spin up multiple EC2 instances, SSH into each instance, set one of them to be the master and the remaining instances to be slaves, and do all your network rendering locally inside of the EC2 network. If you wish to use your local machine(s) along with EC2 instances to render a single image, it is recommended that you merge the FILM file from EC2 with the FILM file from your local machine(s) once the desired SPP has been achieved.
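On the command line, that master/slave setup looks roughly like the sketch below (based on luxconsole's network-rendering flags, -s for server mode and -u to add a slave; verify against your LuxRender version). The slaves listen on port 18018, the port opened in the security group earlier; the IP addresses and scene path are placeholders:

```shell
# On EACH slave instance: start luxconsole in network server mode
# (listens on port 18018 by default).
./luxconsole -s

# On the MASTER instance: render the scene, adding each slave by its
# private IP or DNS name (placeholder addresses shown).
./luxconsole -u 10.0.0.11 -u 10.0.0.12 /home/ubuntu/myscene/myscene.lxs
```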