
Using Amazon's Elastic Compute Cloud (EC2) Service to Add Rendering Capacity

This tutorial is designed to introduce you to Amazon Web Services' Elastic Compute Cloud (EC2) and show you how to use it to add rendering capacity whenever you need it.

This tutorial is intended to be a detailed, step-by-step lesson on how to launch an EC2 instance running an Ubuntu Linux AMI, install and run LuxRender on it, and render files. It assumes that you have a good understanding of LuxRender, but that you do not know what EC2 is or how to work in a command-line environment. We will start with the basics of configuring an instance and getting it running, and then move on to working with several instances networked together.

Preparation

Go to http://aws.amazon.com/ and sign up for an account. You will need a credit card and a telephone. Amazon Web Services offers a free usage tier that lets you learn how to set up and manage an instance at no cost, so you can experiment with the EC2 service for free. To find out more about free tier eligibility, see http://aws.amazon.com/free/

Use the EC2 Getting Started Guide to familiarize yourself with the services Amazon offers.

Inside your EC2 dashboard, click the Launch Instance button.
A window will open, giving you two options: the "Classic" wizard or "Quick Launch".

Choose the "Classic" wizard!
Select an Ubuntu Server 12.04 LTS AMI.

EC2 tut Wizard 1.jpg

The next window asks you to select an instance type from a drop-down menu. The t1.micro instance is "Free tier eligible" if you are a new AWS user.

EC2 tut Wizard 2 1.jpg

The next window lets you set the instance parameters. Leave these at their default values and click Continue.

The next window lets you give the instance a name. Leave it at the default for now. Click Continue.

The next window will prompt you to either choose an existing key pair that you have already set up, or to create a new key pair. Name your key pair and click Create & Download your Key Pair.

EC2 tut Wizard 3 1.jpg

The next window lets you configure a firewall. This configuration is saved, so you only have to set it up once. You will need to set up two security rules.

First, select SSH from the drop-down menu. If you want to limit SSH access to the instance to certain IP addresses, enter them or an IP range; otherwise, leave this field at the default value of 0.0.0.0/0 to allow access from any computer that has your key pair.

The next rule is only needed if you want to network several instances together to render the same file. Select Custom TCP Rule from the drop-down menu, then enter 18018 in the Port field. Leave the source at its default value of 0.0.0.0/0. Click Continue.

EC2 tut Wizard 4 1.jpg

The last window gives a summary of the settings of the instance(s) you are about to launch. Click the Launch button.

EC2 tut Wizard 5 1.jpg

Once your instance has finished launching, you will see it listed in your management dashboard. Selecting the instance will also display the information you need to connect to it.

EC2 tut Instance 1.jpg



Connecting to Your Instance Using a Terminal Window

Next, we need to connect to the instance so that we can set it up.

If you are on a Unix machine, Mac OS X or Linux, open a terminal window.

In this window, connect to the instance with the following command:

ssh -i keypairpath/keypair.pem ubuntu@publicDNS

The instance's public DNS can be found in the EC2 instance dashboard for the currently selected instance.

If you are connecting to the instance for the first time, you will be asked whether you want to continue. Type "yes".
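For example, assuming a key pair saved as luxrender.pem in a keys folder in your home directory and a hypothetical public DNS name (substitute your own values for both), the connection looks like this:

chmod 400 ~/keys/luxrender.pem    # ssh refuses keys that other users can read, so lock the file down once
ssh -i ~/keys/luxrender.pem ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com    # log in as the default ubuntu user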

EC2 tut Terminal 1 1.jpg

Next, we need to install two library dependencies that are not shipped with the Ubuntu Server operating system but are required to run LuxRender.

Type (or copy & paste): sudo apt-get install libglu1-mesa

After the package lists have been read, you will be asked whether you want to continue. Type: 'y' and the files will be installed.

Now for the second library, type: sudo apt-get install libsm-dev

Again, type: y to confirm that you want to install these packages.

The next two applications are useful, but not required. They are zip & unzip. If you want to install them, type: sudo apt-get install unzip zip
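For reference, here are the three install commands from this section in one place, with a note on what each provides; answer y when apt-get asks for confirmation:

sudo apt-get install libglu1-mesa    # Mesa OpenGL utility library required by LuxRender
sudo apt-get install libsm-dev       # X session management library required by LuxRender
sudo apt-get install unzip zip       # optional: zip & unzip, handy for scene archives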

Next, we need to install LuxRender. In your web browser, go to the "Install LuxRender" page of the site, right-click on the 64-bit Linux "No OpenCL" version icon and select "Copy Link Location" (the wording may differ slightly depending on your operating system and web browser).

Return to your terminal window and type: wget paste_the_LuxRender_download_link_here

EC2 tut Terminal 2.png

Type: ls

The ls command prints a list of all the files & directories in the current directory you are in.

You should now be able to see the .tar file that you downloaded. We now need to unpack it.
Type: tar -xf file_name

Type: ls

You should now be able to see the directory that was created when the package was unpacked.

You will need to remove the tar file, and then rename the directory to LuxRender.

Type: rm tar_file_name

Type: mv current/lux/directory/name new_name

rm is short for "remove", and mv is short for "move". The mv command can be used to move a directory or file from one location to another. If no path is specified, the command simply renames the file or directory.

Typing: ls again, you will see that the tar file has been deleted and the original LuxRender directory has been renamed.
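As a recap, the whole download-and-install sequence looks something like the following. The archive and directory names are hypothetical; they depend on the exact LuxRender release you downloaded, so use the link you copied and the names that ls actually shows you:

wget http://example.com/lux-v10RC1-x86_64-sse2.tar.bz2    # paste the real download link you copied
tar -xf lux-v10RC1-x86_64-sse2.tar.bz2                    # unpack the archive
rm lux-v10RC1-x86_64-sse2.tar.bz2                         # delete the archive to free space
mv lux-v10RC1-x86_64-sse2 LuxRender                       # rename the unpacked directory to LuxRender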

EC2 tut Terminal 3.png

Your instance is now set up for rendering. At this point, take a snapshot of the instance so that you can launch new instances with this same setup. This will save you the trouble of installing the dependencies and LuxRender on the instances you launch in the future.

Select the instance in your instance dashboard, go to the Instance Actions drop-down menu and select Create Image (EBS AMI). EBS AMI stands for Elastic Block Store Amazon Machine Image. You will be prompted to give the image a name and a description. It is helpful to name the image with the date as well as the version of LuxRender you have installed. It will take several minutes for the image to be created, during which you will be kicked off the instance. You will be able to connect to it again once the image has been created.

EC2 tut Instance 1.png



Connecting to Your Instance Using PuTTY

1. Start PuTTY.

2. Under 'Session' enter 'root@<hostname>' (for example: 'root@ec2-204-236-179-219.us-west-1.compute.amazonaws.com') or 'root@<ip_address>'. I type 'root@' and then paste the hostname after it so as to prevent typos.

3. In the left-hand panel click 'Connection', then 'SSH', and select 'Auth'. The PuTTY Configuration dialog box appears. Click Browse, and select the PuTTY private key file you generated and named 'id_rsa-gsg-keypair.ppk' if you followed the instructions in the 'Appendix: Putty' section.

4. Click 'Open' to connect to your Amazon EC2 instance.

5. You may get a Putty Security Alert about a host key not being cached in the registry. Click Yes.

6. You should be presented with a terminal screen welcoming you to an Ubuntu session and ending with a command prompt that looks something like 'root@ip-<ip address> #'. See the screenshot image below.

EC2 tut 3.jpg

NOTE: I have had it occur that the instance starts without Ubuntu loading! Instead of the above you will get a command prompt that looks something like 'root@ip- #' (no IP address) and no Ubuntu welcome message. If that occurs, you may have to reboot the instance. Click on 'Instances' in the left-hand column, select your new instance, right-click and select Reboot. After a few minutes restart your PuTTY session and confirm a correct Ubuntu start message and prompt.



Uploading Files to Your Instance for Rendering

Before uploading your LuxRender scene file, be sure to copy any image textures that you are using into the folder that contains the .lxs file, as LuxRender will automatically look in this directory for any images that are required to render the scene. Then zip up the folder using whatever type of compression method you prefer.

There are many ways to get files into and back out of your instance, but using the scp command (Secure CoPy, which is a component of ssh) is the easiest and doesn't require uploading the files to a web server, or installing an ftp server on your instance.

In a new Terminal window type: scp -i path_to_keypair/keypair.pem source_file_location/file ubuntu@public_DNS:/path_to_copy_file_to

EC2 tut Terminal 4.png

Once the file has finished uploading from your local computer to the instance, you can close the scp terminal window, switch back to the Terminal window in which you are logged into your instance, and type the ls command inside the directory you copied the file to. You should now be able to see the zip file.
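As a concrete sketch with placeholder names (the key path, scene name and DNS name are all hypothetical), zipping the scene folder and copying it to the ubuntu user's home directory looks like this:

zip -r my_scene.zip my_scene/    # run locally: archive the folder holding the .lxs and its textures
scp -i ~/keys/luxrender.pem my_scene.zip ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:/home/ubuntu/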

Now unzip the file by typing: unzip file.zip

Type: ls to see the directory of the unzipped file

If you wish to, you can remove the zip file by typing: rm file.zip

EC2 tut Terminal 5.png



Rendering a File

Once you have uploaded a scene file to the instance, you can start rendering it.

Type: pwd

This will print the working directory, which shows where you currently are in the filesystem. Now change directories into the LuxRender directory using the "cd" command.

Type: cd LuxRender

Typing ls inside of the LuxRender directory will print a list of all the directories and files inside of the current directory.

Now we are going to render the lux scene file that we have uploaded to our instance. While still in the LuxRender directory, type: ./luxconsole path_to_Lux_Scene_File

The ./ prefix tells the shell to execute the named file from the current directory.
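Putting the navigation and render commands of this section together, a session might look like this; the scene path is a placeholder matching the upload example above:

pwd                                                # show where you currently are
cd LuxRender                                       # move into the renamed LuxRender directory
ls                                                 # confirm luxconsole is present
./luxconsole /home/ubuntu/my_scene/my_scene.lxs    # render the uploaded scene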

EC2 tut Terminal Render.png

Your file should now be rendering, and the terminal window should have an output identical to the Log Tab in the LuxRender GUI.



Retrieving the Rendered Image from Your Instance

You can transfer the rendered image back to your machine with the same scp command that we used to upload the scene file to the instance, but this time it is used in reverse.

In a new terminal window type: scp -i path_to_keypair/keypair.pem ubuntu@public_dns:/path_to_source_file/filename.png /path_to_copy_to_on_local_machine/

EC2 tut Terminal Download.png

You can transfer any file in this way: png, tiff, flm, etc.
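For example, to pull a finished PNG down into the current directory on your local machine (the key path, DNS name and remote path are all placeholders; point the remote path at wherever luxconsole wrote the image):

scp -i ~/keys/luxrender.pem ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:/home/ubuntu/LuxRender/my_scene.png .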

Once the file transfer has finished, you can close the Terminal window.



Rendering Costs of the Instance Types

The different instance types that Amazon offers in EC2 have drastically different amounts of computational power. As there is also a wide range of costs for the different instance types, I decided to render the two example scenes that shipped with LuxRender 0.8, LuxTime_by_freejack and SchoolCorridor_by_BYOB, each for one hour. The results from the two scenes were averaged together and plotted below. Most of the high-CPU instance types, with the exception of the cc2.8xlarge, give relatively good value in terms of computational power per dollar. However, if you are rendering a scene that requires more RAM than is available with a high-CPU instance type, you can use a high-memory instance type, but at a lower value in samples per pixel per dollar.

Relative Render Power (Samples Per Pixel Per Hour) across instance types, as of May 2012.

 

Relative Cost Value (Samples Per Pixel Per Dollar) across instance types, as of May 2012.

Though not a strictly fair comparison, this data was added to give users an idea of how much rendering power to expect from each instance type.


kC/s Date Application Version Hardware OS Author
1410.00 2012/5 Luxrender 1.0 RC1 cc2.8xlarge Ubuntu 12.04 Server Jack Eden
1170.00 2012/03/26 Luxrender (OpenCL) 0.8 official release Intel i7-3930k, 4.7 Ghz Sabayon Linux 8.0 (Gentoo kernel 3.2-r12) Nofew
1130.00 2012/01/11 luxconsole (no OpenCL) 0.8 official release Super Micro (6016GT-TF-FM209) 12 core with dual NVIDIA M2090 Tesla CentOS Linux 6.1 x86-64 rovitotv
1053.00 2012/5 Luxrender 1.0 RC1 cc1.4xlarge Ubuntu 12.04 Server Jack Eden
771.29 2011/07/05 luxrender.exe 0.8 official release i7 2600k, 4,6 GHz Windows Vista 64 callistratis
750.00 2012/5 Luxrender 1.0 RC1 m2.4xlarge Ubuntu 12.04 Server Jack Eden
731.20 2011/06/27 luxrender.exe 0.8 official release i7-970, 3,2 GHz Windows 7 64 Abel
662.03 2011/06/13 luxconsole (no OpenCL) 0.8 official release Phenom X6 1090T, 4.0 GHz Gentoo Linux x86-64 SATtva
595.83 2011/06/13 luxconsole (no OpenCL) 0.8 official release Intel Core i7 860, 3.62 GHz Ubuntu Linux 11.04 x86-64 LadeHeria
537.00 2012/5 Luxrender 1.0 RC1 c1.xlarge Ubuntu 12.04 Server Jack Eden
527.88 2011/06/13 luxconsole (no OpenCL) 0.8 official release Phenom X6 1090T, 3.2 GHz Ubuntu 10.04 64 Abel
507.57 2011/06/13 LuxRender.app 0.8 official release Xeon W3530 2.8Ghz OS X 10.6.7 JtheNinja
507.12 2012/01/29 luxconsole 0.8 official release 2,66 GHz Quad-Core Intel Xeon Mac OS X Lion 10.7.2 (11C74) Decamino
504.70 2011/06/13 luxrender (no OpenCL) 0.8 official release Phenom X6 1090T, 3.2 GHz Ubuntu 10.04 64 Abel
486.29 2011/06/19 luxrender.exe 0.8 official release Xeon x3470, 2.93 GHz Windows 7 64 Abel
449.16 2011/07/21 luxrender.exe 0.8 official release Intel Core i5 2500k, 3.3 GHz Windows 7 64 twilight76
430.58 2011/06/13 luxrender.exe 0.8 official release Intel Core i7 920, 2.67 GHz Windows 7 64 moure
395.81 2011/06/27 luxrender 0.8 official release Phenom II X6 1055t, 2.8 GHz Ubuntu 11.04 64bit B.Y.O.B.
385.00 2012/5 Luxrender 1.0 RC1 m2.2xlarge Ubuntu 12.04 Server Jack Eden
330.02 2011/08/06 LuxRender.app 0.8 official release Intel Core2 Quad Q9550@ 2.83GHz OSX 10.7.0 Eros
294.88 2011/07/09 luxrender.exe 0.8 official release Intel Core2 Quad Q9550@ 2.83GHz Scientific Linux 6 64bit Eros
293.00 2012/5 Luxrender 1.0 RC1 m1.xlarge Ubuntu 12.04 Server Jack Eden
269.79 2011/06/23 luxrender 0.8 official release Intel Core i7-720QM, 1.60 GHz Ubuntu Linux 11.04 x64 gumtree
257.73 2011/06/23 luxrender.exe 0.8 official release Intel Core i7-720QM, 1.60 GHz Windows 7 64 gumtree
194.00 2012/5 Luxrender 1.0 RC1 m2.xlarge Ubuntu 12.04 Server Jack Eden
144.71 2011/06/22 LuxRender.app 0.8 official release Core2Duo 2.53GHz OS X 10.6.7 Eros
140.24 2011/06/22 luxrender.exe 0.8 official release Core2Duo 3.0GHz (E8400) Windows 7 64 edna
136.00 2012/5 Luxrender 1.0 RC1 c1.medium Ubuntu 12.04 Server Jack Eden
112.00 2012/5 Luxrender 1.0 RC1 m1.large Ubuntu 12.04 Server Jack Eden
72.70 2011/06/13 luxrender.exe 0.8 official release Core2Duo 2.4 GHz (P8600) Windows XP Abel
68.53 2011/07/13 luxrender (no OpenCL) 0.8 official release Pentium SU4100, 1.3GHz Ubuntu 11.04 32bit B.Y.O.B.
64.00 2012/5 Luxrender 1.0 RC1 m1.medium Ubuntu 12.04 Server Jack Eden
33.00 2012/5 Luxrender 1.0 RC1 m1.small Ubuntu 12.04 Server Jack Eden
15.00 2012/5 Luxrender 1.0 RC1 t1.micro Ubuntu 12.04 Server Jack Eden



Reducing Rendering Costs by Using Spot Instances

You can bid on Amazon EC2's unused capacity by using Spot Instances. Spot Instances are generally cheaper than on-demand instances. Spot prices fluctuate with demand, so the only way to see the current spot price of an instance type is to log into your EC2 console, go to the Spot Request Page, and then click on Pricing History.

EC2 tut SpotPrice.png

Requesting a Spot Instance works the same way as launching an on-demand instance, except that you specify a maximum spot price for the number of instances you wish to launch. Even if your initial bid is higher than the current spot price, your spot instance request is usually not fulfilled immediately. Spot Instances can take from several minutes to an hour or two to become available, depending on availability.

Once your Spot Instance launches, your instance will stay active as long as your bid remains higher than the current spot price. If the spot price rises above your maximum bid price, your instance will be terminated, and all files stored locally on the instance will be lost.

There are two ways to gain persistent storage so that you will not lose your rendered files in the event your instance is terminated. These are Amazon's S3 service and EBS Volumes.



Persistent Storage Using an S3 Bucket

Amazon's S3 (Simple Storage Service) is an easy way to add persistent storage to your instance. The other option is creating and attaching an Elastic Block Store, or EBS, volume to the instance. An advantage of S3 over EBS is that files can be uploaded, downloaded or previewed in the AWS Console in your web browser. This allows you to easily monitor the progress of a render from a web interface, as you will be able to quickly look at the png files that luxconsole writes at the interval specified in your scene file. Another advantage of S3 over EBS is that the same S3 bucket can be mounted to multiple instances simultaneously, unlike an EBS volume, which can only be attached to a single instance at a time. Of course you must be sure not to read/write to the same file from two different instances.

Although cost effective, be sure to familiarize yourself with the costs associated with using S3.

In order to connect an S3 bucket to an instance, some files must be installed and a password file created. It is best to perform these tasks on a clean instance on which you have already set up the earlier dependencies, because you will want to take a snapshot of the instance when you are done, so that any instances you spin up in the future will have the S3 dependencies installed on them.

Prior to using Amazon EC2, I personally had never worked in a command-line environment. One of the purposes of this tutorial is to familiarize other artists who may be unfamiliar with the command line with commands to get things done.

First we are going to make sure apt-get is up to date. Type: sudo apt-get update

Next we are going to download and install the required libraries and dependencies for s3. Type: sudo apt-get install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support make

The package files will be read and then you will be asked if you wish to continue. Type: y

Now download the s3fs code from google with: wget http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz

Now unpack the file with: tar xvzf s3fs-1.61.tar.gz

Now we are going to change directories into the directory that was created by unpacking the tar. Type: cd s3fs-1.61/

Then type: ./configure

Then type: make

Then type: make install
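For reference, the complete fetch-and-build sequence from this section is:

sudo apt-get update
sudo apt-get install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support make
wget http://s3fs.googlecode.com/files/s3fs-1.61.tar.gz    # fetch the s3fs source
tar xvzf s3fs-1.61.tar.gz                                 # unpack it
cd s3fs-1.61/
./configure                                               # generate the Makefile
make                                                      # compile s3fs
make install                                              # if this fails with a permissions error, rerun it as: sudo make install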

Now we are going to create a file in the /etc/ directory. Type: sudo touch /etc/passwd-s3fs

Now change directories up toward the /etc/ directory, which is a top-level directory. Type: cd ..

This will move you up one directory level. Typing: cd ../.. lets you move up more than one level at a time.

Once at the top directory level, type: cd etc to move into the etc directory.

If you type: ls, a list of all the files in the directory will be displayed. You should see the file passwd-s3fs; this is the file that was created with the touch command. We are going to edit this file with the nano editor. Type: sudo nano passwd-s3fs

You need to copy and paste your access key and secret access key into this file. First you need to get your access and secret access keys. When you are logged into the AWS management console, near the upper-right hand corner of the screen you should see My Account/Console as a drop-down menu. Inside of this menu is a Security Credentials line, click on it. This will take you to a page that will show you your access and secret access keys.

Inside of the passwd-s3fs file, paste the access and secret access keys in the format of: access_key:secret_access_key making sure there are no spaces or carriage returns. Then save the file and exit out with ^X
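For example, the finished /etc/passwd-s3fs file holds exactly one line in that access_key:secret_access_key form; the values below are deliberately fake placeholders, not real credentials:

AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY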

We need to make the passwd-s3fs file private. Type: sudo chmod 640 /etc/passwd-s3fs

Now you need to make a directory to which the S3 bucket can be mounted. I named mine s3 and placed it in the /mnt/ directory. Type: sudo mkdir /mnt/s3

An S3 bucket can be mounted to any empty directory, e.g. you could create a directory in /home/ubuntu named mount and thus the mount point would be /home/ubuntu/mount. Note that the bucket_name will never be used in any file hierarchy once the bucket is mounted.

Your instance is now ready to mount an S3 Bucket. If you have not yet created a Bucket, go to the S3 tab in your AWS console, and create a Bucket. When you have a Bucket created, you can mount it to your instance with the command: sudo s3fs bucket_name /mnt/s3

If you have created your mount-point directory in a different place or with a different name, you will need to change the above command to suit.

There are very specific S3 Bucket naming conventions. Each Bucket in the global S3 system must have a unique bucket_name, so two users of the Amazon S3 system could not each have a Bucket with the same bucket_name. See the Amazon S3 Best Practices article to learn more about this.

If your S3 Bucket mounted successfully, now is a good time to unmount the bucket and take a snapshot of the instance so that you can spin up instances in the future that are pre-configured to mount S3 buckets.

You can unmount a bucket by typing: sudo umount /mnt/s3

Again, if you have created your mount-point directory in a different place or with a different name, you will need to change the above command to suit.

You should now be able to cd into your mount directory and see any files you have placed in your S3 bucket through the AWS S3 web interface.
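In summary, assuming the /mnt/s3 mount point used above and a hypothetical bucket name, mounting, checking and unmounting look like this:

sudo s3fs my-render-bucket /mnt/s3    # mount the bucket (the bucket name is a placeholder)
sudo ls /mnt/s3                       # list the bucket contents to confirm the mount worked
sudo umount /mnt/s3                   # unmount, e.g. before taking a snapshot of the instance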

Note: the author of this tutorial has yet to learn how to mount a bucket as a non-root user. In order to change directories into the s3 bucket, you must be the root user. Become the root user by typing: sudo bash. You can exit out of the root user by typing: exit. When calling luxconsole and pointing luxconsole to render a scene file in an s3 bucket, you must either be the root user, or the sudo command must be used in order for luxconsole to be able to read from and write back to the bucket, e.g.: sudo luxconsole /mnt/s3/lux_scene_file.lxs. If another user of this wiki can figure out how to allow non-root users to mount and read-write to buckets, please add these detailed instructions to this wiki page.

You can now render files in your S3 Bucket by directing luxconsole to them.

You can now use spot instances that have an S3 bucket mounted to them, and in the case your instance is terminated due to the spot price increasing above your bid price, you will not lose your rendered work.



Retrieving Large Files from an S3 Bucket

In the author's experience, downloading large files through the Amazon Management Console's S3 web interface has historically been very slow. If you are experiencing slow download speeds, one method for moving large files from your S3 bucket to your local machine is to zip up the files from an EC2 instance while the bucket is mounted to it, use the copy (cp) command to copy the file or zipped files from the bucket into a local directory on the instance, and then use the scp command, explained above, to move the files from the instance to your local machine. Note that the move (mv) command should be used with caution, or avoided altogether, as the files will be moved off the S3 bucket entirely; if the instance is inadvertently terminated, or, when using spot instances, terminated due to an increase in the spot price, your data will be lost.
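Sketched with placeholder names: the first command runs on the instance with the bucket mounted at /mnt/s3, and the scp is then run from your local machine as shown earlier:

sudo zip -r /home/ubuntu/renders.zip /mnt/s3/renders/    # copy-and-compress on the instance; the bucket keeps its files
scp -i ~/keys/luxrender.pem ubuntu@ec2-203-0-113-25.compute-1.amazonaws.com:/home/ubuntu/renders.zip .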



Persistent Storage Using EBS Volumes

An alternative to using S3 is to create and attach an EBS volume to an instance. If your render files are located inside an EBS volume and the instance is inadvertently terminated, your files will persist. However, to access them, you must spin up a new instance and attach the EBS volume to the newly spun-up instance.

To use EBS Volumes, you must first create a new volume.

Inside of the Amazon EC2 console, click on Volumes under ELASTIC BLOCK STORE. Then click on Create Volume. A hovering window will pop up and allow you to input the size of the volume, its availability zone, as well as any snapshots you wish to use that have pre-loaded data in them. We want to create a blank volume that we can fill with render files, so choose No Snapshot.

After you have spun up your instance, take note of the instance name/number that Amazon assigns to it. Inside of the Volumes window in the Console, right-click on the EBS volume that you want to mount to your instance and choose Attach Volume and select the instance from the drop down list.

Once you ssh into the instance, cd into the /dev/ directory to confirm that you can see the attached volume, xvdf. First we need to format the volume. However, if the volume contains data that you need to access, do not format it, as formatting will erase all data on the volume. If you just need to mount the volume, skip this step.

Format the volume with the command: sudo mkfs -t ext3 /dev/xvdf

Next you need to create a directory to mount the volume to. Type: sudo mkdir /mnt/data-store

Note: You can name the mount point whatever you want. Instead of data-store, you could name it LuxScenes.

And finally mount the volume by typing the command: sudo mount /dev/xvdf /mnt/data-store

Any files that you place inside of the directories located within the mount directory will still exist on the EBS volume in the event that the instance is terminated.
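The whole format-and-mount sequence from this section, run on the instance after the volume has been attached, is:

ls /dev/ | grep xvdf                     # confirm the attached volume is visible
sudo mkfs -t ext3 /dev/xvdf              # format it -- only for a new, empty volume, as this erases everything
sudo mkdir /mnt/data-store               # create a mount point (name it whatever you like)
sudo mount /dev/xvdf /mnt/data-store     # mount the volume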



Network Rendering with Amazon's EC2 Service

Although it is possible to network EC2 instances to your local machine to act as server nodes for rendering, this practice is highly discouraged, as the internet connection to your home or office is usually not stable enough to complete long renders for more than a few hours. Besides this, there will usually be a speed bottleneck between your EC2 instances and your local machine, as the FILM files that are passed from the server nodes to the master are quite large.

If you need the expanded computing power that can be gained by networking multiple machines together, it is recommended that you spin up multiple EC2 instances, SSH into each one, set one of them to be the master and the remaining instances to be slaves, and do all of your network rendering inside the EC2 network, as sketched below. If you wish to use your local machine(s) along with EC2 instances to render a single image, it is recommended that you merge the FILM file from EC2 with the FILM file from your local machine(s) once the desired SPP has been achieved.
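As a rough sketch of that master/slave setup: luxconsole has a network server mode (the -s flag, which listens on port 18018, the port opened in the security group earlier) and a -u flag on the master for adding each slave's address. Check luxconsole --help on your build to confirm the exact flags; the host names below are placeholders:

./luxconsole -s                                                           # on each slave instance: run as a network render server
./luxconsole my_scene.lxs -u slave1-private-DNS -u slave2-private-DNS     # on the master: render, pulling in both slaves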


Back to the Wiki Index page