Automated Amazon EC2 Cloud deployments with openQRM on Debian

This HowTo describes how to manage Public and Hybrid Cloud deployments with openQRM. As the deployment manager for Amazon EC2 and its API-compatible derivatives (e.g. Eucalyptus), openQRM is capable of fully automating Instance provisioning, and it adds value on top of the provider's cloud features: automated application deployment via Puppet, automated monitoring via Nagios, and high availability on the infrastructure level. The whole workflow of Instance deployment in openQRM is exactly the same as for local resources in the internal IT environment.


Requirements

  • One physical Server; alternatively the installation can also be done within a Virtual Machine
  • at least 1 GB of memory
  • at least 100 GB of disk space
  • optionally VT (Virtualization Technology) enabled in the system's BIOS so that the openQRM Server can run KVM Virtual Machines later

Install openQRM on Debian

  • Install a minimal Debian on a physical Server
  • Install and initialize openQRM

A detailed Howto about the above initial starting point is available at "Install openQRM on Debian"

For this howto we have used the same openQRM server as for the howto 'Virtualization with KVM and openQRM on Wheezy'.
That means with this howto we are going to add functionality to an existing openQRM setup. This shows that openQRM manages all the different virtualization and deployment types seamlessly.

This means you can use either the "Install openQRM on Debian" or the "Virtualization with KVM and openQRM on Debian" howto as a starting point.

Set a custom Domain name

As the first step after the openQRM installation and initialization it is recommended to configure a custom domain name for the openQRM management network.
In this use case the openQRM Server has a private Class C IP address as set up in the previous "Howto install openQRM on Debian". Since the openQRM management network is private, any syntactically correct domain name can be used, e.g. ''.
The default domain name pre-configured in the DNS plugin is "".

Best practice is to use the 'openqrm' commandline util to set up the domain name for the DNS plugin. Please log in to the openQRM Server system and run the following command as 'root' in a terminal:

/usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v [your-domain-name]

The output of the above command will look like this:

root@debian:~# /usr/share/openqrm/bin/openqrm boot-service configure -n dns -a default -k OPENQRM_SERVER_DOMAIN -v [your-domain-name]
Setting up default Boot-Service Configuration of plugin dns

To (re)view the current configuration of the DNS plugin please run:

/usr/share/openqrm/bin/openqrm boot-service view -n dns -a default

Enabling Plugins

For this HowTo please enable and start the following plugins in the sequence below:

  • dns plugin - type Networking
  • dhcpd plugin - type Networking
  • tftpd plugin - type Networking
  • device-manager plugin - type Management
  • nfs-storage plugin - type Storage
  • lvm-storage plugin - type Storage
  • nagios3 plugin - type Monitoring
  • puppet plugin - type Deployment
  • sshterm plugin - type Management
  • hybrid-cloud plugin - type Deployment

Hint: You can use the filter in the plugin list to find plugins by their type easily!

Install the latest Amazon EC2 Tools

Go to Plugins -> Deployment -> Hybrid-Cloud -> About

There you can find the URLs and information about the latest Amazon EC2 API- and AMI-Tools.

Here are the steps to install the Amazon EC2 Tools. Please SSH-login to the openQRM server as 'root', download the API- and AMI-Tools archives from the URLs on the About page, and then run the following commands in the download directory:

unzip ec2-ami-tools-*.zip
unzip ec2-api-tools-*.zip
mkdir /usr/local/ec2
cp -r ec2-ami-tools-*/ /usr/local/ec2/
cp -r ec2-api-tools-*/ /usr/local/ec2/
apt-get update && apt-get install default-jdk

Please notice: The version numbers may differ when newer EC2 Tools become available!

Then please add the following to the system-wide profile /etc/profile:

# EC2 Tools
export EC2_HOME=/usr/local/ec2
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/usr

The EC2 API- and AMI Tools are now installed and available in the system path.
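To make sure the new environment is actually active in your shell, here is a minimal sanity check. The path matches the profile snippet above; the check itself is just an illustration:

```shell
# Re-apply the profile additions from above for the current shell
export EC2_HOME=/usr/local/ec2
export PATH=$PATH:$EC2_HOME/bin

# Check that the EC2 tools directory is on the PATH
if echo ":$PATH:" | grep -q ":/usr/local/ec2/bin:"; then
    ec2_path_ok=yes
else
    ec2_path_ok=no
fi
echo "EC2 tools on PATH: $ec2_path_ok"
```

If the check prints 'no', re-login or source /etc/profile manually.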

Please notice: Now please log out of the openQRM server and log in again to activate the new profile settings in the environment. After re-login please restart the openQRM server to also activate the profile in its environment by running:

/etc/init.d/openqrm restart

To re-check the configuration please run:

ec2-describe-regions -O [your-aws-access-key] -W [your-aws-secret-key]

The output of the above command looks like:

REGION    eu-west-1
REGION    sa-east-1
REGION    us-east-1
REGION    ap-northeast-1
REGION    us-west-2
REGION    us-west-1
REGION    ap-southeast-1
REGION    ap-southeast-2

Configure which Amazon EC2 regions to use

Best practice is to use the 'openqrm' commandline util to set up which Amazon regions to use for the hybrid-cloud plugin. Please log in to the openQRM Server system and run the following command as 'root' in a terminal:

/usr/share/openqrm/bin/openqrm boot-service configure -n hybrid-cloud -a default -k OPENQRM_PLUGIN_HYBRID_CLOUD_REGIONS -v "eu-west-1, us-west-1"
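The value of OPENQRM_PLUGIN_HYBRID_CLOUD_REGIONS is simply a comma-separated list of region names. As a small sketch, the REGION lines from the ec2-describe-regions output can be turned into that format like this (using sample output here instead of a live API call):

```shell
# Sample ec2-describe-regions output (two lines); a live call would be:
#   ec2-describe-regions -O [your-aws-access-key] -W [your-aws-secret-key]
sample="REGION eu-west-1
REGION us-west-1"

# Extract the second column and join the lines with commas
regions=$(echo "$sample" | awk '{print $2}' | paste -sd, -)
echo "$regions"   # -> eu-west-1,us-west-1
```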

To (re)view the current configuration of the Hybrid-Cloud plugin please run:

/usr/share/openqrm/bin/openqrm boot-service view -n hybrid-cloud -a default

Create a Hybrid-Cloud Account

Go to Plugins -> Deployment -> Hybrid-Cloud -> Actions and click on 'Add new Account'

Provide an account name, the AWS Access and Secret Key, plus a description for the account. Then click on submit.

When adding the account openQRM checks that it can get access via the provided credentials. If configured correctly the account is added as seen below.

You can now easily access all kinds of Amazon EC2 functionality through the different action buttons.

Choose AMIs for deployment

Go to Plugins -> Deployment -> Hybrid-Cloud -> About

In the section 'Manage and automate public and private clouds' -> AMIs you can find a URL to some current Ubuntu AMIs. Please open the URL and find an AMI of your choice in the region of your choice. For this howto we will use an 'Ubuntu 13.04 64bit' AMI named 'ami-23a9b057'.

Please notice: Those AMIs are updated frequently so the AMI name may change!

Now go to Plugins -> Deployment -> Hybrid-Cloud -> Actions -> AMI. This will list available AMIs in the selected region.

Click on the AMI filter 'U' button and find the AMI you have selected on the Ubuntu AMI page. Click on 'Add Image' for that AMI.

This creates a new available Image object in the openQRM server.

Create a custom auto-configuration script for the EC2 Instance on S3

The Amazon EC2 integration in openQRM allows attaching a custom script to a starting Instance; the Instance then runs this script on system startup. In combination with the Puppet integration this can be used to fully pre-configure an Instance in EC2. The easiest way to create such a custom auto-configuration script is the S3 action in the account overview. It provides a file manager for S3 and allows you to easily upload files. Files set to 'public-read' permission are directly available via http. As an example we create a small bash script which simply writes some text to a file.

On your Desktop create a new file named '' with the following content:

#!/bin/bash
echo "Here custom commands are running on instance startup" > /tmp/my-custom-auto-configure.log
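The script can of course do more than write a single line. A slightly extended sketch (the log messages and recorded facts are just examples, not part of the openQRM integration):

```shell
#!/bin/bash
# Hypothetical extended auto-configuration script (runs once at instance startup)
LOG=/tmp/my-custom-auto-configure.log

# Record when and where the script ran
echo "auto-configure started at $(date -u '+%Y-%m-%d %H:%M:%S UTC')" > "$LOG"
echo "hostname: $(hostname)" >> "$LOG"
echo "kernel:   $(uname -r)" >> "$LOG"

echo "Here custom commands are running on instance startup" >> "$LOG"
```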

Now go to Plugins -> Deployment -> Hybrid-Cloud -> Actions -> S3 and create a new S3 bucket.

Click on 'Files in bucket' to list the files in the bucket.

Click on the 'Upload file' button to upload the custom '' script from your desktop.

Select the '' script from your desktop, set the File Permission to 'public-read' and submit.

The '' script is now uploaded to the S3 bucket and available via http. Please copy the URL of the uploaded script; we are going to paste it into the following 'Instance Add' dialog.

Pre-configure Nagios service checks

Now go to Plugins -> Monitoring -> Nagios3 -> Services and log in to the embedded Nagios server with the openqrm credentials.

Here is the standard Nagios configuration after installing the Debian package.

Now go to Plugins -> Monitoring -> Nagios3 -> Config -> Services and click on 'Add new Service'

Select the http service (Port 80) and click on 'Submit'

Here is the available Nagios Service check list after adding the http service

And here is the list after adding some more service checks

Create a new Instance on Amazon EC2

Go to Datacenter -> Server and click on 'Add a new Server'

Provide a name for the new server. The easiest way is to use the 'Generate name' button.

In the Resource-Select please click on 'New Resource'

Here is the list of available Resource types. Please select 'Cloud (localboot) Virtual Machine'

This forwards to the Hybrid-Cloud plugin actions. Please click on 'Instances'

The following screenshot shows the empty Instance list. Please click on 'Add Instance'

In the Instance-Add form please select the AMI, the availability zone, type, keypair and security group.

Hint: Keypairs can be managed via the 'Keypair' action, security groups can be managed with the 'Groups' action!

Click on submit when finished.

Creating a new Instance automatically creates a new resource in openQRM and forwards back into the server wizard. Please select the just created resource and click on 'Submit'

Next select the Image object created from the AMI before.

On this screen please click on 'Submit' to edit the Image parameter.

The Image-Edit form allows to set a custom password for the server.

Please notice: Normally SSH access to Amazon EC2 instances works only with a private/public keypair. Amazon EC2 keypairs can be easily managed through the 'Keypair' action. However, openQRM also allows you to simply set a password in the Image-Edit section. Setting a password there automatically sets it in the Instance AMI and enables SSH login with a password.

Here is the final Server-Edit screen. Click on 'Submit' to save the server configuration.

Configure Puppet recipes for the EC2 Instance

Go again to Datacenter -> Server and edit the just created server object.

Click on Deployment -> Puppet to add a custom Puppet recipe to the server.

Here we choose the included 'webserver' Puppet recipe which automatically installs and starts Apache.

The overview of the Puppet deployment configuration now looks like this:

Set up monitoring for the EC2 Instance

Go again to Datacenter -> Server and edit the just created server object. Click on Deployment -> Nagios3

In the Service-Edit form please select the 'http' service and click on 'Submit'

The overview of the Nagios check configuration now looks like this:

Starting the EC2 Instance

To start the configured Amazon EC2 Instance simply start its server object in Datacenter -> Server. This will create and start the Instance on the Amazon Public Cloud, apply the Image password configuration, apply the Puppet recipes, configure WebSSH and execute the custom auto-configuration script from S3 which we have attached to the Instance.

Go to Datacenter -> Server, select the newly created server object and start it

The server object is now activated and the deployment of the Amazon EC2 Instance has started.

Here is a screenshot of the Amazon EC2 console after we have started the deployment.
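Besides the EC2 console you can also follow the deployment from the command line with the installed EC2 tools. A sketch of picking the instance state out of the output (the INSTANCE line below is abbreviated sample output; for live data run ec2-describe-instances with your credentials):

```shell
# Abbreviated sample line as printed by ec2-describe-instances; a live call
# would be:
#   ec2-describe-instances -O [your-aws-access-key] -W [your-aws-secret-key]
sample_output="INSTANCE i-0123abcd ami-23a9b057 ec2-54-0-0-1.eu-west-1.compute.amazonaws.com running"

# The instance state is the last field of the INSTANCE line
state=$(echo "$sample_output" | awk '$1 == "INSTANCE" {print $NF}')
echo "instance state: $state"
```

The instance id and hostname above are placeholders; your real output will also contain more fields.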

You can now use the 'ssh' button in the server list at Datacenter -> Server to easily login to the Instance.

Please notice: Your browser will warn about the self-signed SSL certificate used for the WebSSH login! Please accept it to log in.

A quick check that the webserver is up and running:

Here is a screenshot of the embedded Nagios console with the http service check activated.

Also please re-check /tmp/my-custom-auto-configure.log on the Instance to verify that your custom script got executed.

And here is the Datacenter Dashboard after we have created the Amazon EC2 Instance.

You can now fully automate your Amazon EC2 deployments with openQRM.

Hope you enjoyed this Howto!

Add more functionalities to your openQRM Setup

To continue and further enhance your openQRM setup there are several things to do:

  • Enable the highavailability plugin to automatically gain HA for your server
  • Enable the cloud plugin for a complete Self-Service deployment of your Server and Software stack to end-users
  • Enable further Virtualization plugins and integrate remote Virtualization hosts for a fully distributed Cloud environment
  • Enable further Storage and Deployment plugins to automatically provision your Virtualization Hosts and other physical systems
  • ... and more.