
Continuous Deployment in Multiple Environments using Packer

Packer is an image creation tool used for automating the process of building machine images across various on-prem and cloud environments.

Packer is a great aid for teams whose Production environment is hosted in the cloud while the Dev environments are hosted on-prem. It ensures that the Dev and Prod environments are as similar as possible. And if at any time we have to set up the entire infrastructure in another account or location, we don’t have to go through the painful process of installation and configuration all over again.

With Packer, we are able to set up instances and install the required tools and software in minutes rather than hours.

With the following steps, you’ll be able to build and deploy images in both AWS and Azure Cloud Environments.

Installation of Packer –

  1. Download the appropriate package from the Packer downloads page

    $ wget https://releases.hashicorp.com/packer/<version>/packer_<version>_linux_amd64.zip
  2. Unzip the package.
    $ unzip packer_<version>_linux_amd64.zip
  3. Set the environment path
    $ vim ~/.bash_profile
    Append ‘:/path/to/packer’ to the end of the $PATH variable
    $ exec $SHELL
  4. Verify the installation

    $ packer


    Usage: packer [--version] [--help] <command> [<args>]
    Available commands are:
        build       build image(s) from template
        fix         fixes templates from old versions of packer
        inspect     see components of a template
        validate    check that a template is valid
        version     Prints the Packer version

    Note – If you face an issue such as permission denied, it may be a conflict with the /usr/sbin/packer binary that cracklib ships on RHEL-based systems; try renaming the Packer executable to some other name – preferably packer.io

Template –

Now that Packer is installed, we need to build an image. Packer uses JSON templates, and a template has three important parts – Variables, Builders and Provisioners.

Variables – This is the block where we define the custom variables.
Builders – This is the block where we define all the required AMI Parameters.
Provisioners – This is the block where we specify what tools or services we need to install before the AMI gets created.

A basic template will have the skeleton form –

   {
      "variables": {...},
      "builders": [...],
      "provisioners": [...]
   }

Example Template to create a Red Hat instance with Java, Nginx, Node, MySQL and Firewall pre-installed on AWS EC2 instance –

  {
    "variables": {
        "aws_access_key": "",
        "aws_secret_key": ""
    },
    "builders": [{
        "type": "amazon-ebs",
        "access_key": "{{user `aws_access_key`}}",
        "secret_key": "{{user `aws_secret_key`}}",
        "region": "us-east-1",
        "source_ami": "ami-011b3ccf1bd6db744",
        "instance_type": "t2.micro",
        "ssh_username": "ec2-user",
        "ami_name": "test-ami {{timestamp}}"
    }],
    "provisioners": [{
        "type": "shell",
        "inline": [
            "sleep 30",
            "sudo yum install wget -y",
            "sudo yum install java-1.8.0 java-1.8.0-openjdk-devel -y",
            "sudo rpm -ivh <repo-package-url>",
            "sudo yum install nginx -y",
            "sudo yum install gcc-c++ make -y",
            "curl -sL <nodesource-setup-script-url> | sudo -E bash -",
            "sudo yum install nodejs -y",
            "wget <mysql-repo-rpm-url>",
            "sudo rpm -ivh mysql80-community-release-el7-1.noarch.rpm",
            "sudo yum install mysql-server -y",
            "sudo yum install firewalld -y"
        ]
    }]
  }


All this template does is create a t2.micro AWS instance from the specified AMI ID in the us-east-1 region. In the provisioners section, we halt the process for 30 seconds, giving the EC2 instance enough time to initialize, and then specify which tools and software need to be installed before Packer takes an image of that instance.

Build an image –

Now that we have our template ready, we have to inspect and validate the syntax and configuration of the template.

This command shows the components of the template.

$ packer inspect packer.json

This command checks that the syntax and configuration of the template are valid.

$ packer validate packer.json

After the inspection and validation are successful, we have to build the Packer template.

$ packer build packer.json

We can also run the build as a background process using ‘&’ and capture the output in a nohup.out file.
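For example, a background build along these lines (a sketch – packer.json is assumed to be in the current directory):

```shell
#!/bin/sh
# Start the Packer build in the background; nohup detaches it from
# the terminal and appends all output to ./nohup.out
# (skips gracefully if packer or the template is not present)
if command -v packer >/dev/null 2>&1 && [ -f packer.json ]; then
    nohup packer build packer.json &
    echo "build started with PID $!"
else
    echo "packer or packer.json not found"
fi
```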

Deploying on AWS –

AWS provides us with flexible means of supplying credentials for authentication. Credentials are usually provided as an access key ID and secret key, along with the region and builder type. They look like

    "access_key": "AKIAIOSFODNN7EXAMPLE",
    "secret_key": "wJalrXUtnXXXX/K7MXXXX/bPxRfiCYEXAMPLEKEY",
    "region": "us-east-1",
    "type": "amazon-ebs"

Static credentials –
Credentials are provided via the command line.

$ packer build \
-var 'aws_access_key=<YOUR ACCESS KEY>' \
-var 'aws_secret_key=<YOUR SECRET KEY>' \
packer.json

Environment Variables –
The second method of passing credentials is to set them as environment variables. Note that when we use this method, the environment variables override the AWS credentials file.

$ export AWS_ACCESS_KEY_ID="<YOUR ACCESS KEY>"
$ export AWS_SECRET_ACCESS_KEY="<YOUR SECRET KEY>"
$ export AWS_DEFAULT_REGION="us-west-2"
$ packer build packer.json

Credentials File –
You can use the AWS credentials file to specify the credentials. The default location of the credentials file is $HOME/.aws/credentials on Linux and %USERPROFILE%\.aws\credentials on Windows. If Packer fails to detect credentials in-line or in environment variables, it will check this location. We can also specify a different location for the credentials file by setting the environment variable AWS_SHARED_CREDENTIALS_FILE. The format of the credentials file is

 [default]
 aws_access_key_id=<YOUR ACCESS KEY>
 aws_secret_access_key=<YOUR SECRET KEY>

IAM Task or Instance Role –
Packer will use the credentials provided by the task’s or instance’s IAM role. This is the preferred approach, as you don’t have to hard-code any credentials. You can change the IAM role policy as per your requirements; the following policy document (abbreviated – see the Packer documentation for the full minimal set of EC2 actions) shows the shape of the permissions Packer needs to function smoothly.

  {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:RunInstances",
            "ec2:StopInstances",
            "ec2:TerminateInstances",
            "ec2:CreateImage",
            "ec2:RegisterImage",
            "ec2:DeregisterImage",
            "ec2:DescribeImages",
            "ec2:DescribeInstances",
            "ec2:CreateKeyPair",
            "ec2:DeleteKeyPair",
            "ec2:CreateSecurityGroup",
            "ec2:DeleteSecurityGroup",
            "ec2:CreateSnapshot",
            "ec2:CreateTags"
        ],
        "Resource": "*"
    }]
  }
Packer for Azure –

Packer can build and create images in Azure as well. For that, we can either use the Azure CLI to get the required information or use the Azure portal.

If you want to use the Azure CLI, make sure it is installed. The following steps show how to install the Azure CLI on your server.

Steps to install Azure CLI

  1. Import the Microsoft repository key
    $ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
  2. Create local azure-cli repository information
    $ sudo sh -c 'echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo'
  3. Now you can install the CLI with a yum install command
    $ sudo yum install azure-cli
  4. Check the installation
    $ az login

If the CLI has permission to open a browser, it will do so and prompt you to log in; otherwise, open the login page in a browser yourself and follow the instructions.

Once the Azure CLI is installed, create a ResourceGroup with the following command.

$ az group create -n <resourceGroupName> -l eastus

Next, we need to get the Azure Credentials like Client ID, Tenant ID and Client Secret. We can get those with the following command

$ az ad sp create-for-rbac --query "{ client_id: appId, client_secret: password, tenant_id: tenant }"

Next, we need the Azure subscription id

$ az account show --query "{ subscription_id: id }"

After getting the required credentials, we can now pass them in the builders’ section of the template.

"builders": [{
    "type": "azure-arm",
    "client_id": "<YOUR CLIENT ID>",
    "client_secret": "<YOUR CLIENT SECRET>",
    "tenant_id": "<TENANT ID>",
    "subscription_id": "<SUBSCRIPTION ID>"
}]
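Instead of hard-coding the credentials in the template, they can also be passed on the command line with -var. A minimal sketch, assuming the template declares client_id, client_secret, tenant_id and subscription_id as user variables and lives in packer.json:

```shell
#!/bin/sh
# Fill these in from the output of the two az commands above
CLIENT_ID="<YOUR CLIENT ID>"
CLIENT_SECRET="<YOUR CLIENT SECRET>"
TENANT_ID="<TENANT ID>"
SUBSCRIPTION_ID="<SUBSCRIPTION ID>"

# Pass the credentials as user variables at build time
# (skips gracefully if packer or the template is not present)
if command -v packer >/dev/null 2>&1 && [ -f packer.json ]; then
    packer build \
        -var "client_id=$CLIENT_ID" \
        -var "client_secret=$CLIENT_SECRET" \
        -var "tenant_id=$TENANT_ID" \
        -var "subscription_id=$SUBSCRIPTION_ID" \
        packer.json
else
    echo "packer or packer.json not found"
fi
```

This keeps secrets out of the template file, which can then be committed to version control safely.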

Example Template –

An example template that builds images with similar installations and configurations across both the Azure and AWS environments –

{
  "variables": {
      "aws_access_key": "",
      "aws_secret_key": ""
  },
  "builders": [{
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}",
      "region": "us-east-1",
      "source_ami": "ami-011b3ccf1bd6db744",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "<your-own-ami-name> {{timestamp}}"
  },
  {
      "type": "azure-arm",
      "client_id": "<YOUR CLIENT ID>",
      "client_secret": "<YOUR CLIENT SECRET>",
      "tenant_id": "<TENANT ID>",
      "subscription_id": "<SUBSCRIPTION ID>",
      "managed_image_resource_group_name": "<CLI CREATED RESOURCEGROUP NAME>",
      "managed_image_name": "<NEW IMAGE NAME>",
      "os_type": "Linux",
      "image_publisher": "RedHat",
      "image_offer": "RHEL",
      "image_sku": "7.4",
      "azure_tags": {
          "dept": "DevOps",
          "task": "Image"
      },
      "location": "Central US",
      "vm_size": "Standard_A0"
  }],
  "provisioners": [{
      "type": "shell",
      "script": "/path/to/the/installation/script",
      "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"
  },
  {
      "type": "shell",
      "script": "/path/to/the/installation/script",
      "inline_shebang": "/bin/sh -x"
  }]
}
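The provisioner’s script field points at an ordinary shell script on the machine running Packer. A minimal sketch of such an installation script – the script itself is an assumption, with package names taken from the AWS example earlier; the RUN_INSTALLS guard is added so it can be dry-run safely outside a Packer build:

```shell
#!/bin/sh
# install.sh – run by Packer inside the temporary build instance.
# Set RUN_INSTALLS=1 (e.g. via the provisioner's environment) to
# actually install; without it the script only reports a dry run.
if [ "${RUN_INSTALLS:-0}" = "1" ]; then
    sleep 30   # give the fresh instance time to finish initializing
    sudo yum install -y wget
    sudo yum install -y java-1.8.0 java-1.8.0-openjdk-devel
    sudo yum install -y nginx
    sudo yum install -y firewalld
else
    echo "dry run - set RUN_INSTALLS=1 to install packages"
fi
```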

Now we’ll create a Jenkins job to run the build.

Create a new Jenkins ‘Freestyle Project’ job and name it accordingly.

Configuration – 

Add the link to your git repository


In the post-build steps, add the commands that will run Packer on your control server.
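For a freestyle job, an ‘Execute shell’ step along the following lines would validate and then build the template checked out into the workspace (the template name is an assumption):

```shell
#!/bin/sh
# Jenkins 'Execute shell' step: validate the template first, then
# build it; WORKSPACE is set by Jenkins at job runtime
cd "${WORKSPACE:-.}" || exit 1
if command -v packer >/dev/null 2>&1 && [ -f packer.json ]; then
    packer validate packer.json && packer build packer.json
else
    echo "packer or packer.json not found"
fi
```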

Conclusion – Packer is a great tool for automating image creation across multiple environments, and by configuring a Jenkins job we can build images and trigger those builds automatically.
