According to the 2015 Gartner Magic Quadrant report, Amazon, with its Amazon Web Services, is currently the market-leading provider of IaaS services. It can also boast well-known “unicorns” (e.g., Airbnb, Netflix, Slack, Pinterest) among its clients. Beyond these success stories, relying on Amazon Web Services is becoming a popular choice for small, innovative, high-tech businesses as well.

Different AWS tools aim to address the deployment automation problem. Here, I want to focus on AWS CodeDeploy, which lets you easily deploy applications from S3 buckets (or GitHub repositories) to EC2 instances. It’s a really easy-to-use tool, but it involves the execution of repetitive manual actions.

Let’s say we have a Java application built with Gradle, and we want to use AWS CodeDeploy to deploy our application JAR from an S3 bucket to an EC2 instance. To accomplish this objective, we have to:

  1. build the application using Gradle, thus obtaining the JAR file
  2. move the JAR file into our deployment bucket on S3
  3. sign-in to the AWS Management Console
  4. open the AWS CodeDeploy console on the Deployments section
  5. choose “Create New Deployment”
  6. in the “Create New Deployment” section, insert:

    • the application name
    • the deployment group
    • the revision type
    • the revision location
    • the deployment configuration
  7. click “Deploy Now”.

Each time we have to deploy our application, we have to follow these steps, in which we either perform mechanical actions (e.g., moving a file from our local development environment to the S3 deployment bucket) or insert information known in advance. Think about having test and staging environments running on EC2 instances. It’s very likely that testing in those stages will flush out bugs, which imply code changes. In turn, any code change requires new tests, which means deploying the code to the test/staging deployment groups again, and again. Therefore, reducing the amount of “boilerplate actions” needed to accomplish a deployment can effectively streamline the overall develop-test-and-deploy process.

Since we’re using Gradle to build our application, can we use it to create tasks that simplify our lives?

The answer is definitely yes. We can write Gradle tasks that automatically perform specific kinds of actions, but in order to do so we need a tool for interacting with AWS services from scripts.

Does Amazon provide any tool serving our ends?

Again, the answer is yes. The tool we’re looking for is the AWS Command Line Interface. The AWS CLI is shipped as a Python package (so it can be installed via pip) and provides a complete set of commands for managing and controlling AWS services.

What’s the main idea, then?

Gradle can execute shell commands, so it’s possible to write very simple bash scripts that perform the mechanical actions we no longer want to do, and have Gradle execute them.

Note: if we want to automate the deployment process this way, installing and configuring the AWS CLI on our system is a prerequisite. Please refer to the Amazon guide to get your environment properly set up.
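As a quick sanity check, a small shell sketch like the following can tell whether the CLI is already available (the install and configure commands in the message are the standard pip-based setup; credentials themselves come from the Amazon guide):

```shell
# Checks whether the AWS CLI is already on the PATH; if not, reminds how
# to install and configure it (see the Amazon guide for credentials).
if command -v aws >/dev/null 2>&1; then
    AWS_STATUS="installed"
else
    AWS_STATUS="missing (install with: pip install awscli, then run: aws configure)"
fi
echo "AWS CLI: ${AWS_STATUS}"
```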

Let’s start! We want to build our distribution and move it to the right S3 bucket.

First of all, we need a Gradle task that produces the distribution file. The logic used to create the distribution may vary from case to case, so I won’t go deeper into its implementation. Long story short, let’s say we have a Gradle task named buildDistribution that packages our application with the AWS CodeDeploy scripts bundled inside.

Now, we can write a Gradle task that dependsOn buildDistribution and moves the distribution created by the buildDistribution task to the right S3 bucket.

// Creates a new bundle and copies it to S3
task buildAndCopyDistribution(type: Exec, dependsOn: [buildDistribution]) {
    // Path of the move.sh script, relative to the project root
    def move = "automation/move.sh"
    // Distribution name, based on the jar section
    def distName = "${jar.baseName}-${jar.version}.zip"
    // Path of the distribution file, relative to the project root
    def localDist = "$buildDir/distributions/${distName}"

    // Moves the distribution file to the right S3 bucket
    executable "bash"
    args move, localDist
}

The task definition is pretty straightforward. First of all, the task is defined as an Exec task, meaning that it “executes a command line process”, and, of course, it depends on a previous execution of the buildDistribution task. Then, the task defines two important variables:

  • the move variable contains the path to the move.sh script. The reference folder for the path is the one in which the build.gradle file is contained. The move.sh script will perform the actual move of the file to the S3 bucket.
  • the localDist variable contains the path to the distribution file. Again, the reference folder is the one in which the build.gradle file is contained.

Finally, the task executes the move.sh bash script with the localDist variable as input.
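To make the naming convention concrete, here is a tiny shell sketch that recomputes the same path the task builds from the jar section (cn-app and 1.0 stand in for the real jar.baseName and jar.version):

```shell
# Stand-ins for jar.baseName and jar.version (illustrative values)
BASE_NAME="cn-app"
VERSION="1.0"
# Same composition as distName / localDist in the Gradle task
DIST_NAME="${BASE_NAME}-${VERSION}.zip"
LOCAL_DIST="build/distributions/${DIST_NAME}"
echo "${LOCAL_DIST}"
```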

What does the move.sh script actually do?

Let’s see it directly.

#!/bin/bash

AWS="/usr/local/bin/aws"

APPLICATION_NAME="cn-app"

BUCKET_NAME="coding-nights-releases"
FULL_BUCKET="s3://${BUCKET_NAME}/${APPLICATION_NAME}/"
SOURCE_FILE=$1

echo "Application Name: ${APPLICATION_NAME}"
echo "Full Bucket: ${FULL_BUCKET}"
echo "Source file: ${SOURCE_FILE}"

# Copies the distribution to the right S3 Bucket
$AWS s3 cp "${SOURCE_FILE}" "${FULL_BUCKET}"

# Checks the result of the copy and propagates failures to the caller
if [ $? -eq 0 ]; then
    echo "Copy successful!"
else
    echo "Copy failed!"
    exit 1
fi

Basically, the script defines some well-known parameters, like the application name and the release bucket name. By merging these two constants, the full S3 bucket path is obtained. Building the S3 path this way isn’t required; it simply depends on the convention you chose to organize and manage your S3 buckets, and you can of course decide to define the FULL_BUCKET variable only. Once the script knows the bucket location and the local location of the distribution file, it executes the AWS CLI command that copies a file to an S3 bucket and, finally, checks the result and returns.
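The resulting key layout can be sketched in a few lines of shell (the version number is an illustrative assumption):

```shell
# Same constants as move.sh; the bundle lands under <application>/<bundle>
APPLICATION_NAME="cn-app"
BUCKET_NAME="coding-nights-releases"
FULL_BUCKET="s3://${BUCKET_NAME}/${APPLICATION_NAME}/"
DIST_NAME="cn-app-1.0.zip"   # illustrative version
echo "Bundle will land at: ${FULL_BUCKET}${DIST_NAME}"
```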

Summing up, by executing the task buildAndCopyDistribution we package a new distribution for our application and we move it to the right S3 bucket.

Now, how can we automate the deploy of our cn-app to the deployment group cn-app-test (representing our test environment)?

The approach is the same one used to automate the transfer of the distribution bundle: we write a Gradle task that deploys our bundle to the test environment.

// Deploys a given revision to the test environment
task deployOnTest(type: Exec) {
    // Defines the path into the project of the deploy.sh script
    def deploy = "automation/deploy.sh"
    // Defines the distribution name based on the jar section.
    // deployVersion is a custom (ext) property, so any previously
    // uploaded revision can be chosen, not only the latest build.
    def distName = "${jar.baseName}-${jar.deployVersion}.zip"

    // Deploys the given distribution to the test deployment group
    executable "bash"
    args deploy, "test", distName
}

Again, we’re facing a simple task definition. First, the task is defined as an Exec task, exactly like the buildAndCopyDistribution task. Then, the task defines two important variables:

  • the deploy variable contains the path to the deploy.sh script. The reference folder for the path is the one in which the build.gradle file is contained. The deploy.sh script will perform the deploy of our bundle to the cn-app-test deployment group.
  • the distName variable contains the name of the distribution bundle.

Finally, the task executes the deploy.sh bash script, feeding it the constant string “test”, which identifies the target deployment group, and the name of the distribution bundle.

You may be wondering why the constant string is needed. Indeed, it isn’t. However, think about having multiple deployment groups, each one representing a stage in the deployment process of the same application. In this case, deploy.sh is expected to perform the very same steps for every deployment group, and we don’t want to copy and paste the same code changing only a constant string.
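A quick sketch shows how one parameterized script covers any number of stages (the staging and production groups are hypothetical here):

```shell
# The deployment group name is derived from the stage argument, so the
# very same deploy.sh serves every stage without duplication.
APPLICATION_NAME="cn-app"
for STAGE in test staging production; do
    echo "deploy.sh ${STAGE} -> deployment group ${APPLICATION_NAME}-${STAGE}"
done
```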

Finally, we’ve reached the end. What does the deploy.sh script actually do?

Let’s see it.

#!/bin/bash

AWS="/usr/local/bin/aws"

APPLICATION_NAME="cn-app"
APPLICATION_ENV=$1
DEPLOYMENT_GROUP="cn-app-${APPLICATION_ENV}"
BUCKET_NAME="coding-nights-releases"
BUNDLE_NAME=$2

echo "Application Name: ${APPLICATION_NAME}"
echo "Application Environment: ${APPLICATION_ENV}"
echo "Deployment Group: ${DEPLOYMENT_GROUP}"
echo "Bucket Name: ${BUCKET_NAME}"
echo "Bundle Name: ${BUNDLE_NAME}"

$AWS s3api head-object --bucket "${BUCKET_NAME}" --key "${APPLICATION_NAME}/${BUNDLE_NAME}"

# Checks whether the distribution bundle exists on the expected S3 bucket
if [ $? -eq 0 ]; then
    # If the bundle exists...
    # Deploys it to the right application/deployment group
    $AWS deploy create-deployment \
        --application-name "${APPLICATION_NAME}" \
        --deployment-group-name "${DEPLOYMENT_GROUP}" \
        --deployment-config-name CodeDeployDefault.OneAtATime \
        --s3-location bucket="${BUCKET_NAME}",bundleType="zip",key="${APPLICATION_NAME}/${BUNDLE_NAME}"
else
    # Reports the error and propagates the failure to the caller
    echo "Distribution not found!"
    exit 1
fi

The deploy.sh script automates the deployment of a distribution bundle stored in an S3 bucket. The automation follows three main steps:

  • first, constants and input parameters are defined, collected and combined to obtain the variables that really matter to the deployment process (the application name, the deployment group, the bucket name and the bundle name);
  • then, the existence of the distribution bundle in the expected S3 bucket is checked by executing the head-object AWS CLI command;
  • finally, if the distribution bundle exists, the deployment is performed by means of the create-deployment AWS CLI command; otherwise, an error message is returned.

You may notice that the create-deployment command requires a couple of additional pieces of information, i.e. the bundle type and the deployment configuration. It’s up to you to decide which kind of bundle you want to distribute and which deployment flow you want to follow, exactly as it’s up to you to decide the naming conventions for the application name, the bucket name and the object keys.

Is that all?

Yes, it is! We’re done with the deployment automation. Now, in order to deploy our application to the test deployment group, we just run the buildAndCopyDistribution task first and then the deployOnTest task.
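From the project root, the full cycle would look like the following sketch (the Gradle wrapper is an assumption; the DRY_RUN guard only prints the commands here instead of running them):

```shell
# DRY_RUN=true prints the commands; set it to false in a real project
DRY_RUN=true
run() { if [ "${DRY_RUN}" = true ]; then echo "would run: $*"; else "$@"; fi; }

run ./gradlew buildAndCopyDistribution   # package the app and upload it to S3
run ./gradlew deployOnTest               # trigger the CodeDeploy deployment
```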

Last but not least… why have two Gradle tasks instead of only one?

You can of course combine the tasks so that the bundle you’re currently building is immediately deployed to your target group. However, sooner or later a bug will slip out (yes, it’s sad, I know), and in that case a quick recovery may be needed. You cannot benefit from deployment automation if you only have one macro-task that builds, moves and deploys the distribution package. In the end, having two tasks gives you the chance to choose which distribution version should be shipped.