
RDV Tech Agile Partner : AWS CloudFormation – Part 5

By Olivier Robert, a Senior Consultant and DevOps Engineer at Agile Partner.

March 17, 2021

We split the infrastructure components into different stacks in Part 4. Now we'll deploy nginx (to simulate an app). But before we do, let's chat about it.

At this stage, nginx will live on the private host (the EC2 instance in the private subnet). If, for whatever reason, that instance goes down, nginx is gone.
It would be better if we had some resilience. We could deploy two instances and load balance them. Sure! That would add some resilience. We could also use an AWS Auto Scaling group and have it take care of whatever might happen to the instances we launch (one, two or more). If we distribute the instances across different Availability Zones, that gives us even more resilience. What are the chances of two Availability Zones going down at the same time? I don't know exactly, but I'd guess they're pretty slim.
The Auto Scaling group can be configured to always keep a specific number of instances up and running. It can be configured to scale up or down according to metric thresholds. A load balancer can be configured to automatically distribute traffic to an Auto Scaling group. I don't want to go too deep into the details here because the stack configuration will show it all, but know that we are going to use Availability Zones, an Auto Scaling group and an Application Load Balancer to get a good level of resilience. Elasticity will follow in Part 6.

Before you start playing with the automation, I suggest you read up on Auto Scaling and make sure you have reached the end of Part 4. Check out tag part4.3 in the GitHub repository if you have not done the steps to reach our current starting point.

First, we need to create a Launch Configuration, which is the template for our instances. The Auto Scaling group is going to use it to launch one or more EC2 instances.

It’s not that different from the EC2 setup we had in Part 4. We added the UserData section to install and start nginx when the EC2 instance boots up.

For the Auto Scaling group, we specify:

  • In which subnets instances are to be launched
  • How many instances we want launched
  • The minimum and maximum number of instances at any time
  • The Launch Configuration to use
  • The update policy: here, if we update, a new Auto Scaling group is created, and when it is ready, the previous Auto Scaling group is removed

I suggest you follow along using the GitHub repository, as I will only refer to snippets from now on. Check out tag part5.1. Copy the app.yaml file to your S3 bucket and use the new master.yaml to update the CloudFormation stack.
Here is the new Resources section:
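If you are not following in the repository, here is a rough sketch of the new resources, with illustrative names and values (the exact template is in app.yaml under tag part5.1):

  LaunchConfiguration:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: !Ref ImageId            # the CentOS 7 AMI from Part 4
      InstanceType: t2.micro
      KeyName: !Ref KeyName
      SecurityGroups:
        - !Ref PrivateHostSecurityGroup
      UserData:                        # install and start nginx at boot
        Fn::Base64: |
          #!/bin/bash
          yum install epel-release
          yum install nginx
          systemctl enable nginx
          systemctl start nginx

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      LaunchConfigurationName: !Ref LaunchConfiguration
      VPCZoneIdentifier:               # the private subnets, one per AZ
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
      DesiredCapacity: 1
      MinSize: 1
      MaxSize: 2
    UpdatePolicy:                      # replace the whole group on update
      AutoScalingReplacingUpdate:
        WillReplace: true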

Once the update is complete, the previous private host EC2 instance has been terminated and a new Auto Scaling group is live with exactly one instance.

Let's verify that nginx is running on our private host. And as you will notice… that failed. We will have to debug this. Luckily, the /var/log/cloud-init.log file is going to tell us what's wrong.


Our user data script /var/lib/cloud/instance/scripts/part-001 does not execute correctly, and for good reason: we install the EPEL package and nginx interactively. The command waits for us to say 'y': yes, do it!

We need to modify the user data script to install the packages non-interactively.

yum install -y epel-release
yum install -y nginx
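In app.yaml, the UserData section then looks something like this (a sketch; the -y flag answers yes to all prompts so the install runs unattended):

      UserData:
        Fn::Base64: |
          #!/bin/bash
          # -y makes yum non-interactive
          yum install -y epel-release
          yum install -y nginx
          systemctl enable nginx
          systemctl start nginx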

Re-upload the app.yaml file to S3 and update the master stack in CloudFormation with the same master file ("Use current template").
Quick check on the private host to make sure the change is working as expected.

The devil is in the details. I hope you did try to debug this with me; rest assured, you will run into this kind of trouble sooner or later. This is tag part5.2 for the lazy folks in the back of the room.

What we need to do now is expose our "app" to the outside world via an AWS Application Load Balancer.

Without knowing anything about load balancers, I know one thing for sure: I will need a security group.

I can limit access with a CIDR and introduce a new parameter: ALBAccessCIDR. I will set the default value to 0.0.0.0/0 (the entire world). I have already added port 443 for TLS later down the road.
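As a sketch, assuming the VPC reference from the earlier parts, the parameter and the security group could look like this:

Parameters:
  ALBAccessCIDR:
    Description: CIDR range allowed to connect to the load balancer
    Type: String
    Default: 0.0.0.0/0

Resources:
  ALBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Access to the application load balancer
      VpcId: !Ref VPC
      SecurityGroupIngress:
        # plain HTTP now, TLS later down the road
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: !Ref ALBAccessCIDR
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: !Ref ALBAccessCIDR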

We will use an Application Load Balancer, a listener (on port 80) and a target group.

The listener on the load balancer will forward incoming requests (on port 80) to a target group (on port 80), which is linked to the Auto Scaling group, and thus to our instance(s).

Here is how the Target group is linked to the Auto Scaling group.
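In CloudFormation this is a single property on the Auto Scaling group; a sketch (WebAppTargetGroup is an illustrative name, defined below):

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # ...properties from earlier...
      TargetGroupARNs:
        - !Ref WebAppTargetGroup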

And here are the ALB resources.
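Sketched with the same illustrative names (the public subnets are where the internet-facing ALB lives):

  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
        - !Ref PublicSubnet1
        - !Ref PublicSubnet2
      SecurityGroups:
        - !Ref ALBSecurityGroup

  HTTPListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ApplicationLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref WebAppTargetGroup

  WebAppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VPC
      Port: 80
      Protocol: HTTP
      HealthCheckPath: /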


We need to integrate the ALBAccessCIDR parameter in the master.yaml file.
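In master.yaml, that means declaring the parameter and passing it down to the nested app stack; roughly (the stack resource name and TemplateURL are placeholders):

Parameters:
  ALBAccessCIDR:
    Type: String
    Default: 0.0.0.0/0

Resources:
  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://<your-bucket>.s3.amazonaws.com/app.yaml
      Parameters:
        ALBAccessCIDR: !Ref ALBAccessCIDR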

And we need to change our private host security group to accept connections from the load balancer on port 80 (and 443 later).
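On the private host security group, the ingress rule references the load balancer's security group instead of a CIDR; a sketch:

  PrivateHostSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Access to the private host
      VpcId: !Ref VPC
      SecurityGroupIngress:
        # only the load balancer may connect on port 80
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          SourceSecurityGroupId: !Ref ALBSecurityGroup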

Now we can upload our modified app.yaml file to the S3 bucket and update the master stack ("Replace current template").

In the app stack Outputs, we have the DNS name of the load balancer. We can test the connection.
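For example, with curl (replace the placeholder with the DNS name from the Outputs):

curl -i http://<your-load-balancer-dns-name>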

We can see as well that the target group we created has one healthy instance from the Auto Scaling group.

Who puts an app out on plain HTTP today? Nobody! We won't either. We will add a secure listener, a secure listener certificate and a secure target group. When we are done, we will have an end-to-end secured connection.

But we need to change the nginx configuration as well. Right now it's bare bones; we need to, at least, add a vhost for plain and secure hostname-based connections.

At this stage, we can switch to a pre-baked AMI. This will free us from scripting and installing everything for each new instance the Auto Scaling group launches for us. To prepare the new AMI, ssh into the private host and add the following nginx configuration.
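The exact snippet is in the repository; as a sketch, a minimal vhost for plain and TLS connections could look like this (the file path and certificate locations are assumptions, adapt them to your setup):

# /etc/nginx/conf.d/app.conf
server {
    listen 80;
    server_name cfntestbob.agilepartner.net;    # use your own DNS name
    root /usr/share/nginx/html;
}

server {
    listen 443 ssl;
    server_name cfntestbob.agilepartner.net;    # use your own DNS name
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;   # your certificate
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;     # your private key
    root /usr/share/nginx/html;
}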

Of course, adapt the nginx configuration with your server name, and upload and replace the certificate and the private key with yours.

Create an AMI in the AWS EC2 console.


Get the image ID.
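You can copy it from the EC2 console, or query it with the AWS CLI, for example:

aws ec2 describe-images --owners self \
    --query 'Images[].[ImageId,Name]' --output table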

You will need a certificate: if you don't have one, you can create one for free with Let's Encrypt. Google it, it's very easy; I suggest using a Docker container. Use whatever DNS name you want, I'm going with cfntestbob.agilepartner.net. You could use a DNS server and create a CNAME to the load balancer, but I am just going to change my local /etc/hosts and use one of the load balancer's IPs to verify things are working. This is just to keep it in the realm of what you could do at home.

Replace the image ID of the CentOS 7 AMI with your pre-baked AMI ID in the app.yaml file. And remove the UserData part, which we don't need anymore.

Update the private host security group to allow connections from the load balancer on port 443, and add the secure web app target group to the Auto Scaling group, or the target group will have no targets.
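A sketch of those changes in app.yaml (reusing the illustrative names from above):

  SecureWebAppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VPC
      Port: 443
      Protocol: HTTPS
      HealthCheckProtocol: HTTPS

  # on the Auto Scaling group, register both target groups:
      TargetGroupARNs:
        - !Ref WebAppTargetGroup
        - !Ref SecureWebAppTargetGroup

  # and an extra ingress rule on the private host security group:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          SourceSecurityGroupId: !Ref ALBSecurityGroup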

We need the certificate in two places: on the private host (it's already baked into our AMI) and on the load balancer. Upload your certificate to ACM in the region you are currently using. We can then link the certificate to the load balancer with its ARN.

We need a certificate parameter in both the master.yaml and the app.yaml files.
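Sketched out (CertificateArn is an illustrative parameter name), the parameter and the secure listener that uses it could look like this:

Parameters:
  CertificateArn:
    Description: ARN of the certificate imported into ACM
    Type: String

Resources:
  HTTPSListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ApplicationLoadBalancer
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: !Ref CertificateArn
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref SecureWebAppTargetGroup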

And we are ready to test. Check out tag part5.4 if you want to verify your work or if you want to give it a run.

Note that we use the imported certificate's ARN to update the parameter during the master stack update.

Time to test the end-to-end TLS connection.

We need to find an IP address of the load balancer. Use the load balancer's DNS name.
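For example (replace the placeholder with the DNS name from the stack Outputs):

dig +short <your-load-balancer-dns-name>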


Use one of the IPs and put it in your /etc/hosts with your chosen DNS name.

52.211.141.243 cfntestbob.agilepartner.net

Check out tag part5.4 to reach this point in the tutorial.

Join me in Part 6, where we make a few little adjustments for the finale.


