Application on ECS #1: Deploying ECS Fargate Application

In this article we will build and deploy a simple NestJS application on ECS Fargate. To do that, we will upload a Docker image to our ECR repository. Then we’ll create a simple VPC to put our application in. Next, we will create an ECS Cluster which will spin up a Fargate Service from a Task Definition that uses the aforementioned Docker image. Finally, we will set up a Load Balancer listening on port 80 and forwarding all traffic to a Target Group, which will have our ECS Task instances automatically registered as its targets.

You can find the finished code on GitHub.

ECR Stack

First off, we need a repository for the Docker image of our application. We need to do this first because the ECS Service will later pull this image in order to start a task.

import { RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { IRepository, Repository } from 'aws-cdk-lib/aws-ecr';

export class EcrStack extends Stack {
  public readonly repository: IRepository;

  constructor(app: Construct, id: string, props: StackProps) {
    super(app, id, props);
    this.repository = new Repository(this, 'exanubes-repository', {
      imageScanOnPush: false,
      removalPolicy: RemovalPolicy.DESTROY,
    });
  }
}

All we do here is create an ECR repository. The imageScanOnPush option would scan the image for vulnerabilities on every push, but we don’t need that right now so it’s turned off. The removalPolicy is set so that the repository gets deleted along with the stack instead of being retained.

Now we can put it in the bin file…

const app = new cdk.App();
const ecr = new EcrStack(app, EcrStack.name, {});

…and deploy

npm run build && npm run cdk:deploy -- --all

Uploading a Docker Image

Now, if you go to ECR in the AWS Console and check your repository, you should see a View push commands button in the top right corner. Go to your application directory where you have a Dockerfile – or use the Nest app from the repo – and follow those instructions with one caveat. We will be deploying ECS Fargate with the ARM64 architecture as it is cheaper, so unless you have a computer with an ARM64 processor, e.g., Apple Silicon, you will have to change the second step to:

docker buildx build --platform="linux/arm64" -t <image-tag> .
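
For reference, the whole push sequence from the console – with the buildx caveat applied – looks roughly like the following. Treat the account id, region, repository name and image tag as placeholders and substitute the exact values shown in your own push commands:

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com
docker buildx build --platform="linux/arm64" -t <image-tag> .
docker tag <image-tag>:latest <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:latest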

When the push succeeds, you should see your Docker image listed in your repository in the ECR Console.

VPC Stack

Before getting started on the ECS stack, we first need to create a VPC to put it in. We could – of course – use the default VPC for this as well.

import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { IVpc, SubnetType, Vpc } from 'aws-cdk-lib/aws-ec2';

export class VpcStack extends Stack {
  public readonly vpc: IVpc;

  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);
    this.vpc = new Vpc(this, 'exanubes-vpc', {
      cidr: '10.100.0.0/16',
      maxAzs: 2,
      subnetConfiguration: [
        {
          name: 'public-1',
          cidrMask: 24,
          subnetType: SubnetType.PUBLIC,
        },
        {
          name: 'private-1',
          cidrMask: 24,
          subnetType: SubnetType.PRIVATE_WITH_NAT,
        },
      ],
    });
  }
}

We’re defining a 10.100.0.0/16 CIDR block here – this means the first 16 bits are locked and the remaining 16 bits are available for us to use, so we have 2^16 = 65,536 IP addresses at our disposal. We’re selecting two AZs to have high availability – one would also be fine for this – and defining one public and one private subnet for each AZ, so we will have 4 subnets in total.

Worth noting is that we assigned the VPC to a property on the class instance; that’s because we will want to pass that VPC to our ECS Cluster later.

ECS Stack

First off, we need to create a cluster, which is just a logical grouping of tasks and services. All tasks and services run on infrastructure that is registered to a cluster, so when we want to separate resources we deploy multiple clusters.

interface Props extends StackProps {
  vpc: IVpc;
  repository: IRepository;
}

const CONTAINER_PORT = 8081;

export class ElasticContainerStack extends Stack {
  constructor(scope: Construct, id: string, private readonly props: Props) {
    super(scope, id, props);
    const cluster = new Cluster(this, 'exanubes-cluster', {
      vpc: props.vpc,
      clusterName: 'exanubes-cluster',
      containerInsights: true,
    });
  }
}

Here, we’re creating a cluster and placing it inside the VPC we created earlier. Container Insights collects CloudWatch performance metrics and logs which we can later aggregate and analyze to see how our containers perform.

const albSg = new SecurityGroup(this, 'security-group-load-balancer', {
  vpc: props.vpc,
  allowAllOutbound: true,
});

const loadBalancer = new ApplicationLoadBalancer(this, 'exanubes-alb', {
  vpc: props.vpc,
  loadBalancerName: 'exanubes-ecs-alb',
  internetFacing: true,
  idleTimeout: Duration.minutes(10),
  securityGroup: albSg,
  http2Enabled: false,
  deletionProtection: false,
});

Next, we need an Application Load Balancer (ALB) and a security group (SG) for it. CDK would create a default SG, but we will later need to allow traffic from it to the ECS Service, so I prefer to create it explicitly. The ALB is also placed inside our VPC and – most importantly – needs to be open to the internet, i.e., internet facing. Deletion protection ensures that no one removes the load balancer by accident, as protection first has to be disabled before the load balancer can be deleted.

const httpListener = loadBalancer.addListener('http listener', {
  port: 80,
  open: true,
});

const targetGroup = httpListener.addTargets('tcp-listener-target', {
  targetGroupName: 'tcp-target-ecs-service',
  protocol: ApplicationProtocol.HTTP,
  protocolVersion: ApplicationProtocolVersion.HTTP1,
});

When users visit our domain they will actually hit the load balancer, which then needs to know how to reach our application. We achieve this with listeners and target groups. A listener tells the load balancer what port to expect traffic on and what to do with it – forward it, redirect it or return a fixed response. We can also define rules so that, depending on headers, request methods, paths or query strings, the load balancer routes to a different target group. The target group, in turn, routes requests to its registered targets – which in our case will be the ECS Fargate Tasks.
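
To illustrate the rules mentioned above – we don’t need any in this stack – routing a path pattern to a different target group could look something like the sketch below, where apiService is a hypothetical second ECS service:

// Hypothetical example: send /api/* traffic to a separate target group.
// ListenerCondition comes from aws-cdk-lib/aws-elasticloadbalancingv2.
httpListener.addTargets('api-listener-target', {
  priority: 10, // rules with lower priority values are evaluated first
  conditions: [ListenerCondition.pathPatterns(['/api/*'])],
  protocol: ApplicationProtocol.HTTP,
  targets: [apiService], // placeholder for another ECS service
});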

const taskDefinition = new FargateTaskDefinition(
  this,
  'fargate-task-definition',
  {
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
      operatingSystemFamily: OperatingSystemFamily.LINUX,
    },
  }
);
const container = taskDefinition.addContainer('web-server', {
  image: EcrImage.fromEcrRepository(props.repository),
});
container.addPortMappings({
  containerPort: CONTAINER_PORT,
});

Now it’s finally time to create our Task Definition. It is similar to a Docker image in that a Docker image is an instruction manual on how to build and run a container, while a Task Definition is a manual on how to run a task: which containers to use and how many of them, how much CPU and memory to allocate, and so on. A task definition is also where we define what architecture – or runtime platform – we’d like to use. Because it’s significantly cheaper, we opt for Linux on an ARM64 machine.
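
We don’t set the task size above, so CDK falls back to its defaults. If we wanted to pick the CPU and memory explicitly, the task definition could look like this – the values below are only an example, not something this article’s stack requires:

const taskDefinition = new FargateTaskDefinition(
  this,
  'fargate-task-definition',
  {
    cpu: 512, // 0.5 vCPU – example value
    memoryLimitMiB: 1024, // 1 GB – example value, must be a valid pairing with cpu
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
      operatingSystemFamily: OperatingSystemFamily.LINUX,
    },
  }
);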

When adding the container, we also have to tell ECS what port it’s going to run on – this should be the same port we expose in the Dockerfile.
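
On the application side this simply means listening on that port. A minimal NestJS bootstrap could look roughly like the sketch below, assuming the app from the repo listens on port 8081 and has an AppModule:

// main.ts – hypothetical minimal bootstrap
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  // Must match CONTAINER_PORT in the task definition and EXPOSE in the Dockerfile.
  await app.listen(8081);
}
bootstrap();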

const securityGroup = new SecurityGroup(this, 'http-sg', {
  vpc: props.vpc,
});
securityGroup.addIngressRule(
  Peer.securityGroupId(albSg.securityGroupId),
  Port.tcp(CONTAINER_PORT),
  'Allow inbound connections from ALB'
);
const fargateService = new FargateService(this, 'fargate-service', {
  cluster,
  assignPublicIp: false,
  taskDefinition,
  securityGroups: [securityGroup],
  desiredCount: 1,
});

Nearing the end, we can finally create an ECS Service. The difference between tasks and services is that a task does a finite job and then exits, whereas a service is meant for long-running workloads like a web server. We need to define which cluster the service belongs to and what task definition it should use to spin up tasks. We can also opt out of assigning public IP addresses, as we won’t be needing those. Desired count is the number of tasks that the service will attempt to keep running at all times. We choose 1 to keep costs low – make sure you don’t put some ridiculously large number here, as it could cost you a lot of money.

targetGroup.addTarget(fargateService);

Last but not least, a very important step: we’re adding the ECS Service as a target of the target group. Thanks to this, the target group will automatically register all new Tasks and make them available to the load balancer and, thus, the end user.

All of ElasticContainerStack

import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { IVpc, Peer, Port, SecurityGroup } from 'aws-cdk-lib/aws-ec2';
import { IRepository } from 'aws-cdk-lib/aws-ecr';
import {
  Cluster,
  CpuArchitecture,
  EcrImage,
  FargateService,
  FargateTaskDefinition,
  OperatingSystemFamily,
} from 'aws-cdk-lib/aws-ecs';
import {
  ApplicationLoadBalancer,
  ApplicationProtocol,
  ApplicationProtocolVersion,
} from 'aws-cdk-lib/aws-elasticloadbalancingv2';

interface Props extends StackProps {
  vpc: IVpc;
  repository: IRepository;
}

const CONTAINER_PORT = 8081;

export class ElasticContainerStack extends Stack {
  constructor(scope: Construct, id: string, private readonly props: Props) {
    super(scope, id, props);
    const cluster = new Cluster(this, 'exanubes-cluster', {
      vpc: props.vpc,
      clusterName: 'exanubes-cluster',
      containerInsights: true,
      enableFargateCapacityProviders: true,
    });

    const albSg = new SecurityGroup(this, 'security-group-load-balancer', {
      vpc: props.vpc,
      allowAllOutbound: true,
    });

    const loadBalancer = new ApplicationLoadBalancer(this, 'exanubes-alb', {
      vpc: props.vpc,
      loadBalancerName: 'exanubes-ecs-alb',
      internetFacing: true,
      idleTimeout: Duration.minutes(10),
      securityGroup: albSg,
      http2Enabled: false,
      deletionProtection: false,
    });

    const httpListener = loadBalancer.addListener('http listener', {
      port: 80,
      open: true,
    });

    const targetGroup = httpListener.addTargets('tcp-listener-target', {
      targetGroupName: 'tcp-target-ecs-service',
      protocol: ApplicationProtocol.HTTP,
      protocolVersion: ApplicationProtocolVersion.HTTP1,
    });

    const taskDefinition = new FargateTaskDefinition(
      this,
      'fargate-task-definition',
      {
        runtimePlatform: {
          cpuArchitecture: CpuArchitecture.ARM64,
          operatingSystemFamily: OperatingSystemFamily.LINUX,
        },
      }
    );
    const container = taskDefinition.addContainer('web-server', {
      image: EcrImage.fromEcrRepository(props.repository),
    });
    container.addPortMappings({
      containerPort: CONTAINER_PORT,
    });

    const securityGroup = new SecurityGroup(this, 'http-sg', {
      vpc: props.vpc,
    });
    securityGroup.addIngressRule(
      Peer.securityGroupId(albSg.securityGroupId),
      Port.tcp(CONTAINER_PORT),
      'Allow inbound connections from ALB'
    );
    const fargateService = new FargateService(this, 'fargate-service', {
      cluster,
      assignPublicIp: false,
      taskDefinition,
      securityGroups: [securityGroup],
      desiredCount: 1,
    });
    targetGroup.addTarget(fargateService);
  }
}

Accessing Load Balancer DNS

Now that everything is set up, you can go to AWS Console > EC2 > Load Balancers and copy the load balancer’s DNS name. However, for some reason this does not work right now. I could not figure out why, so I have a Stack Overflow question open. From what I could see when researching this, it seems to be a common problem and it’s always the same – a problem with port 80.

Anyway, we can still check out our app by switching the load balancer’s listener to the container port and also allowing traffic on that port in the load balancer’s security group.

albSg.addIngressRule(Peer.anyIpv4(), Port.tcp(CONTAINER_PORT));

const httpListener = loadBalancer.addListener('http listener', {
  port: CONTAINER_PORT,
  open: true,
});

After changing this in our infrastructure, we should be able to go to load-balancer-dns-example-url.com:YOUR_CONTAINER_PORT.

Deployment

Now it’s time to create our stacks and deploy:

const app = new cdk.App();
const ecr = new EcrStack(app, EcrStack.name, {});
const vpc = new VpcStack(app, VpcStack.name, {});
new ElasticContainerStack(app, ElasticContainerStack.name, {
  vpc: vpc.vpc,
  repository: ecr.repository,
});

Deploy:

npm run build && npm run cdk:deploy -- --all

And don’t forget to tear it down when you’re done:

npm run cdk:destroy -- --all

Summary

In this article we’ve gone over the most important components of an ECS Fargate application. We created a cluster – a logical grouping of tasks and services – inside our custom VPC. We described how our app should run in the Task Definition and told the Fargate Service how many Tasks to keep running. We were also able to connect to the app through a load balancer which, thanks to its listeners, routes traffic via a Target Group that automatically registers every new Task in our Service.

Worth noting is that the aws-cdk team has prepared an ApplicationLoadBalancedFargateService construct that can set all of this up for us as well.
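
As a rough sketch of what that could look like – an alternative to most of the stack above, not something we deploy in this article – the property values below simply mirror what we configured by hand:

// Sketch only: the ecs-patterns construct wires up the ALB, listener,
// target group, task definition and service in one go.
import { ContainerImage } from 'aws-cdk-lib/aws-ecs';
import { ApplicationLoadBalancedFargateService } from 'aws-cdk-lib/aws-ecs-patterns';

new ApplicationLoadBalancedFargateService(this, 'exanubes-alb-fargate-service', {
  cluster,
  desiredCount: 1,
  publicLoadBalancer: true,
  taskImageOptions: {
    image: ContainerImage.fromEcrRepository(props.repository),
    containerPort: CONTAINER_PORT,
  },
});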

In the next article, we will go over how to set up an SSL certificate for our application with a custom domain.