Application on ECS #4: Connecting to RDS via Parameter Store config

This article is part of a series:

  1. Deploying ECS Fargate Application
  2. Adding SSL Certificate to Fargate app
  3. CI/CD pipeline for ECS application
  4. Connecting to RDS via Parameter Store config

In this article we will go over creating and connecting to a database from an application deployed to ECS Fargate containers. First we will create an RDS instance and store the database credentials, then we will update the CI/CD pipeline to perform database migrations whenever a new version of the application is deployed. Last but not least, we will use the AWS SDK to retrieve the database credentials from Systems Manager and connect to the database.

Here's a diagram of what we're building:

[Diagram: ECS application with RDS instance and CI/CD pipeline]

We're going to pick up where we left off in ECS with CI/CD Pipeline: a Fargate cluster behind an ALB and a CI/CD pipeline for automatic deployments. In this article we will focus on implementing the second private subnet with an RDS instance whose security group allows inbound traffic on port 5432 from within the VPC. This way, CodeBuild will be able to run migration queries and the Fargate container - our app - will be able to communicate with the database. For the application to gain access to the database, we will grab credentials from the SSM Parameter Store.

Go to GitHub if you're looking for the finished code. To follow along, clone the starter repo:

git clone -b start git@github.com:exanubes/connecting-to-rds-via-ssm-parameter-store-config.git

RDS Instance #

Creating an RDS instance is quite straightforward. The main thing we want to keep in mind while doing it is to place the instance within our own VPC.

// stacks/rds.stack.ts
interface Props extends StackProps {
  vpc: IVpc;
  dbConfig: Pick<DatabaseConfig, 'database' | 'username' | 'password'>;
  securityGroup: SecurityGroup;
}

export class RdsStack extends Stack {
  public readonly db: DatabaseInstance;

  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);

    this.db = new DatabaseInstance(this, 'exanubes-database', {
      engine: DatabaseInstanceEngine.POSTGRES,
      vpc: props.vpc,
      credentials: {
        username: props.dbConfig.username,
        password: SecretValue.plainText(props.dbConfig.password),
      },
      databaseName: props.dbConfig.database,
      storageType: StorageType.STANDARD,
      instanceType: InstanceType.of(
        InstanceClass.BURSTABLE3,
        InstanceSize.SMALL
      ),
      securityGroups: [props.securityGroup],
      parameterGroup: ParameterGroup.fromParameterGroupName(
        this,
        'postgres-instance-group',
        'postgresql13'
      ),
    });
  }
}

Here we're creating a new database instance and assigning it to a property on the RdsStack. This is also where we define our VPC as the location of our database. AWS will put it in a private subnet by default; however, if you wish to have more control over it, you can use the subnetGroup and vpcSubnets properties.
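For example, pinning the database to specific subnets could look like this - a minimal sketch, assuming your VPC has isolated subnets; it is not part of the series code and the remaining props stay as above:

```typescript
// stacks/rds.stack.ts — hypothetical variation
this.db = new DatabaseInstance(this, 'exanubes-database', {
  engine: DatabaseInstanceEngine.POSTGRES,
  vpc: props.vpc,
  // place the instance only in isolated subnets instead of the default
  vpcSubnets: {
    subnetType: SubnetType.PRIVATE_ISOLATED,
  },
  // ...remaining props unchanged
});
```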

Moving on, we define credentials for the database, the database name, and the type and size of the instance. We're using the cheapest options with the standard storage type. Last but not least, we're pointing to the Postgres version inside RDS parameter groups that we want to use for our database.

Should you get an error about an invalid or non-existent parameterGroup, you will have to go to RDS > Parameter groups and create it yourself.

CI/CD Pipeline #

With the RDS instance created, we still need a way to synchronize the database - create tables, add columns etc. - and obviously we want to automate it. To accomplish that, we will add an additional step to the existing CodePipeline.

// stacks/pipeline.stack.ts
private getMigrationSpec() {
  return BuildSpec.fromObject({
    version: "0.2",
    env: {
      shell: "bash",
    },
    phases: {
      install: {
        commands: ["(cd ./backend && npm install)"],
      },
      build: {
        commands: [
          "./backend/node_modules/.bin/sequelize db:migrate --debug --migrations-path ./backend/db/migrations --url postgresql://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:5432/${DB_NAME}",
        ],
      },
    },
  });
}

First off, we need a buildspec that will install our dependencies. Then we use the Sequelize CLI to run a migration command against our database.
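For reference, here's what a minimal migration file in ./backend/db/migrations might look like - the filename, table and columns are made up for illustration, and the loosely typed QueryInterface stands in for the one Sequelize passes in:

```typescript
// backend/db/migrations/20230101000000-create-users.ts — hypothetical example
type QueryInterface = {
  createTable(name: string, attributes: Record<string, unknown>): Promise<void>;
  dropTable(name: string): Promise<void>;
};

// applied by `sequelize db:migrate`
export async function up(queryInterface: QueryInterface): Promise<void> {
  await queryInterface.createTable('users', {
    id: { type: 'INTEGER', primaryKey: true, autoIncrement: true },
    email: { type: 'STRING', allowNull: false, unique: true },
  });
}

// applied by `sequelize db:migrate:undo`
export async function down(queryInterface: QueryInterface): Promise<void> {
  await queryInterface.dropTable('users');
}
```

Sequelize tracks applied migrations in a SequelizeMeta table, so re-running the Migrate stage on every deployment only applies migrations that haven't run yet.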

// stacks/pipeline.stack.ts
const migrationProject = new Project(this, 'migration-project', {
  projectName: 'migration-project',
  securityGroups: [props.securityGroup],
  vpc: props.vpc,
  buildSpec: this.getMigrationSpec(),
  source,
  environment: {
    buildImage: LinuxBuildImage.AMAZON_LINUX_2_ARM_2,
    privileged: true,
  },
  environmentVariables: {
    DB_USER: {
      value: props.dbConfig.username,
    },
    DB_PASSWORD: {
      value: props.dbConfig.password,
    },
    DB_HOST: {
      value: props.dbConfig.hostname,
    },
    DB_PORT: {
      value: props.dbConfig.port,
    },
    DB_NAME: {
      value: props.dbConfig.database,
    },
  },
});

Next, we define a separate Project for the migration step. Most of the configuration is the same as in ECS with CI/CD Pipeline; what changed is that we now need to assign the project to our own VPC so it can reach the RDS instance. We're also adding a security group so it can communicate with the database - more on that later. This is also where we pass all the relevant environment variables, i.e. the database credentials.

// stacks/pipeline.stack.ts
const pipelineActions = {
  //...
  migrate: new CodeBuildAction({
    actionName: 'dbMigrate',
    project: migrationProject,
    input: artifacts.source,
  }),
};

const pipeline = new Pipeline(this, 'DeployPipeline', {
  pipelineName: `exanubes-pipeline`,
  stages: [
    { stageName: 'Source', actions: [pipelineActions.source] },
    { stageName: 'Build', actions: [pipelineActions.build] },
    { stageName: 'Migrate', actions: [pipelineActions.migrate] },
    {
      stageName: 'Deploy',
      actions: [pipelineActions.deploy],
    },
  ],
});

To finish it off, we define a CodeBuildAction using the new project and the source artifact, and then finally add the Migrate stage to the pipeline.

Update build project #

Now, because the app's Dockerfile changed slightly, we have to update the build project and buildspec.

First, we add the AWS_ACCESS_KEY and AWS_SECRET_ACCESS_KEY environment variables to the build project.

// stacks/pipeline.stack.ts
AWS_ACCESS_KEY: {
  value: AWS_ACCESS_KEY,
},
AWS_SECRET_ACCESS_KEY: {
  value: AWS_SECRET_ACCESS_KEY,
},

And now we have to update the build phase of the buildSpec to account for the new environment variables.

// stacks/pipeline.stack.ts
build: {
  commands: [
    "echo Build started on `date`",
    "echo Build Docker image",
    "docker build -f ${CODEBUILD_SRC_DIR}/backend/Dockerfile --build-arg region=${AWS_STACK_REGION} --build-arg clientId=${AWS_ACCESS_KEY} --build-arg clientSecret=${AWS_SECRET_ACCESS_KEY} -t ${REPOSITORY_URI}:latest ./backend",
    'echo Running "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}"',
    "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}",
  ],
}

The only real difference here is that we provide --build-arg flags to the build command to set the relevant environment variables that are required to establish a connection with AWS.
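On the Dockerfile side, those values are received via ARG instructions and baked into the image as environment variables. A rough sketch of the relevant lines, assuming the variable names the app reads later in config.provider.ts:

```dockerfile
# receive the build args passed by the CodeBuild docker build command
ARG region
ARG clientId
ARG clientSecret
# expose them to the application at runtime
ENV region=$region
ENV clientId=$clientId
ENV clientSecret=$clientSecret
```

Keep in mind that anything set with ENV is visible in the image metadata, so treat the resulting image as sensitive.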

SSM Parameter Store #

In order to connect to the database, the application needs to know its location and credentials. I have opted for Parameter Store as I find it a very convenient way to organize environment variables.

// stacks/parameter-store.stack.ts
interface Props extends StackProps {
  dbConfig: DatabaseConfig;
}

export class ParameterStoreStack extends Stack {
  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);
    new SecureStringParameter(this, 'database-password', {
      parameterName: '/production/database/password',
      stringValue: props.dbConfig.password,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-user', {
      parameterName: '/production/database/username',
      stringValue: props.dbConfig.username,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-hostname', {
      parameterName: '/production/database/hostname',
      stringValue: props.dbConfig.hostname,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-port', {
      parameterName: '/production/database/port',
      stringValue: String(props.dbConfig.port),
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-socket-address', {
      parameterName: '/production/database/socketAddress',
      stringValue: props.dbConfig.socketAddress,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
    new StringParameter(this, 'database-database', {
      parameterName: '/production/database/name',
      stringValue: props.dbConfig.database,
      tier: ParameterTier.STANDARD,
      dataType: ParameterDataType.TEXT,
    });
  }
}

All we do here is create string parameters for the database credentials and location. The password is created as a secure string, meaning it is encrypted with AWS KMS.

SecureStringParameter is imported from @exanubes/aws-cdk-ssm-secure-string-parameter because AWS CloudFormation cannot create SecureString parameters. This construct utilises a Lambda and the AWS SDK to create the secure string parameter instead.

Access and firewall management #

All the resources are in place: we have an RDS instance, a migration stage in the CI/CD pipeline and parameters for the application. However, we still need to handle access permissions to the database for both the pipeline stage and the Fargate service. We also need to configure the firewall to allow traffic from those origins. This can be managed with security groups.

// stacks/security-group.stack.ts
interface Props extends StackProps {
  vpc: IVpc;
}

export class SecurityGroupStack extends Stack {
  databaseSg: SecurityGroup;
  databaseAccessSg: SecurityGroup;

  constructor(scope: Construct, id: string, props: Props) {
    super(scope, id, props);
    this.databaseAccessSg = new SecurityGroup(this, 'database-access-sg', {
      vpc: props.vpc,
      description:
        'Security group for resources that need access to rds database instance',
    });

    this.databaseSg = new SecurityGroup(this, 'rds-allow-postgres-traffic', {
      vpc: props.vpc,
      description: 'Security group for rds database instance',
    });
    this.databaseSg.addIngressRule(
      this.databaseAccessSg,
      Port.tcp(5432),
      `Allow inbound connection on port 5432 for resources with security group: "${this.databaseAccessSg.securityGroupId}"`
    );
  }
}

Here we're creating two security groups in our VPC. The databaseSg opens up port 5432 on the database instance, using databaseAccessSg as the source. This way, every resource that has databaseAccessSg assigned will have access to the database, and if I want to revoke access, I can just remove the security group from that service.
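Wiring this up in the app entrypoint could look roughly like this - a sketch where the construct and prop names are assumptions based on how the stacks in this series are composed:

```typescript
// hypothetical entrypoint wiring
const securityGroups = new SecurityGroupStack(app, 'security-groups', { vpc });

// the database gets databaseSg...
new RdsStack(app, 'rds', {
  vpc,
  dbConfig,
  securityGroup: securityGroups.databaseSg,
});

// ...while consumers such as the pipeline get databaseAccessSg
new PipelineStack(app, 'pipeline', {
  vpc,
  securityGroup: securityGroups.databaseAccessSg,
  // ...
});
```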

This is not all though. We still need to grant connection permissions to the instances.

// stacks/elastic-container.stack.ts
const taskRole = new Role(this, 'exanubes-fargate-application-role', {
  assumedBy: new ServicePrincipal('ecs-tasks.amazonaws.com'),
});

const taskDefinition = new FargateTaskDefinition(
  this,
  'fargate-task-definition',
  {
    runtimePlatform: {
      cpuArchitecture: CpuArchitecture.ARM64,
      operatingSystemFamily: OperatingSystemFamily.LINUX,
    },
    taskRole,
  }
);

First, we go into the pipeline stack and grant connect permissions to our migration project principal. Then we have to do the same inside the elastic container stack. To do this, we create a task role for the ecs-tasks principal, which defines which entity - user, app, organization, service etc. - can perform actions with this role. Then we grant connect permissions to this role and use it in the Fargate Task Definition construct. Lastly, we also have to allow the Fargate service to establish a connection with the RDS instance.

You can find a list of service principals in this gist

Due to a circular dependency error when using a security group, we have to add the RDS connection manually via the .allowToDefaultPort() method.
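Put together, the grants described above could look something like this - a sketch assuming db is the DatabaseInstance exposed by RdsStack and service is the Fargate service:

```typescript
// stacks/pipeline.stack.ts — let the migration project's role connect
props.db.grantConnect(migrationProject);

// stacks/elastic-container.stack.ts — same for the task role...
props.db.grantConnect(taskRole);
// ...and open the firewall to the service without introducing a
// circular dependency between the stacks' security groups
this.service.connections.allowToDefaultPort(props.db);
```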

Connecting to RDS #

Now that everything is setup, we can use AWS SDK to load the database credentials.

// backend/src/config/config.provider.ts
async () => {
  const isProd = process.env.NODE_ENV === 'production';
  if (!isProd) {
    return config;
  }
  const client = new SSMClient({
    region: String(process.env.region),
    credentials: {
      accessKeyId: String(process.env.clientId),
      secretAccessKey: String(process.env.clientSecret),
    },
  });
  const command = new GetParametersByPathCommand({
    Path: '/production',
    Recursive: true,
    WithDecryption: true,
  });
  const result = await client.send(command);
  return transformParametersIntoConfig(result.Parameters || []);
};

Here, we set up the client with the credentials passed to the Dockerfile in the Build stage of our CI/CD pipeline. Then we can just load all the parameters prefixed with /production and transform them into a simpler data structure. I used the WithDecryption option in order to get the decrypted value of the database password parameter.
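The transformParametersIntoConfig helper isn't shown in the snippet; one possible implementation, assuming the parameter names from the ParameterStoreStack, simply keys each value by the last path segment:

```typescript
// hypothetical implementation — the real helper may differ
interface ParameterLike {
  Name?: string;
  Value?: string;
}

export function transformParametersIntoConfig(
  parameters: ParameterLike[]
): Record<string, string> {
  return parameters.reduce<Record<string, string>>((config, { Name, Value }) => {
    if (!Name || Value === undefined) {
      return config;
    }
    // '/production/database/password' -> 'password'
    const key = Name.split('/').pop() as string;
    return { ...config, [key]: Value };
  }, {});
}
```

With the parameters created earlier, this would yield an object with password, username, hostname, port, socketAddress and name keys.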

Deployment #

While deployment is in progress, remember to push the initial image to ECR; otherwise the deployment will hang on ElasticContainerStack. Once all the stacks have been deployed, we need to trigger the pipeline, which will run the migration, after which we should be able to see data on the /users endpoint. That tells us we're connected!

npm run build && npm run cdk:deploy -- --all

Before deploying, make sure that all the secrets and ARNs are your own. Double-check the src/config.ts and .env files.

Don't forget to tear down the infrastructure to avoid unnecessary costs

npm run cdk:destroy -- --all

Summary #

In this article we have gone through setting up a database instance, configuring its user, name, size and engine. Then we used it in the CI/CD pipeline to run a migration script as part of the automated deployment strategy. To be able to connect to the database from our application, we saved all the relevant database information in SSM Parameter Store and used the AWS SDK in the app to load the config. Lastly, we opened up port 5432 on the RDS instance to our migration stage and ECS service and granted connection permissions to them, following the principle of least privilege.
