In this article we will learn how to create aws-cdk infrastructure for deploying a React application from GitHub using a CI/CD pipeline. The idea behind this infrastructure is an automated pipeline that downloads the code from GitHub whenever someone pushes new commits, installs all the dependencies, builds the app, and then ships the production files to S3, where they are served via CloudFront.
To follow along, you will need an SSL certificate in the us-east-1 region and a GitHub personal access token, which will be stored in AWS Secrets Manager. It would also be good to have a Route 53 hosted zone, but it's not required. If you have a domain name outside of AWS and would like to use it for this, you might want to see how to share one domain across AWS accounts.
Introduction
For the purposes of this article, I have created a sample project in Gatsby.js and I will be using aws-cdk v2 to create the required infrastructure. You can find the finished code on GitHub.
S3 Bucket
First and foremost, we need an S3 Bucket that will hold the static files for our website.
// lib/distribution.stack.ts
import * as path from 'path';
import { RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import { Certificate } from 'aws-cdk-lib/aws-certificatemanager';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import { Distribution, LambdaEdgeEventType, ViewerProtocolPolicy } from 'aws-cdk-lib/aws-cloudfront';
import { S3Origin } from 'aws-cdk-lib/aws-cloudfront-origins';
import { Code, Runtime, Tracing } from 'aws-cdk-lib/aws-lambda';
import { RetentionDays } from 'aws-cdk-lib/aws-logs';
import { ARecord, CnameRecord, HostedZone, RecordTarget } from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';
import { Bucket, IBucket } from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

export class DistributionStack extends Stack {
  public readonly bucket: IBucket;

  constructor(app: Construct, id: string, props: StackProps) {
    super(app, id, props);
    this.bucket = new Bucket(this, 'deploy-bucket', {
      blockPublicAccess: {
        blockPublicAcls: false,
        blockPublicPolicy: false,
        restrictPublicBuckets: false,
        ignorePublicAcls: false,
      },
      publicReadAccess: true,
      bucketName: 'static-hosting-bucket',
      removalPolicy: RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      websiteIndexDocument: 'index.html',
      websiteErrorDocument: 'index.html',
    });
  }
}
A couple of things to note:
- blockPublicAccess: every bucket is completely locked down by default, so we need to turn off all the public access blocks for ACLs and policies before we can allow public read access
- publicReadAccess has to be set to true because we want to host the website from the bucket, so everyone needs to be able to access the files. With that being said, do not store any sensitive data there.
- bucketName: this one is optional, but keep in mind that bucket names are globally unique across all AWS accounts because each bucket gets its own URL. Make sure it's something unique or the deployment will fail.
- removalPolicy: we want to destroy the bucket when tearing down the infrastructure
- autoDeleteObjects makes that removalPolicy possible. It creates a lambda function that removes all objects from the S3 bucket so that CloudFormation can delete it – only empty buckets can be deleted.
- websiteIndexDocument is crucial. It not only points to the entry file for the application but also configures the bucket for static hosting
So, this bucket is ready for static hosting and we can now use it as the target for our build output in the pipeline.
Pipeline
At this point, we're already going to need a Personal Access Token from GitHub, which you can generate in the developer settings. Then go to AWS Secrets Manager in your region and create a plaintext secret.
This is necessary because we need to give AWS CodePipeline access to our GitHub account so that it can download the source code from private repositories.
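If you prefer creating the secret from code rather than the console, a one-off script with the AWS SDK would do the trick – a minimal sketch, assuming the token is available in a hypothetical GITHUB_TOKEN environment variable; the secret name matches the one used throughout this article:

// create-secret.ts – hypothetical one-off helper, not part of the sample repo
import { CreateSecretCommand, SecretsManagerClient } from '@aws-sdk/client-secrets-manager';

async function createGithubSecret() {
  const client = new SecretsManagerClient({});
  // Store the PAT as a plaintext secret under the name the pipeline expects
  await client.send(
    new CreateSecretCommand({
      Name: 'github/cdk-pipeline',
      SecretString: process.env.GITHUB_TOKEN, // assumed env variable
    })
  );
}

createGithubSecret();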
Getting source code into CodePipeline
// lib/pipeline.stack.ts
import { SecretValue, Stack, StackProps } from 'aws-cdk-lib';
import { BuildSpec, EventAction, FilterGroup, GitHubSourceCredentials, LinuxBuildImage, Project, Source } from 'aws-cdk-lib/aws-codebuild';
import { Artifact, Pipeline } from 'aws-cdk-lib/aws-codepipeline';
import { CodeBuildAction, GitHubSourceAction, GitHubTrigger, S3DeployAction } from 'aws-cdk-lib/aws-codepipeline-actions';
import { PolicyStatement } from 'aws-cdk-lib/aws-iam';
import { IBucket } from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

const SECRET_ARN = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:github/secret-7ncpR7';

export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, props: StackProps) {
    super(scope, id, props);
    new GitHubSourceCredentials(this, 'code-build-credentials', {
      accessToken: SecretValue.secretsManager('github/cdk-pipeline'),
    });
    const source = Source.gitHub({
      owner: 'exanubes',
      repo: 'aws-pipeline',
      webhook: true,
      webhookFilters: [
        FilterGroup.inEventOf(EventAction.PUSH).andBranchIs('master'),
      ],
    });
Here, we are setting the credentials that CodeBuild will use to communicate with the GitHub API. Next, we define the source by pointing at our account and the repo it's supposed to use. Setting up a webhook is necessary to trigger the pipeline automatically whenever there are new commits on the branch of your choice.
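The filter groups are composable, so reacting to more than one event is just a matter of passing extra actions – a hypothetical variation, not part of the sample repo:

// Hypothetical variation: trigger on pushes and on merged pull requests on master
const sourceWithPrTrigger = Source.gitHub({
  owner: 'exanubes',
  repo: 'aws-pipeline',
  webhook: true,
  webhookFilters: [
    FilterGroup.inEventOf(EventAction.PUSH, EventAction.PULL_REQUEST_MERGED).andBranchIs('master'),
  ],
});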
Building the application
Now that our pipeline has access to the code, we can use AWS CodeBuild to actually build the application.
// lib/pipeline.stack.ts
constructor() {
  // ...
  const buildSpec = this.getBuildSpec();
  const project = new Project(this, 'project', {
    projectName: 'pipeline-project',
    source,
    environment: {
      buildImage: LinuxBuildImage.STANDARD_5_0,
      privileged: true,
    },
    buildSpec,
  });
}

private getBuildSpec() {
  return BuildSpec.fromObject({
    version: '0.2',
    env: {
      shell: 'bash',
    },
    phases: {
      pre_build: {
        commands: [
          'echo Build started on `date`',
          'aws --version',
          'node --version',
          'npm install',
        ],
      },
      build: {
        commands: [
          'npm run build',
        ],
      },
      post_build: {
        commands: [
          'echo Build completed on `date`',
        ],
      },
    },
    artifacts: {
      'base-directory': 'public',
      files: ['**/*'],
    },
    cache: {
      paths: ['node_modules/**/*'],
    },
  });
}
Projects are what CodeBuild uses to, well, build projects. So we had to create one, pass it a source – our GitHub repo – and a build spec.
A build spec is a set of instructions telling the service how to handle the build process. The phases are self-explanatory; version, however – at least at the time of writing – has to be set to 0.2, similar to how we need to provide a version in CloudFormation templates. This is not the version of your build, it's the version of the build spec API we're using.
In artifacts we select what the output of the build step in our pipeline will be. base-directory makes the public folder the root of the artifact, and in files we specify that we want to take everything within that folder, using the **/* glob pattern.
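As a side note, the build spec doesn't have to live in the CDK code at all. If you would rather version it alongside the application, CodeBuild can read it from a file in the repository – a one-line alternative to the getBuildSpec method above, assuming a buildspec.yml committed at the repo root:

// Alternative: keep the build instructions in a buildspec.yml at the repo root
const buildSpec = BuildSpec.fromSourceFilename('buildspec.yml');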
Permissions
Before we go and build the pipeline, we need to sort out permissions first.
// lib/pipeline.stack.ts
interface Props extends StackProps {
  bucket: IBucket;
}

constructor(scope: Construct, id: string, props: Props) {
  // ...
  project.addToRolePolicy(
    new PolicyStatement({
      actions: ['secretsmanager:GetSecretValue'],
      resources: [SECRET_ARN],
    })
  );
  props.bucket.grantReadWrite(project.grantPrincipal);
}
So first, we need to give the project permission to access the secret holding the GitHub access token, and we also need it to have permission to read and write data in the S3 bucket.
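The same secret permission can also be expressed through the higher-level grant API – a sketch assuming the same SECRET_ARN constant; note that grantRead is slightly broader, as it also allows secretsmanager:DescribeSecret:

// Equivalent permission via the grant API
import { Secret } from 'aws-cdk-lib/aws-secretsmanager';

const secret = Secret.fromSecretCompleteArn(this, 'github-token', SECRET_ARN);
secret.grantRead(project);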
Putting it together into a pipeline
// lib/pipeline.stack.ts
const artifacts = {
  source: new Artifact('Source'),
  build: new Artifact('BuildOutput'),
};
First, we create the artifacts that will hold the results of each stage. We have a source stage which downloads the files from GitHub – the Source artifact – and a build stage which creates the production-ready application – the Build artifact.
// lib/pipeline.stack.ts
const pipelineActions = {
  source: new GitHubSourceAction({
    actionName: 'Github',
    owner: 'exanubes',
    repo: 'aws-pipeline',
    branch: 'master',
    oauthToken: SecretValue.secretsManager('github/cdk-pipeline'),
    output: artifacts.source,
    trigger: GitHubTrigger.WEBHOOK,
  }),
  build: new CodeBuildAction({
    actionName: 'CodeBuild',
    project,
    input: artifacts.source,
    outputs: [artifacts.build],
  }),
  deploy: new S3DeployAction({
    actionName: 'S3Deploy',
    bucket: props.bucket,
    input: artifacts.build,
  }),
};
const pipeline = new Pipeline(this, 'DeployPipeline', {
  pipelineName: 's3-pipeline',
  stages: [
    { stageName: 'Source', actions: [pipelineActions.source] },
    { stageName: 'Build', actions: [pipelineActions.build] },
    { stageName: 'Deploy', actions: [pipelineActions.deploy] },
  ],
});
Here we actually use the setup prepared earlier to create the pipeline actions. First, the GitHub action downloads new source code and puts it in the Source artifact, which is then picked up as input by the build action. The result of the build stage goes into the Build artifact, and the deploy stage takes that as input and sends it over to the S3 bucket we created earlier.
Recap
Thus far we have created an S3 bucket configured for public access and static hosting, and passed that bucket into the Pipeline stack. Next, we set up a source and configured a webhook to trigger the pipeline automatically on new commits. That GitHub source is then used by the CodeBuild project, which builds the app based on the build spec we provided and outputs the chosen files as an artifact. Last but not least, we defined the pipeline actions and used them to create the pipeline itself.
By this point the infrastructure is fully functional and the application can be seen by visiting the bucket's website URL. In the next few steps we will set up a CloudFront distribution with HTTPS and a custom domain.
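If you don't feel like digging that URL out of the console, a stack output can print it after deployment – a small optional addition to the distribution stack, with CfnOutput imported from aws-cdk-lib:

// lib/distribution.stack.ts – optional: print the S3 website URL on deploy
new CfnOutput(this, 'bucket-website-url', {
  value: this.bucket.bucketWebsiteUrl,
});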
Distribution
Edge Function
First of all, we're going to need a Lambda@Edge function to act as a request URL proxy. I won't be going into the reasons why in this article, but you can read about fixing CloudFront 404 errors when visiting URLs.
// lib/distribution.stack.ts
const edgeLambda = new cloudfront.experimental.EdgeFunction(
  this,
  'request-url-proxy',
  {
    runtime: Runtime.NODEJS_14_X,
    handler: 'edge-lambda.main',
    code: Code.fromAsset(path.join(__dirname, '../src/', 'edge-lambda')),
    memorySize: 128,
    logRetention: RetentionDays.ONE_DAY,
    tracing: Tracing.DISABLED, // x-ray tracing
    currentVersionOptions: {
      removalPolicy: RemovalPolicy.DESTROY,
    },
  }
);
Not much to say about this pretty standard lambda setup where we define the runtime and memory and point to the handler file. The only difference is that it will be executed at edge locations, where it will map URLs to .html files.
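The handler itself isn't covered in this article, but to give you an idea, a minimal sketch could look like the following – an assumed implementation that rewrites extension-less paths to the index.html files Gatsby generates, not necessarily the exact code from the linked post:

// src/edge-lambda/edge-lambda.ts – a minimal sketch of the URL proxy
import { CloudFrontRequestHandler } from 'aws-lambda';

export const main: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  // Leave requests for actual files (anything with an extension) untouched
  if (!request.uri.includes('.')) {
    // Map /about and /about/ to /about/index.html
    request.uri = request.uri.endsWith('/')
      ? `${request.uri}index.html`
      : `${request.uri}/index.html`;
  }
  return request;
};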
AWS requires that stacks which use Edge Functions also have an explicitly set region. We can either set it with a hard-coded string, e.g. 'eu-west-1', or we can resolve it with the AWS SDK.
// utils/get-region.ts
import { ConfigServiceClient } from '@aws-sdk/client-config-service';

const client = new ConfigServiceClient({});

export async function getRegion(): Promise<string> {
  // In SDK v3, client.config.region is an async provider resolved from the environment
  return client.config.region();
}
// bin/aws-code-pipeline-with-s3-deployment.ts
import * as cdk from 'aws-cdk-lib';
import { DistributionStack } from '../lib/distribution.stack';
import { PipelineStack } from '../lib/pipeline.stack';
import { getRegion } from '../utils/get-region';

async function main() {
  const app = new cdk.App();
  const region = await getRegion();
  const distribution = new DistributionStack(app, DistributionStack.name, {
    env: { region },
  });
  new PipelineStack(app, PipelineStack.name, {
    bucket: distribution.bucket,
    env: { region },
  });
}

main();
CloudFront Distribution
// lib/distribution.stack.ts
// certificateArn points at an ACM certificate in us-east-1,
// e.g. passed into the stack via props or an environment variable
const certificate = Certificate.fromCertificateArn(
  this,
  'cloudfront-ssl-certificate',
  certificateArn
);
const distribution = new Distribution(this, 'distribution', {
  defaultBehavior: {
    origin: new S3Origin(this.bucket),
    edgeLambdas: [
      {
        functionVersion: edgeLambda.currentVersion,
        eventType: LambdaEdgeEventType.VIEWER_REQUEST,
      },
    ],
    viewerProtocolPolicy: ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
  certificate,
  defaultRootObject: 'index.html',
  domainNames: ['www.dev.exanubes.com', 'dev.exanubes.com'],
});
At this point we need to create the CloudFront distribution and point it at where our files live – the S3 bucket. We also have to configure it as a trigger for our Edge Function on every Viewer Request – read about assigning lambdas to CloudFront distributions to learn more.
CloudFront also supports the HTTPS protocol, so we can provide it an SSL certificate for the domains we want to use. It will work without one, but only over HTTP.
The default root object is the entry file for the application and the domain names are the access points – keep in mind you will need an Alias/CNAME record for each of them.
Creating DNS Records
Now, as mentioned above, we need to create records for the domains and we're going to need a Hosted Zone for that. If you don't have an AWS hosted zone for your domain, you can read about creating one here.
// lib/distribution.stack.ts
const hostedZone = HostedZone.fromHostedZoneAttributes(this, 'hosted-zone', {
  hostedZoneId: 'Z09747622HB25HPBEB7U5',
  zoneName: 'dev.exanubes.com',
});
So, dev.exanubes.com is the hosted zone of my choice. I will be using its apex domain as well as the www subdomain for the application, so I need two DNS records – one for each address.
// lib/distribution.stack.ts
new CnameRecord(this, 'www.cname', {
  zone: hostedZone,
  recordName: 'www',
  domainName: distribution.distributionDomainName,
});

new ARecord(this, 'apex.alias', {
  zone: hostedZone,
  target: RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
});
When creating any record, we have to provide the hostedZone. In the CNAME record we also provide the subdomain – www – and point it at our CloudFront distribution. However, CNAME records cannot be assigned to apex domains, and plain A records translate domain names into IP addresses, which we don't have for a CloudFront distribution. This leaves us with an alias record – a special type of A record – that allows targeting AWS resources directly.
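As a side note, if you would rather not hard-code the hosted zone ID, the zone can also be resolved during synthesis – a sketch using fromLookup, which requires an explicit env with account and region on the stack:

// Alternative: look the zone up at synth time instead of hard-coding its ID
const lookedUpZone = HostedZone.fromLookup(this, 'hosted-zone-lookup', {
  domainName: 'dev.exanubes.com',
});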
Summary
In this article we have covered how to create an S3 bucket configured for static hosting and how to serve that application via CloudFront with a custom domain and an SSL certificate. We were also able to automate the deployment process by creating a webhook for a GitHub repository and providing access via a Personal Access Token. Then we defined each step of the pipeline and designed the flow so that each subsequent stage relies on artifacts created in the previous one. In the end we finished with a completely automated workflow for our application. Nice.