Static Website in AWS with Jekyll and LetsEncrypt

Something I’d been really keen to try for myself for ages was using AWS and S3 to host a static page. I learnt about it in the AWS SysOps course but, like everything else, it’s quite nice to play with it for yourself in a quasi-production kind of way. This guide by Jennifer Wilson from Northwestern University covers the vast majority of what you need to do and is pretty thorough.

Here are the elements of the project:

Jekyll

I stumbled across a tweet about Jekyll last week and when I looked it up, I immediately loved the simple look and feel as well as the fact that it could be easily bundled into a static site. Clearly I am late to the party on this one, as there is a very big following and active community around Jekyll. Using the Ruby bundler you can also run a local webserver to preview your work as you go, making for a smooth workflow. For instructions on getting up and running see jekyllrb.com.

For a quick reference on the syntax for the markdown language, see the Syntax Documentation over at Daring Fireball.

To install jekyll on MacOS:

$ sudo gem install jekyll bundler # Use Homebrew to update Ruby if it complains about the version
$ jekyll new mywebsite
$ cd mywebsite
$ bundle exec jekyll serve

Using your web browser you can now view the default Jekyll template at http://127.0.0.1:4000/. To exit, use Ctrl+C.

To create a new post, enter the folder “_posts” and create a new “.markdown” file with the following header format:

---
layout: post
title:  Static Website in AWS with Jekyll and LetsEncrypt
date:   2017-07-10 10:00:43 +0800
categories: development aws web
---
Here is my first post!

When you are ready to build a static copy of your site to upload, simply change to the root directory of your site and run:

$ jekyll build --source . --destination ../mynewsite-static/

LetsEncrypt

SSL all the things. This is a little bit more curly to document as it is different for every circumstance. Personally, I manage my SSL certificates using the dehydrated package on FreeBSD. It needs to interact with your front-end webserver when certificate creation and renewal time comes, to prove domain ownership. This is a little tricky when using S3, as I don’t quite know how you would serve the LetsEncrypt “/.well-known/” directory from Amazon S3.

In my case, I created an additional DNS name to host the certificate creation and renewal web server running on a FreeBSD box at home. Then I created an additional Custom Origin on Cloudfront to redirect traffic for /.well-known/* to the other DNS name where my Apache server is running and hosting the certificate renewal.

The guide linked above demonstrates how to upload your certificate using the AWS CLI. I didn’t have any success with this (I’m not sure why), but managed to successfully upload the certificate, private key and chain via the AWS GUI under IAM Certificates.
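For reference, the AWS CLI route uses the `aws iam upload-server-certificate` command. A minimal sketch follows; the certificate name and file paths are placeholders for wherever dehydrated has written your certs, and the `/cloudfront/` path is what makes the certificate selectable in the Cloudfront console:

```shell
$ aws iam upload-server-certificate \
    --server-certificate-name mynewsite-cert \
    --certificate-body file://cert.pem \
    --private-key file://privkey.pem \
    --certificate-chain file://chain.pem \
    --path /cloudfront/
```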

LetsEncrypt (or perhaps Dehydrated) by default creates a private key that is 4096 bits. Amazon restricts the key size to only 2048 bits, as apparently larger keys affect SSL session negotiation speed and efficiency. This can be adjusted in your Dehydrated config, and in my case meant revoking and re-issuing my certificate.
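The relevant setting in the dehydrated config file is `KEYSIZE` (the config path below is an assumption; on FreeBSD it typically lives under /usr/local/etc/dehydrated):

```shell
# /usr/local/etc/dehydrated/config
# Reduce the private key size from the 4096-bit default to
# the 2048-bit maximum that Amazon accepts for Cloudfront.
KEYSIZE="2048"
```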

S3 Bucket

AWS CLI

This is a really convenient way to manage both SSL Certificates and uploading / syncing content into your S3 Bucket from your dev machine.

$ sudo pip3 install awscli
$ aws configure

Create a new S3 bucket:

$ aws s3 mb s3://mynewsite-bucket

Sync your site content into the new bucket (from the jekyll build above):

$ aws s3 sync ../mynewsite-static/ s3://mynewsite-bucket/ --storage-class REDUCED_REDUNDANCY --acl public-read 

My plan for major updates to the site is to create a new S3 bucket for each update and then change the Cloudfront Origin when it is ready to cut over. This also has the advantage that you can view the new content using the direct S3 website URL (http://{bucketname}.s3-website-{region name}.amazonaws.com) prior to going live.

Enable Static Content Hosting

For each S3 bucket within which you wish to host static content, you must enable static content hosting. From the S3 control panel, open the bucket and navigate to properties. Nominate a default document and error page and you’re done.
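The same thing can be done from the AWS CLI with the `aws s3 website` command; the document names below are assumptions (use whatever Jekyll generated for your site):

```shell
$ aws s3 website s3://mynewsite-bucket/ \
    --index-document index.html \
    --error-document 404.html
```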

Cloudfront

Configure IAM to restrict direct access to the S3 Bucket

This stops direct public access to your S3 Bucket and allows it only via the Cloudfront distribution. Unfortunately, if like me you’re using a Custom Origin, you cannot do this via IAM. I’m not sure if there are other options.

Use a Cloudfront Custom Origin not an S3-Origin

Something that I discovered in my initial deployment was that using an S3 Origin for my Cloudfront distribution meant that serving the default document in subfolders of my site would not work and would give the following response:

<Error>
  <Code>NoSuchKey</Code>
  <Message>The specified key does not exist.</Message>
  <Key>subfolder</Key>
  <RequestId>9C47535B63300C86</RequestId>
  <HostId>big-long-string-of-various-characters</HostId>
</Error>

A little bit of Google bashing led me to the discovery that you need to configure the Cloudfront Distribution with a Custom Origin pointing to the public website URL of your S3 Bucket. The Custom Origin URL will be something like {bucketname}.s3-website-{region name}.amazonaws.com.

Route53

Using an A Record of type Alias allows you to resolve the apex of your zone to the Cloudfront Distribution you have configured. Otherwise the A Record would have to point to an IP address, as required by the RFC.
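A sketch of creating the Alias record with the AWS CLI; the hosted zone ID, domain name and Cloudfront domain below are placeholders, while `Z2FDTNDATAQYW2` is the fixed hosted zone ID that AWS uses for all Cloudfront alias targets:

```shell
$ aws route53 change-resource-record-sets \
    --hosted-zone-id ZEXAMPLE12345 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "mynewsite.example.com.",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d1234abcd.cloudfront.net.",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```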

