At Simiotics, we’ve started using AWS Lightsail instances to deploy prototypes, proofs of concept, and services that don’t need to scale even in production. It’s a convenient way to set up a server and we are happy with the pricing — the smallest servers run at $3.50 a month (not counting network egress).
We host our code on GitHub and use GitHub Actions for our CI/CD needs.
The only real frustration that we’ve had with Lightsail so far is that there isn’t a clear story about how to deploy from a CI/CD pipeline into a Lightsail instance.
It looks like AWS actively develops GitHub Actions, so we may have an official mechanism down the road. For now, we hope this quick guide can help anyone else with the same need.
There are other guides online to setting up automatic deployments into a VPS, but they recommend setting up a git remote on the VPS with a post-receive hook: when you push your code to the VPS remote, the hook runs the deployment steps. Several guides walk through that setup if you would like to take that approach.
Since we already use GitHub Actions for the rest of our deployments, we wanted to avoid setting up snowflake deployment processes like this one.
Our approach is basically to SSH onto production servers and update them as part of our GitHub Action. There are some perhaps surprising subtleties in this process, and we will discuss them in the course of our checklist:
- Commit an update script to your codebase.
The script should execute all of the operations you perform in the course of a deployment.
Example: Here is our update script for Thumbsup. We have hard-coded paths as a matter of expedience, but those can also be parametrized using environment variables.
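As an illustration, an update script for a systemd-managed service might look like the following sketch. The paths, service name, and variable names here are placeholders, not our actual Thumbsup configuration:

```shell
#!/usr/bin/env bash
# Hypothetical update script. APP_DIR and SERVICE_NAME are placeholders,
# expected to come from the server's environment file.
set -euo pipefail

update() {
    # Pull the latest code and restart the service.
    cd "${APP_DIR:?APP_DIR must be set}"
    git fetch origin
    git reset --hard origin/master
    sudo systemctl restart "${SERVICE_NAME:?SERVICE_NAME must be set}"
}

# Run only when explicitly requested (set RUN_UPDATE=1 on the server).
if [[ "${RUN_UPDATE:-}" == "1" ]]; then
    update
fi
```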
- Set up an environment file on your server.
This file provides the server-specific values for the environment variables in your update script.
Example: Our deployment updates systemd services. These services take an EnvironmentFile parameter, and we specify the values of our environment variables in those files.
For a simpler setup, you can simply source your environment file in your update script to get access to those values.
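For instance, sourcing an environment file could look like the following sketch. The file path, variable names, and values are hypothetical:

```shell
# Sketch: create a throwaway environment file, then source it the way an
# update script would. On a real server this file would live at a fixed
# path such as /etc/<service>.env.
ENV_FILE="$(mktemp)"
cat > "$ENV_FILE" <<'EOF'
APP_DIR=/home/deploy/thumbsup
SERVICE_NAME=thumbsup
EOF

# In the update script:
source "$ENV_FILE"
echo "Updating $SERVICE_NAME in $APP_DIR"
```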
- Generate an SSH key for your deployments
It is very important not to use the keys your VPS provides you when your instance is created. For example, AWS Lightsail uses the same key across all Lightsail instances by default. Using the same key as part of your deployment pipeline is a huge security risk.
You can use ssh-keygen to generate an SSH key. This generates a public and private key pair (most likely in your ~/.ssh directory; the public key has a .pub extension).
Add the public key as a new line in the authorized_keys file on your server so that you can use the new key for login.
Linode has published a great guide to this process: Use Public Key Authentication with SSH
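The key generation and authorization steps might look like this sketch. The key type, filename, and comment are arbitrary choices, and the temporary directory only makes the example self-contained; on a real machine you would write into ~/.ssh:

```shell
# Generate a dedicated ed25519 deployment key pair with no passphrase.
KEY_DIR="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N "" -C "deployment-key" -f "$KEY_DIR/deploy_key"

# On the server, append the public key to authorized_keys so the new key
# can be used for login.
cat "$KEY_DIR/deploy_key.pub" >> "$KEY_DIR/authorized_keys"
chmod 600 "$KEY_DIR/authorized_keys"
```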
- Encode your private key as a base64 string and register it as a secret on your CI/CD host
The base64 encoding makes it easier to use your private key in shell scripts. If you open up the private key file, you will see that there are line breaks. If you used the raw contents, the line breaks would terminate shell commands.
To encode the file on a Linux system, run:
$ base64 -w0 <filename>
The -w0 flag disables line wrapping. On macOS, base64 does not accept -w, but its output is unwrapped by default; base64 -i <filename> works there.
If you are using GitHub Actions, you will add this encoded private key as a secret on either your repository or your GitHub organization. Other CI/CD providers have similar mechanisms – e.g. GitLab CI, Travis CI, CircleCI.
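Here is a round-trip sketch: encode a dummy private key, then decode it the way a CI job would. DEPLOY_SSH_KEY stands in for the stored secret:

```shell
# Write a fake private key so the sketch is self-contained.
KEY_FILE="$(mktemp)"
printf '%s\n' \
    '-----BEGIN OPENSSH PRIVATE KEY-----' \
    'not-a-real-key' \
    '-----END OPENSSH PRIVATE KEY-----' > "$KEY_FILE"

# Encode to a single base64 line (-w0 disables wrapping; GNU coreutils).
DEPLOY_SSH_KEY="$(base64 -w0 "$KEY_FILE")"

# In the CI job: reconstruct the key file from the secret and lock down
# its permissions so ssh will accept it.
DECODED_KEY="$(mktemp)"
printf '%s' "$DEPLOY_SSH_KEY" | base64 -d > "$DECODED_KEY"
chmod 600 "$DECODED_KEY"
```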
- Write a deployment pipeline which executes your update script over SSH
Now that you have completed the other steps, your deployment pipeline has access to an SSH key which allows it to authenticate to your production server.
This makes your deployment configuration simple: it first delivers the latest version of the code to the production server, and then executes the update script with a line like
ssh -i <private key> <user>@<production server> <execute update script>
Example: Deployment configuration for Thumbsup.
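In a pipeline step, those two operations might be wrapped in a small shell function like this sketch. The host, user, and paths are hypothetical:

```shell
# Sketch of a deployment step: ship the code, then run the update script
# over SSH. All names here are placeholders.
deploy() {
    local key="$1" target="$2"   # e.g. deploy_key deploy@prod.example.com
    # Deliver the latest version of the code to the production server...
    scp -i "$key" -r . "$target:/home/deploy/app"
    # ...then execute the update script remotely.
    ssh -i "$key" "$target" 'bash /home/deploy/app/deploy/update.sh'
}
```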
This approach scales remarkably well to multiple deployment targets. If you are deploying the same code to multiple servers, we recommend replacing the ssh invocation with Ansible.
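One way to do that is Ansible's script module, which copies a local script to each host in an inventory and runs it there. The inventory and key paths below are assumptions:

```shell
# Sketch: run the update script on every host in an inventory file,
# instead of looping over ssh invocations. Paths are placeholders.
run_update_everywhere() {
    ansible all \
        -i inventory.ini \
        --private-key deploy_key \
        -m script \
        -a "deploy/update.sh"
}
```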