Infrastructure as Code
To host your website you will need a server. Setting up and configuring a server can take quite some time. In future you may also forget how exactly your server was originally set up, so you might struggle to replicate the setup.
To make your server setup automated and reproducible we will employ a technique known as "Infrastructure as Code". This technique stores your server setup as configuration files and automated scripts in your Git repository.
Hosting Provider
A hosting provider is a third party that runs your server. For the hosting infrastructure, you have several options.
Self-hosting in your own premises
If you have your own hardware, you can self-host your website in your home or office. The advantage of this is that you have full control of what kind of hardware is running your website and you have physical access to your server(s).
I host the website Instant Workstation in my own home. In the case of Instant Workstation, self-hosting at home is justified since specialized hardware is needed to run the more specialized virtual machines, for example ARM and RISC-V ones. Additionally, self-hosting can be significantly cheaper if your website requires a lot of computing power and RAM to run. Storage space at home is also generally cheaper than in the cloud.
This is what the Instant Workstation server hardware at home looks like:
Self-hosting also comes with several disadvantages. There is an upfront cost for the hardware, whereas server rental in the cloud only requires monthly payments and usually no upfront costs. Furthermore self-hosting at home is a lot less convenient than hosting in the cloud since there is more administration needed.
You may need to get permission from your ISP to host a website in your home. Many ISPs forbid this by default in their home internet contracts. You may also want to ask your ISP for a static IP address. Whilst you can get away with a dynamic IP address if you use Dynamic DNS, a static IP address offers a more stable and reliable experience for your users. Every time your dynamic IP address changes, your website might appear down for your users since there can be a delay of up to several hours until DNS changes propagate throughout the internet.
Co-location
If you want to control what hardware your website is hosted on but don't want the hassle of having the hardware in your home or office, then co-location could be an option. With co-location, you rent some space in a data center where you can place and run your hardware. Co-location can be quite expensive and is generally not suitable for running a small website.
Shared hosting
Shared hosting is the cheapest way to host a small website. With shared hosting, several other websites share the same server as yours. You do not get root access to your server, so you cannot make configuration changes or installations that require root access.
Shared hosting may be sufficient for simpler websites; for full-fledged web apps, however, it might not be the ideal choice.
VPS
A VPS (Virtual Private Server) is a virtual machine. Several virtual machines may run on the same physical machine. A VPS gives you full control of your server, including root access. Renting a VPS is generally the cheapest way to host a website that requires root access to the host machine, i.e. where shared hosting is not feasible.
Renting a VPS is the recommended hosting method in this guide. Epic Fantasy Forge is hosted on a VPS.
Dedicated Server
A dedicated server is a physical machine. You naturally get root access to the machine and generally full control except physical access. A dedicated server can come with more computing power than a VPS, however this comes at the cost of convenience. Provisioning a VPS is generally simpler and faster than getting a dedicated server. Backups can also be more tricky with dedicated servers compared to virtual private servers.
Unless your website requires a lot of computing power, it is recommended to get a VPS instead of a dedicated server.
Terraform
The Infrastructure as Code software tool recommended by this guide is Terraform. The tool allows you to write scripts to automatically provision and configure your server.
Installation
To install Terraform, run the below command:
sudo dnf install terraform
Hetzner
To use Terraform, you need somewhere to deploy your server. The hosting provider recommended by this guide is Hetzner. Create an account with Hetzner.
Once you have an account with Hetzner go to your "Cloud" dashboard. You can access it by for example going to the Hetzner main page, opening the "Login" accordion and clicking on "Cloud":
Once in the "Cloud" dashboard click on "New project":
Give your new project a name. Then click on "Add project":
Your new project should now show on the dashboard. Click on your new project:
API Token
An API token is like a combined username and password. It can be used to authenticate to third-party services. Later in this guide we will automate provisioning of your server infrastructure. In order to do this you will need an API token so Terraform can authenticate itself to Hetzner in order to provision and manage your infrastructure.
To generate an API token in Hetzner, click on the "Security" icon on the left sidebar:
Select the "API tokens" tab and click on "Generate API token":
Now give your token a description, for example "CI" since the token will be mainly used by your CI. For permissions, select "Read & Write". Then press "Generate API token":
You should now have an API token. To reveal it, press "Click to show":
Click "Copy" to copy the API token to your clipboard.
Warning
The API token is like a combined username and password to Hetzner. Whoever knows your API token is able to create and delete servers with your Hetzner account. If someone gains access to your Hetzner token and creates servers with your account, then your payment method will be billed.
Keep your API token secret!
Once you close the dialog in the above screenshot you will no longer be able to access the API token value. You should store the API token value in a safe place, such as a password manager. Personally I use the password manager KeePassXC.
In addition to storing a personal copy of your Hetzner API token in your password manager, you should also create another CI variable in GitLab. The CI will later need access to your Hetzner API token once the provisioning of your server has been automated.
Add a variable in GitLab to hold your Hetzner API token. Make sure to select the "Masked and hidden" option for the "Visibility" as you don't want the token to be printed in plaintext in CI job logs. For further information on how to set a CI variable in GitLab, see the CI Variables section in this guide.
SSH Keys
In order to be able to log into your Hetzner server without a password, you need to add your public SSH keys to Hetzner to use public key authentication.
To do so, click on the "Security" (key) icon on the left sidebar from your Hetzner Cloud dashboard, select the "SSH keys" tab and click on "Add SSH key":
Copy and paste the contents of your public key (usually located at ~/.ssh/id_rsa.pub) into the "SSH key" field. Give the key a name and click "Add SSH key". For more details about SSH keys see the section SSH Key Creation in this guide.
You have now added your development machine's public SSH key. However, your development machine is not the only machine that will log into your Hetzner server; your CI needs access too. Later in this guide, when we automate deployments to your server, your CI will need to be able to log into your Hetzner server. In theory you could let your CI use the same SSH keypair as your development machine, but it might be better to create separate keys for enhanced security.
To generate an SSH keypair for your CI, follow the instructions in the SSH Key Creation section of this guide. However this time give the generated keypair a custom name, for example "ci_id_rsa" for the private key and "ci_id_rsa.pub" for the public key. For example:
Now upload your CI's public key (ci_id_rsa.pub) to Hetzner the same way you already did with your development machine's public key above.
You should now have two SSH keys in Hetzner:
Now add your CI's private key as a CI variable in GitLab. Unfortunately GitLab does not support masking CI variables in job logs that contain certain characters. If you attempt to add a CI variable that contains the forbidden characters and set the visibility to "Masked and hidden" you may get the following error:
To avoid this problem the private key can be encoded to base64 and stored as a base64 value in GitLab. To encode your CI's private key to base64 run the below command:
cat ci_id_rsa | base64 > ci_id_rsa_base64
A base64 version of your private key should now be stored in the file ci_id_rsa_base64:
Copy the contents of this file and use it to populate the "Value" field when creating a new CI variable. For more information on how to create CI variables see the section CI Variables in this guide.
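Before pasting the value into GitLab, you can sanity-check that the encoding round-trips correctly. A minimal check, using a throwaway file in place of your real private key:

```shell
# Use a throwaway file so nothing here touches your real CI key.
tmp=$(mktemp -d)
printf -- '-----BEGIN FAKE KEY-----\nabc123\n-----END FAKE KEY-----\n' > "$tmp/ci_id_rsa"

# Encode the same way as before, then decode and compare with the original.
base64 "$tmp/ci_id_rsa" > "$tmp/ci_id_rsa_base64"
base64 -d "$tmp/ci_id_rsa_base64" > "$tmp/decoded"
diff "$tmp/ci_id_rsa" "$tmp/decoded" && echo "round-trip OK"
```

If diff prints nothing and "round-trip OK" appears, the base64 value you store in GitLab can be decoded back to the exact original key.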
IP Addresses
We will need at least two IP addresses, one for the test environment and one for the production environment. In theory both environments could be hosted by the same server using the same IP address, however for complete isolation between test and production environments we will host the test environment on a completely different server using a completely different IP address.
In this guide we will reserve both IPv4 and IPv6 addresses. In most cases using only IPv4 is sufficient however for future-proofing it might be a good idea to use IPv6 also.
To reserve an IP address on Hetzner, go to your Hetzner Cloud dashboard, open the sidebar on the left and select "Servers":
Select the "Primary IPs" tab and click on "Create Primary IP":
Name your IP address, e.g. in this case "Test - IPv4" since it is the IP address the test environment will use. Select a location for your IP address and select "IPv4" for the "Protocol". Note that when creating a server later, you can only assign it an IP address from the same location, so pick the location where you would like your server to be located.
Now repeat the above step for the production environment. You can leave all the fields the same except the "Name", which you might assign the value "Production - IPv4".
Now repeat both of the above steps but this time using the "IPv6" protocol and adjusting the "Name" of the IP addresses accordingly, e.g. "Test - IPv6".
Finally, you should have reserved four IP addresses:
Tip
To protect against DDoS attacks, we use the Cloudflare proxy. However for this protection to be effective, you must keep your server's real IP addresses secret. More information can be found in the DDoS protection and Custom Domain sections of this guide.
We will need those IP addresses in the CI later. Create a new CI variable with the key "TEST_ENVIRONMENT_IP" and set its value to your test environment server's IP address:
Now do the same for the production environment's IP address using the key "PRODUCTION_ENVIRONMENT_IP". Finally you should have two new CI variables to store the IP addresses of our environments:
Cloudflare Tunnel
Instead of opening our firewall to make our web server reachable, we will instead use Cloudflare tunnel. One of the advantages of using Cloudflare tunnel is that we don't need to install any TLS certificates on our web server. On the web server we will use regular HTTP on port 80. The Cloudflare tunnel automatically encrypts all traffic between Cloudflare and the web server.
We will start by creating a Cloudflare tunnel for our test environment. To create a Cloudflare tunnel, go to your Cloudflare dashboard and choose "Access" from the left sidebar. Then click on "Launch Zero Trust":
On the next screen, select your account:
On the left sidebar, open the "Networks" accordion and select "Tunnels". Click on "+ Create a tunnel":
Click on "Select Cloudflared":
Enter a name for your tunnel and click "Save tunnel":
Select "Docker" on the "Choose your environment" section. Copy the Docker command shown below and temporarily save it to a secure location (e.g. your password manager). You will need the copied value in a later section of this guide. Click "Next":
For the "Subdomain" enter "test" and choose your domain name for the "Domain" field. We will make our test environment accessible from test.epicfantasyforge.com. Leave the "Path" field blank. In the service section choose "HTTP" for the "Type" and enter "host.docker.internal:80" for the URL. Now click "Save tunnel":
Now repeat the above steps for the production environment. Leave the "Subdomain" field blank this time. Finally you should have two Cloudflare tunnels, one for the test environment and one for the production environment:
For the production environment tunnel, we need to add an additional public hostname for the subdomain "www". To do so, click on the tunnel, and click on "Edit":
Switch to the "Public Hostname" tab and click on "+ Add a public hostname":
Enter "www" in the "Subdomain" field and select your domain name for the "Domain" field. Leave the "Path" field blank. In the "Service" section, select "HTTP" for the "Type" and enter "host.docker.internal:80" for the "URL". Then click "Save hostname":
You should now have two public hostnames for your production environment:
State
Terraform saves the state of your current infrastructure after provisioning your infrastructure or making changes to it. This state must be stored somewhere. By default, Terraform stores the state locally on the machine that Terraform is executed on. Later in this guide we will run Terraform in a CI environment where it is not feasible to locally store the state since the CI environment is wiped after every run. More information about state in Terraform can be found in the State article in the Terraform documentation.
This guide recommends storing the Terraform state in HCP Terraform Cloud. Create an account on the HCP Terraform Cloud platform.
Once you have an account, create an organization. Fill in the "Organization name" and your "Email address":
You should now be prompted to create a new workspace inside your organization. Click on the "API-Driven Workflow":
Enter a "Workspace Name", e.g. "Epic-Fantasy-Forge-Test", and click "Create":
Create another workspace for your production environment. Name it for example "Epic-Fantasy-Forge-Production". Finally you should have two workspaces:
API Token
You should now have an organization and two workspaces. Next we will need to generate an API token so your CI can log into the HCP Terraform Cloud. To do this click on your profile icon and select "Account settings":
On the left sidebar, click on "Tokens":
Click on "Create an API token":
Give your new token a description and set the token expiry. On Epic Fantasy Forge I set "No expiration" for convenience. You will need to decide if you prefer more security or more convenience. Click on "Generate token":
You should now have an HCP Terraform Cloud token. Copy the generated token and save it in a secure place, e.g. in a password manager.
Now add your HCP Terraform Cloud token as a CI variable in GitLab. For more information on how to create CI variables see the section CI Variables in this guide. Make sure to select "Masked and hidden" for the "Visibility" field since your API token is a secret and should not be visible in CI job logs. Set the "Key" field to exactly "TF_TOKEN_app_terraform_io". The Terraform tool used in the CI will later automatically use the API token stored in the environment variable with that exact name.
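The name is not arbitrary: Terraform looks up credentials in environment variables of the form TF_TOKEN_ followed by the backend hostname, with the hostname's periods replaced by underscores. A quick sketch of the mapping for app.terraform.io:

```shell
# Derive the variable name Terraform expects for a given hostname:
# prefix with TF_TOKEN_ and turn periods into underscores.
host="app.terraform.io"
var_name="TF_TOKEN_$(printf '%s' "$host" | tr '.' '_')"
echo "$var_name"   # → TF_TOKEN_app_terraform_io
```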
Configuration
The full Terraform configuration used by Epic Fantasy Forge can be found in main.tf, user_data.yml and variables.tf in the Epic Fantasy Forge repository. For more detailed instructions on how to configure Terraform, see the below instructions.
To use Terraform, start by creating a directory named "iac", standing for "Infrastructure as Code", in the root directory of your Git repository:
mkdir iac
Inside the newly created "iac" directory, create a file named "variables.tf" and populate it with the below content:
variable "hcloud_token" {
description = "Hetzner Cloud API Token"
sensitive = true
type = string
}
variable "ipv4" {
description = "IPv4 Address"
type = string
}
variable "ipv6" {
description = "IPv6 Address"
type = string
}
variable "server_name" {
description = "Server Name"
type = string
}
variable "server_type" {
description = "Server Type"
type = string
}
variable "firewall_name" {
description = "Firewall Name"
type = string
}
Create another file inside the "iac" directory named "test.tfvars" and populate it with the below content:
ipv4 = "Test - IPv4"
ipv6 = "Test - IPv6"
server_name = "Test"
server_type = "cx22"
firewall_name = "Test"
Create another file inside the "iac" directory named "production.tfvars" and populate it with the below content:
ipv4 = "Production - IPv4"
ipv6 = "Production - IPv6"
server_name = "Production"
server_type = "cx22"
firewall_name = "Production"
Create another file inside the "iac" directory named "main.tf".
HCP Terraform Cloud
To configure Terraform to use HCP Terraform Cloud as the backend where the state is stored, add the below to the main.tf configuration file. Replace the "organization" value with the organization you created in HCP Terraform Cloud earlier in this guide; the workspace will be selected later via the "TF_WORKSPACE" environment variable.
terraform {
cloud {
organization = "Epic-Fantasy-Forge"
}
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = "~> 1.45"
}
}
}
There is no need to configure the HCP Terraform Cloud API token since Terraform will automatically take it from the environment variable "TF_TOKEN_app_terraform_io".
Hetzner
Configure Terraform to take the Hetzner API token from a variable. We will later inject this variable as an environment variable.
provider "hcloud" {
token = var.hcloud_token
}
SSH Keys
Configure Terraform to use the SSH Keys we uploaded to Hetzner earlier in this guide:
data "hcloud_ssh_key" "ci" {
name = "CI"
}
data "hcloud_ssh_key" "desktop" {
name = "Desktop"
}
IP Addresses
Configure Terraform to use the IP addresses we reserved in Hetzner earlier in this guide:
data "hcloud_primary_ip" "ipv4" {
name = var.ipv4
}
data "hcloud_primary_ip" "ipv6" {
name = var.ipv6
}
Firewall
To secure our server, we can configure a firewall in Hetzner. We will open the SSH port and allow pings. We don't need to open the HTTP and HTTPS ports since we will use Cloudflare tunnel to make our web server reachable.
resource "hcloud_firewall" "firewall" {
name = var.firewall_name
rule {
description = "Ping"
direction = "in"
protocol = "icmp"
source_ips = [
"0.0.0.0/0",
"::/0"
]
}
rule {
description = "SSH"
direction = "in"
protocol = "tcp"
port = "22"
source_ips = [
"0.0.0.0/0",
"::/0"
]
}
}
Servers
Add the below configuration to provision a server on Hetzner:
resource "hcloud_server" "server" {
name = var.server_name
server_type = var.server_type
location = "hel1"
image = "fedora-41"
ssh_keys = [data.hcloud_ssh_key.ci.id, data.hcloud_ssh_key.desktop.id]
firewall_ids = [hcloud_firewall.firewall.id]
user_data = file("user_data.yml")
public_net {
ipv4 = data.hcloud_primary_ip.ipv4.id
ipv6 = data.hcloud_primary_ip.ipv6.id
}
}
The server type "cx22" is the cheapest VPS Hetzner offers. It comes with a shared virtual CPU with 2 CPU cores, 4GB of RAM and 40GB of NVMe SSD storage space. It costs less than €5 per month. Since we will rent two of these the total cost will be a little less than €10 per month. Such a small VPS is sufficient for low traffic websites.
It is up to you what values to set for "server_type", "location" and "image". The Epic Fantasy Forge servers are located in Helsinki, Finland. To keep the development environment and server environment as close as possible to each other, the image "fedora-41" was chosen for Epic Fantasy Forge since Fedora Linux is the recommended development environment operating system in this guide.
To simplify our "Infrastructure as Code" process, we will not store any important data on these servers. All user data will be stored in a database hosted by a third-party. In this way our servers can be very easily deleted and re-created since we don't need to worry about backups or restore backups upon re-provisioning.
In the above configuration we also added a "user_data" configuration which will setup the servers once they are provisioned. This configuration file does not exist yet but we will create it in the next section of this guide.
Server Initialization
To initialize your servers after creation, create a file named "user_data.yml" in your "iac" directory. Add the below as the first line of the file:
#cloud-config
Users
To add users to your servers after provisioning, add the below content to the "user_data.yml" file:
users:
- name: henrik
sudo: ALL=(ALL) NOPASSWD:ALL
shell: /bin/bash
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIaQHMfBWMzSvRgrTz5q5o8OyZZD30nDsbtob3o8mIEIh8m/ZWbQ+aA6FVb8j4bcoUiVTD5Kl/0DtJpRV5KaK8T3cMLmsJ4uyziQRgXyx5GF3L2kINtnuxEqB1147P0GD+G+TdQSG2yV4ZXjRdig89zcJL1taXfSnyiqEc0cysNqtg3aHqY4FzNYBHSIcaqRDpH/XE2TAN1fneZs3/r7MT1TA2cYsNe6VHspp9qQksLPvEJi5j+7kkCWefRnRMheR7z3R45EeWQ0OuueM8ZhxRIRC97tf70A6kqeG/PbjdWRoCXZac5FdjqayGujHBrnCxAs765pXtqrzXKkFIbv9fTLV9KU0Wgnw6CrAxEHv6rwPQYlwYrmL8UTE4oVC3m78GK9OSkk3+MAoeYpY4eGtnHP7dLl2U7oRje750NJ4TSjK+ggDdMfxDU2lpMjHw4c3eT1MqAsrPeVGBHVCxOQ/myFXiyJ/LOcNg8MFU8WbCGGRoiA/GfaVnDy+Ov+M87i0=
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXthiB/Im+NnHAmCYDQ1neJWAp0lwzq93E/l1m10auCTWRbkE/KA2n+/bwh4G+h5vSDy/aw8jqH7pfVRMseKP8YvNn7teifNGJKSzV5h4fvIZPlpGpg4AaMFDw4M9tzqrS/NFvlSyX6ajciet5TEHqSqNG9v58CHM+wC+MZoolP3iKmkgRUHd4LSCnsSqER5LSDgYWSHtqoVYgisuzH68K9jH39+cHxuC1vMHGUWHq5otioNjNefxig16rsQgrN5mNuJ51ERjLjoxHMB5AhrViOp7zKeezFH5XXxyIrbV+WpmScqLfSje4/b6ZU4zER0tn3cd5iIOHGWZJFcvmJJlboRQKcM3aUVbxyTa4v0Zls9tMm1I3S9n2DZ/E+QimY4VPQYjWNhgPckntx+ACy5bt3pU71AqbgqNVK7kf/e2TYFu8opiBj/7mAuvQoGZfbLcWhSG7XTg3S92h5SU6ayVRvr3wDdku7aU1GGOsnN/oyvyKkssuDiD+KAJO+yiyaE8=
- name: deployer
sudo: ALL=(ALL) NOPASSWD:ALL
groups: docker
shell: /bin/bash
ssh_authorized_keys:
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIaQHMfBWMzSvRgrTz5q5o8OyZZD30nDsbtob3o8mIEIh8m/ZWbQ+aA6FVb8j4bcoUiVTD5Kl/0DtJpRV5KaK8T3cMLmsJ4uyziQRgXyx5GF3L2kINtnuxEqB1147P0GD+G+TdQSG2yV4ZXjRdig89zcJL1taXfSnyiqEc0cysNqtg3aHqY4FzNYBHSIcaqRDpH/XE2TAN1fneZs3/r7MT1TA2cYsNe6VHspp9qQksLPvEJi5j+7kkCWefRnRMheR7z3R45EeWQ0OuueM8ZhxRIRC97tf70A6kqeG/PbjdWRoCXZac5FdjqayGujHBrnCxAs765pXtqrzXKkFIbv9fTLV9KU0Wgnw6CrAxEHv6rwPQYlwYrmL8UTE4oVC3m78GK9OSkk3+MAoeYpY4eGtnHP7dLl2U7oRje750NJ4TSjK+ggDdMfxDU2lpMjHw4c3eT1MqAsrPeVGBHVCxOQ/myFXiyJ/LOcNg8MFU8WbCGGRoiA/GfaVnDy+Ov+M87i0=
- ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXthiB/Im+NnHAmCYDQ1neJWAp0lwzq93E/l1m10auCTWRbkE/KA2n+/bwh4G+h5vSDy/aw8jqH7pfVRMseKP8YvNn7teifNGJKSzV5h4fvIZPlpGpg4AaMFDw4M9tzqrS/NFvlSyX6ajciet5TEHqSqNG9v58CHM+wC+MZoolP3iKmkgRUHd4LSCnsSqER5LSDgYWSHtqoVYgisuzH68K9jH39+cHxuC1vMHGUWHq5otioNjNefxig16rsQgrN5mNuJ51ERjLjoxHMB5AhrViOp7zKeezFH5XXxyIrbV+WpmScqLfSje4/b6ZU4zER0tn3cd5iIOHGWZJFcvmJJlboRQKcM3aUVbxyTa4v0Zls9tMm1I3S9n2DZ/E+QimY4VPQYjWNhgPckntx+ACy5bt3pU71AqbgqNVK7kf/e2TYFu8opiBj/7mAuvQoGZfbLcWhSG7XTg3S92h5SU6ayVRvr3wDdku7aU1GGOsnN/oyvyKkssuDiD+KAJO+yiyaE8=
In the above configuration we create one general user for administrative purposes, named "henrik". Additionally we create a user for the CI, named "deployer". For both users, replace the "ssh_authorized_keys" above with your own public keys, for example the public key from your development machine and your CI's public key. In the SSH Keys section above we created a separate SSH keypair for the CI. If you followed this guide, you can find your development machine's public SSH key in the file "~/.ssh/id_rsa.pub" and your CI's public key in the file "~/.ssh/ci_id_rsa.pub".
Warning
Don't use the public keys provided in this example, or any other example, to populate the field "ssh_authorized_keys". Anyone who holds the private key matching a public key listed in "ssh_authorized_keys" is able to log into your server.
Packages
To pre-install some packages and automatically update any out-of-date packages, add the below configuration to "user_data.yml":
package_update: true
package_upgrade: true
packages:
- curl
- docker
- git
- neovim
- pip
- postgresql
- python
In a later section of this guide we will need Docker, so we pre-install it onto your servers. In future we may also need to modify some configuration on the servers or inspect some log files. For this purpose we will use the CLI editor Neovim, that is why we also pre-install it with the above configuration.
Commands
After provisioning, we want to run some commands on our servers to pre-configure them. To do this add the below content to "user_data.yml":
runcmd:
- sed -i -E 's/#?PermitRootLogin prohibit-password/PermitRootLogin no/' /etc/ssh/sshd_config
- sed -i -E 's/#?PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
- systemctl enable docker
- systemctl start docker
- systemctl restart sshd
The above sed commands modify your SSH configuration file to prohibit root login. Additionally password authentication is disabled. This means you can only log into your servers by using public key authentication.
Furthermore we enable and start the Docker service and additionally restart the SSH daemon (for the configuration modifications to take effect).
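You can check the effect of the two sed expressions locally by running them against the relevant lines of a stock sshd_config (just the two lines of interest here, assuming the usual commented Fedora defaults, not the full file):

```shell
# The two stock lines that the runcmd substitutions target.
sample='#PermitRootLogin prohibit-password
#PasswordAuthentication yes'

# Apply the same substitutions as in user_data.yml.
printf '%s\n' "$sample" \
  | sed -E 's/#?PermitRootLogin prohibit-password/PermitRootLogin no/' \
  | sed -E 's/#?PasswordAuthentication yes/PasswordAuthentication no/'
```

Both lines should come out uncommented and hardened: "PermitRootLogin no" and "PasswordAuthentication no". The optional "#?" in the pattern means the substitution works whether or not the line was commented out.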
Execution
Your Terraform configuration is now ready. It can be executed manually on your development machine as well as automatically in your CI. To check that the configuration works, let's first run it manually.
Whilst running Terraform some temporary files will be generated. To avoid committing them to your Git repository by accident, add a file named ".gitignore" in your "iac" directory and populate it with the following content:
.terraform
Local
Environment Variables
For Terraform to use the HCP Terraform Cloud to store the state, you need to set the environment variable "TF_TOKEN_app_terraform_io" with your HCP Terraform Cloud token. We should not use the usual way of setting environment variables since otherwise your secret token might be visible in your shell's command history. Instead, we read in the HCP Terraform Cloud token as below:
read -s -p "Enter HCP Terraform Cloud Token:" TF_TOKEN_app_terraform_io
On the prompt enter your HCP Terraform Cloud token. Then export the new variable to make it available not just to this shell but also to all subprocesses launched from it:
export TF_TOKEN_app_terraform_io
We also need to store the Hetzner API token in an environment variable. We can follow a similar process as we already did for the HCP Terraform Cloud token above:
read -s -p "Enter Hetzner API Token:" HETZNER_API_TOKEN
export HETZNER_API_TOKEN
To switch between your test and production environments, we set the environment variable "TF_WORKSPACE". To let Terraform work on your test environment, set the environment variable to "Epic-Fantasy-Forge-Test":
export TF_WORKSPACE="Epic-Fantasy-Forge-Test"
Initialization & Planning
Now that we have set the necessary environment variables, we can initialize Terraform:
terraform init
Next we can let Terraform plan the infrastructure changes. If Terraform detects a potential problem it will inform us now before any changes are actually made.
terraform plan -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=test.tfvars
If everything goes well Terraform should inform you now what changes to the infrastructure it will make if you apply the plan.
Apply Plan
To apply the plan Terraform generated run the below command:
terraform apply -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=test.tfvars
When prompted for confirmation to apply the plan type "yes" and hit the Enter key. Now let's also provision the production environment:
export TF_WORKSPACE="Epic-Fantasy-Forge-Production"
terraform plan -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=production.tfvars
terraform apply -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=production.tfvars
After some delay Terraform should now have created the two servers in Hetzner:
The IP addresses we reserved earlier should now also be assigned to the two new servers:
Terraform should have also created Firewalls in Hetzner with the rules we defined in the configuration file above. The firewalls should be applied to the two new servers:
SSH
You should now be able to SSH into your test and production environment servers:
ssh <username>@<IP address>
Replace <username> with a user you configured in the Users section above, and replace <IP address> with your server's IP address.
You should not be prompted for a password since you earlier in this guide added your development machine's public key to the SSH authorized keys list in the Terraform configuration.
If this is the first time logging into your new servers you will be prompted to confirm whether you really want to connect. Type "yes" on this prompt and hit the Enter key.
Tip
Despite creating the DNS records earlier in the DNS Records section above, we cannot SSH into our servers using the real domains; e.g. "ssh <username>@epicfantasyforge.com" and "ssh <username>@test.epicfantasyforge.com" both fail.
The reason for this is that we set the "Proxy status" to "Proxied" for the DNS records we created earlier. By default, Cloudflare only proxies requests on certain ports, e.g. the web ports 80 and 443, but not other ports that end users don't need, such as the SSH port 22. This is a security feature, so the general public can't attempt to SSH into your servers. Only people who know your server's IP addresses can SSH in, i.e. ideally only yourself.
For more information about which ports are proxied by default, see the Network ports page in the Cloudflare documentation.
For convenience, so you don't have to remember your server's IP addresses, you can create an alias for those IP addresses in your system's host file. To do this open the hosts file with sudo rights, for example with Neovim:
sudo nvim /etc/hosts
Then add the following lines to create easy to remember aliases for those IP addresses:
<IP address> production
<IP address> test
Replace <IP address> with your servers' actual IP addresses. Now you should be able to SSH into your servers with those convenient aliases:
Warning
Don't set your alias to your actual domain name. Otherwise the HTTP and HTTPS requests from your development machine to your website would also go directly to your server rather than via Cloudflare.
Whilst normally this is fine, it could lead to a situation where your website is unreachable/down for end users due to a DNS misconfiguration or a Cloudflare issue, and you wouldn't even know since your website works fine when accessed from your development machine.
CI
To connect the Cloudflare tunnels to our test and production servers, we need to store the Cloudflare tunnel tokens as CI variables. Create a new CI variable with the key "CLOUDFLARE_TUNNEL_TEST" and set the value to the token from the Docker command you copied earlier for the test environment in the Cloudflare Tunnel section above. Do the same for the production environment using the key "CLOUDFLARE_TUNNEL_PRODUCTION" instead:
Finally you should have two additional CI variables to store the Cloudflare tunnel tokens:
To automate the provisioning of your infrastructure, add new stages to your CI pipeline in .gitlab-ci.yml:
stages:
- provision-test-environment
- provision-production-environment
include:
- local: "ci/provision-production-environment.yml"
- local: "ci/provision-test-environment.yml"
Add the new file "provision-test-environment.yml" in the "ci" directory and populate it with the below content:
provision-test-environment:
before_script:
- cat $CI_PRIVATE_KEY | base64 -d > ~/ci_private_key
- chmod og= ~/ci_private_key
- export CI_PRIVATE_KEY=~/ci_private_key
- export TF_WORKSPACE="Epic-Fantasy-Forge-Test"
- apk update && apk add openssh-client
image:
name: hashicorp/terraform:latest
entrypoint: [""]
rules:
- if: $CI_COMMIT_BRANCH == "main"
script:
- PLAN_RESULT=0
- cd iac
- terraform init -input=false
- terraform plan -input=false -detailed-exitcode -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=test.tfvars || PLAN_RESULT=$?
- if [ $PLAN_RESULT == 2 ]; then APPLY_INFRASTRUCTURE_CHANGES="YES"; elif [ $PLAN_RESULT == 1 ]; then exit 1; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then terraform apply -input=false -auto-approve -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=test.tfvars; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then sleep 600; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then ssh -i $CI_PRIVATE_KEY -o StrictHostKeyChecking=no deployer@$TEST_ENVIRONMENT_IP "docker run -d --add-host host.docker.internal:host-gateway cloudflare/cloudflared:latest tunnel --no-autoupdate run --token $CLOUDFLARE_TUNNEL_TEST"; fi
stage: provision-test-environment
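The key handling in the before_script can be exercised locally to confirm the decode and permission steps behave as expected. A sketch using a throwaway key and temp directory (CI_PRIVATE_KEY here is just a local shell variable standing in for the real GitLab variable):

```shell
# Fake key and temp dir stand in for the real CI variable and home directory.
tmp=$(mktemp -d)
printf 'FAKE PRIVATE KEY\n' > "$tmp/ci_id_rsa"
CI_PRIVATE_KEY=$(base64 "$tmp/ci_id_rsa")

# Mirror the before_script: decode the variable into a file,
# then strip all group/other permissions so SSH accepts the key file.
printf '%s\n' "$CI_PRIVATE_KEY" | base64 -d > "$tmp/ci_private_key"
chmod og= "$tmp/ci_private_key"

diff "$tmp/ci_id_rsa" "$tmp/ci_private_key" && stat -c %a "$tmp/ci_private_key"
```

If the decode is correct, diff prints nothing and the file mode comes out as 600, which is what the SSH client requires for a private key file.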
Add the new file "provision-production-environment.yml" in the "ci" directory and populate it with the below content:
provision-production-environment:
before_script:
- cat $CI_PRIVATE_KEY | base64 -d > ~/ci_private_key
- chmod og= ~/ci_private_key
- export CI_PRIVATE_KEY=~/ci_private_key
- export TF_WORKSPACE="Epic-Fantasy-Forge-Production"
- apk update && apk add openssh-client
image:
name: hashicorp/terraform:latest
entrypoint: [""]
rules:
- if: $RELEASE == "Web"
script:
- PLAN_RESULT=0
- cd iac
- terraform init -input=false
- terraform plan -input=false -detailed-exitcode -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=production.tfvars || PLAN_RESULT=$?
- if [ $PLAN_RESULT == 2 ]; then APPLY_INFRASTRUCTURE_CHANGES="YES"; elif [ $PLAN_RESULT == 1 ]; then exit 1; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then terraform apply -input=false -auto-approve -var="hcloud_token=$HETZNER_API_TOKEN" -var-file=production.tfvars; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then sleep 600; fi
- if [ "$APPLY_INFRASTRUCTURE_CHANGES" == "YES" ]; then ssh -i $CI_PRIVATE_KEY -o StrictHostKeyChecking=no deployer@$PRODUCTION_ENVIRONMENT_IP "docker run -d --add-host host.docker.internal:host-gateway cloudflare/cloudflared:latest tunnel --no-autoupdate run --token $CLOUDFLARE_TUNNEL_PRODUCTION"; fi
stage: provision-production-environment
The above CI jobs only apply the Terraform plan if it is really needed, for example if your infrastructure configuration has changed. If any infrastructure changes are necessary, this CI stage waits 10 minutes before completing. This allows for some grace time before proceeding to the next CI stage, since there may be some delay after Terraform completes until the "user_data.yml" script completes.
Whilst long delays and sleeps should normally be avoided in CI pipelines, an exception is made here for stability/reliability reasons. If the 10 minute sleep were executed on every CI job run it would be a problem, however since it is only executed when infrastructure changes are necessary, it is something we can live with. Usually this CI stage will complete in a matter of seconds; only rarely will it execute the 10 minute sleep.
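The gating works because of terraform plan's documented -detailed-exitcode convention: exit 0 means the infrastructure is up to date, 1 means the plan failed, and 2 means changes are pending. The decision logic can be sketched and tested in isolation, with placeholder commands standing in for the real terraform call:

```shell
# Decide whether to apply, based on the plan's exit code
# (0 = up to date, 1 = error, 2 = changes pending).
decide() {
  "$@"
  case $? in
    0) echo "skip apply" ;;
    2) echo "apply" ;;
    *) echo "plan failed"; return 1 ;;
  esac
}

decide true                # stand-in for a plan with no changes → "skip apply"
decide sh -c 'exit 2'      # stand-in for a plan with pending changes → "apply"
```

This is why the CI script captures the plan's exit code with "|| PLAN_RESULT=$?" instead of letting a non-zero code fail the job outright: exit code 2 is a success case that should trigger the apply.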
For the provisioning of the test environment, the CI stage is only executed on changes to the main branch, not on development branches. For the provisioning of the production environment, the CI stage is only executed on web release jobs.