# go-gin-react-playground
This project is a simple petshop-style application. It is meant to serve as a personal playground for experimenting with the tech stack and as a framework for future, more serious applications. It consists of a frontend written in React (although I consider myself a mediocre frontend dev) and a backend written in Go. The backend uses some popular Go libraries, such as Gin and GORM. The project is meant to be deployed on Kubernetes and provides a Helm chart to do so, along with a fully featured Gitlab pipeline to help with that. Other features include database schema migration through Liquibase, routing production traffic through Cloudflare, REST API authorization through API tokens stored in Redis, and a rate limiter.
## Build

### Backend

For a local run:

```shell
CGO_ENABLED=0 go build
```

For deployment:

```shell
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build
```
### Frontend

```shell
cd frontend/
npm install
npm run build
```
## Run locally

### Backend

```shell
docker-compose up
./go-gin-react-playground --properties dev/properties.yml

# to clean up:
docker-compose down && rm -rf _docker_compose_volumes/
```
Use the following commands to test the API:
```shell
GOGIN_HOST="http://localhost:5000"

# retrieve users
curl -s $GOGIN_HOST/api/v1/user | jq
curl -s $GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50 | jq

# get access token
TOKEN=$(curl -X POST -H "Content-Type: application/json" -d '{"username":"username","password":"password"}' \
  -s $GOGIN_HOST/api/v1/login | jq -r '.token')

# add user
curl -s -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" \
  -d '{"name":"xxx","creditCards":[{"number":"0000 0000 0000 0000"}]}' $GOGIN_HOST/api/v1/user | jq

# modify user
curl -v -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" \
  -d '{"name":"John Doe","creditCards":[{"number":"1111 1111 1111 1111"}, {"number":"2222 2222 2222 2222"}]}' \
  $GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50

# delete user
curl -v -X DELETE -H "Authorization: Bearer $TOKEN" \
  $GOGIN_HOST/api/v1/user/0098d5b6-5986-4ffe-831f-5c3a59aeef50 | jq

# generate more test data
dev/random_data.sh "$GOGIN_HOST" "username" "password"
```
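The login call above pipes the JSON response through `jq -r '.token'` to capture the token. As a quick offline illustration of that extraction step (the response body here is made up for the example, not the API's exact schema):

```shell
# jq -r prints the raw string value without quotes, suitable for shell variables
echo '{"token":"abc123","expiresIn":3600}' | jq -r '.token'   # prints abc123
```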
### Frontend

```shell
cd frontend/
npm install
npm start
```

Visit http://localhost:8080/ in a browser.
## Deploy on Kubernetes

### Prepare cluster

#### Install nginx ingress controller

Install the nginx ingress controller using Helm. The value of `replicaCount` depends on your needs and can be increased.
```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

kubectl create namespace ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace=ingress-nginx \
  --set controller.service.externalTrafficPolicy="Local" \
  --set controller.replicaCount=1
```
Wait for the controller to become ready:
```shell
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```
In Docker Desktop, the Ingress controller listens on `127.0.0.1`, on ports `80` and `443`. In cloud environments such as Azure, a public IP address is allocated for the controller, which may introduce additional costs. Keep in mind that the Ingress controller runs all the time, whether or not any Ingress resource is actually deployed. If no Ingress resources are deployed, the controller redirects all requests to the `default-backend`, which just returns 404 and a fake certificate.
#### Create a namespace for the project

```shell
kubectl create namespace go-gin
kubectl config set-context --current --namespace=go-gin
```
### Deploy development version on Docker Desktop

Build Docker images locally with the `dev` tag:

```shell
dev/build.sh
```

Start the development environment. The script will upload application secrets to the cluster, start external dependencies such as Redis and Postgres, and feed the database with some test data:

```shell
dev/kubernetes-env/up.sh
```
Deploy:

```shell
deployment/deploy.sh
```

To clean up, simply run:

```shell
deployment/undeploy.sh
dev/kubernetes-env/down.sh
```
### Deploy development version on any cluster

Build Docker images locally with the `dev` tag:

```shell
dev/build.sh
```

Push the Docker images to a registry. Both your workstation and the target cluster must be able to access the specified registry:

```shell
dev/push.sh 192.168.1.100:32000
```

Start the development environment. The script will upload application secrets to the cluster, start external dependencies such as Redis and Postgres, and feed the database with some test data:

```shell
dev/kubernetes-env/up.sh
```
Deploy (remember to specify the full names of the pushed images):

```shell
deployment/deploy.sh \
  --set backend.imageName="192.168.1.100:32000/go-gin-react-playground/backend" \
  --set frontend.imageName="192.168.1.100:32000/go-gin-react-playground/frontend"
```

To clean up, simply run:

```shell
deployment/undeploy.sh
dev/kubernetes-env/down.sh
```
### Deploy production version in the cloud

#### Provide access to Gitlab registry

- Generate a Gitlab personal access token with the `read_registry` scope
- Generate `AUTH_STRING` with `echo -n '<USERNAME>:<ACCESS_TOKEN>' | base64`
- Create a `docker.json` file:
```json
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "<AUTH_STRING>"
    }
  }
}
```

```shell
kubectl create secret generic gitlab-docker-registry --namespace=kube-system \
  --from-file=.dockerconfigjson=./docker.json --type="kubernetes.io/dockerconfigjson"
```
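To make the `AUTH_STRING` step concrete, here is a sketch with entirely made-up credentials (your real Gitlab username and `read_registry` token go in their place):

```shell
# Hypothetical values for illustration only
USERNAME="deploy-bot"
ACCESS_TOKEN="glpat-example123"

# -n is important: a trailing newline would corrupt the auth string
AUTH_STRING=$(echo -n "${USERNAME}:${ACCESS_TOKEN}" | base64)
echo "$AUTH_STRING"
```

Paste the printed value into the `auth` field of `docker.json`.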
#### Create a secret with all the credentials

We assume the app running in the cloud makes use of external services provided as SaaS, such as Amazon RDS or Azure Database for PostgreSQL, and that these services are not part of the Kubernetes cluster itself. In this case we need to explicitly create a secret named `backend-secrets` containing all the confidential properties required to run the app.
```shell
kubectl create secret generic backend-secrets \
  --from-literal=postgres_dsn="<POSTGRES_DSN>" \
  --from-literal=redis_dsn="<REDIS_DSN>" \
  --from-literal=api_username="<API_USERNAME>" \
  --from-literal=api_password="<API_PASSWORD>"
```

(Obviously, replace all the `<placeholders>` with proper values.)
#### Generate HTTPS certificate

Either generate a self-signed cert:

```shell
export DOMAIN="example.com"
openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout key.pem -out cert.pem -subj "/CN=$DOMAIN/O=$DOMAIN"
```

Or use Let's Encrypt to generate a proper one:

```shell
# brew install certbot
export DOMAIN="example.com"
sudo certbot -d "$DOMAIN" --manual --preferred-challenges dns certonly
sudo cp "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" ./cert.pem && sudo chown $USER ./cert.pem
sudo cp "/etc/letsencrypt/live/$DOMAIN/privkey.pem" ./key.pem && sudo chown $USER ./key.pem

# to renew later: sudo certbot renew -q
```

Then just upload it:

```shell
kubectl create secret tls domain-specific-tls-cert --key key.pem --cert cert.pem
```
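Before uploading, it can be worth a quick sanity check that `cert.pem` and `key.pem` actually belong together. This sketch (not part of the project's scripts) compares the public key embedded in the certificate with the one derived from the private key, which works for both RSA and ECDSA keys:

```shell
# Hash both public keys and compare the digests
cert_pub=$(openssl x509 -in cert.pem -noout -pubkey | openssl sha256)
key_pub=$(openssl pkey -in key.pem -pubout 2>/dev/null | openssl sha256)
if [ "$cert_pub" = "$key_pub" ]; then
  echo "cert and key match"
else
  echo "MISMATCH - do not upload"
fi
```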
It's also a good idea to improve security by telling the ingress to send some additional security headers. Run:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: security-headers
  namespace: ingress-nginx
data:
  X-Frame-Options: "DENY"
  X-Content-Type-Options: "nosniff"
  X-XSS-Protection: "0"
  Strict-Transport-Security: "max-age=63072000; includeSubDomains; preload"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  add-headers: "ingress-nginx/security-headers"
EOF
```
#### Deploy

```shell
deployment/deploy.sh \
  --set app.version="v1.0.4" \
  --set backend.imageName="registry.gitlab.com/mkorman/go-gin-react-playground/backend" \
  --set frontend.imageName="registry.gitlab.com/mkorman/go-gin-react-playground/frontend" \
  --set images.pullSecret="kube-system/gitlab-docker-registry" \
  --set ingress.hostname="example.com" \
  --set ingress.stictHostCheck=true \
  --set ingress.useHttps=true \
  --set ingress.tlsCertName="domain-specific-tls-cert"
```

#### Clean up

```shell
deployment/undeploy.sh
```
## Graylog

Graylog can be deployed with:

```shell
deployment/extras/graylog/deploy.sh
```

The Graylog UI runs on port `9000`. Default credentials are `admin`/`admin`. By default, UDP ports `12201` (GELF) and `1514` (syslog) are open.
To configure Graylog to receive messages from the backend and frontend:

- Add two new inputs: `GELF UDP` on port `12201` and `Syslog UDP` on port `1514`
- Add a new pipeline and attach it to the `All messages` stream
- Add two new rules to the pipeline:
Rule 1:

```
rule "parse backend logs"
when
  starts_with(to_string($message.source), "backend-")
then
  let json_tree = parse_json(to_string($message.message));
  let json_fields = select_jsonpath(json_tree, {
    time: "$.time",
    level: "$.level",
    message: "$.message",
    error: "$.error",
    stack: "$.stack",
    status: "$.status",
    method: "$.method",
    path: "$.path",
    ip: "$.ip",
    request_id: "$.request_id",
    latency: "$.latency",
    user_agent: "$.user_agent"
  });

  set_field("timestamp", flex_parse_date(to_string(json_fields.time)));
  set_field("log_level", to_string(json_fields.level));
  set_field("message", to_string(json_fields.message));
  set_field("error", to_string(json_fields.error));
  set_field("stack", to_string(json_fields.stack));
  set_field("status", to_string(json_fields.status));
  set_field("method", to_string(json_fields.method));
  set_field("path", to_string(json_fields.path));
  set_field("ip", to_string(json_fields.ip));
  set_field("request_id", to_string(json_fields.request_id));
  set_field("latency", to_string(json_fields.latency));
  set_field("user_agent", to_string(json_fields.user_agent));

  remove_field("time");
  remove_field("level");
  remove_field("line");
  remove_field("file");
end
```
Rule 2:

```
rule "receive frontend logs"
when
  starts_with(to_string($message.source), "frontend-")
then
end
```
We also need to make the app aware of Graylog by passing additional flags to the `deployment/deploy.sh` script when deploying the app:

```shell
deployment/deploy.sh \
  ...
  --set backend.config.remoteLogging.enabled=true \
  --set frontend.config.remoteLogging.enabled=true
  ...
```

To clean up, run:

```shell
deployment/extras/graylog/undeploy.sh
```
## Prometheus

The application is configured to automatically publish metrics in a format recognized by Prometheus. All you need to do is deploy Prometheus to your cluster:

```shell
deployment/extras/prometheus/deploy.sh
```

It can be cleaned up with:

```shell
deployment/extras/prometheus/undeploy.sh
```
## Grafana

Grafana can be deployed in a similar way:

```shell
deployment/extras/grafana/deploy.sh
```

The Grafana UI runs on port `3000`. Default credentials are `admin`/`admin`. The address of the Prometheus data source would be `http://prometheus.monitoring.svc.cluster.local`.

To clean up Grafana, run:

```shell
deployment/extras/grafana/undeploy.sh
```
## (Random Notes) Deployment on Microsoft Azure

### Provision AKS (Kubernetes cluster)

- Open the Azure console, navigate to `Kubernetes Services` and click `Add -> Add Kubernetes cluster`
- Enter `Kubernetes cluster name` and `Region`, and choose `Availability Zones`
- Specify the number of nodes in the primary pool and their type. For a testing environment, `1` node is enough; for production, specify `3` or more nodes for high availability. `A2_v2` is probably the cheapest node and is more than enough for a testing environment. For production, choose general-purpose nodes like `D2s_v3`.
- In the next tab you can specify more node pools.
- In the `Authentication` tab select `System-assigned managed identity`. ENABLE `Role-based access control (RBAC)` and DISABLE `AKS-managed Azure Active Directory`
- In the `Networking` tab set `Network configuration` to `kubenet` and make sure `Enable HTTP application routing` is DISABLED. Under `Network policy` you may consider choosing `Calico` - it will allow you to create `NetworkPolicy` resources. They WILL NOT WORK if you choose `None`.
- In the `Integrations` tab DISABLE both `Container monitoring` and `Azure Policy`.
- After clicking `Create`, a couple of resources will be provisioned: `Kubernetes Services`, `Virtual Network` (`aks-vnet-*`), `Virtual Machine Scale set` and `Public IP Address` (used for egress traffic from the cluster).
- After the cluster successfully provisions, you'll be able to get its connection details by clicking `Connect`. The procedure consists of installing `azure-cli` and retrieving the cluster credentials through it:
```shell
brew install azure-cli
az login
az account set --subscription <SUBSCRIPTION_ID>
az aks get-credentials --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME>
```
### Provision PostgreSQL instance

- Open the Azure console, navigate to `Azure Database for PostgreSQL servers` and click `New`
- Select `Single server` (it claims to provide 99.99% availability)
- Enter details like `Server Name` and `Location`. Under `Compute + storage` select either the `Basic` tier for a testing environment or `General Purpose` if the application requires `Geo-Redundancy`
- Enter `Admin Username` and generate `Admin Password` with something like `dd if=/dev/urandom bs=48 count=1 | base64`
- Create the database and wait for it to provision
- Copy the DSN from the `Connection String` section. If you need access from the Internet (NOT RECOMMENDED!), add a firewall rule `0.0.0.0-255.255.255.255` under `Connection Security`
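The `dd`-based password suggestion above can be wrapped into a one-liner; 48 random bytes always base64-encode to exactly 64 characters. This is just a convenience sketch, not part of the project's scripts:

```shell
# 48 random bytes -> 64-character base64 password; tr strips base64's line wrapping
ADMIN_PASSWORD=$(dd if=/dev/urandom bs=48 count=1 2>/dev/null | base64 | tr -d '\n')
echo "${#ADMIN_PASSWORD}"   # prints 64
```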
### Provision Redis instance

- Open the Azure console, navigate to `Azure Cache for Redis` and click `New`
- Enter `DNS name` and `Location`. Under `Cache Type` select `Basic C0` for a testing environment or `Standard C1` (or higher) if the application requires high availability.
- In the next section select either `Public Endpoint` (NOT RECOMMENDED!) or `Private Endpoint`. In the case of a private endpoint, you'll need to create it in the virtual network created for the AKS cluster (`aks-vnet-*`).
- Create the instance and wait for it to provision. The address and password will pop up when you click `Keys: Show access keys...`
### Integrate Kubernetes cluster with Gitlab

Integration with Gitlab provides basic pod monitoring on the project's page and automatic configuration of `kubectl` in deployment CI jobs. Full instructions are available at https://docs.gitlab.com/ee/user/project/clusters/add_remove_clusters.html