Part 8: Full Project Setup, Kubernetes Starter
Written August 23rd, 2024 by Nathan Frank
Recap
This article picks up from the seventh article: Helm and Environments in the Kubernetes Starter series.
Don't need the Full Project Setup? Skip to Project Cleanup.
Full setup, hands on
This section focuses less on the conceptual and more on the hands-on steps to set up the entire project.
There are multiple steps because Vault initialization and secret management inherently involve manual aspects that shouldn't be fully automated.
1. Download the codebase
TODO: download the codebase
2. Set hosts entries
Create hosts entries for the following endpoints.

Needed for the initial deploy scripts:

```
127.0.0.1 sample.local
127.0.0.1 api.node.sample.local
127.0.0.1 vault.shared.sample.local
```

Needed for the Helm deploy scripts:

```
127.0.0.1 api.dev.internal.sample.local
127.0.0.1 api.qa.internal.sample.local
127.0.0.1 api.stg.sample.local
127.0.0.1 api.sample.local
```
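If you'd rather not edit the hosts file by hand, a quick way to append all of the entries at once might look like this (a sketch for macOS/Linux; on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):

```bash
# Append all of the sample.local entries to /etc/hosts (requires sudo).
sudo tee -a /etc/hosts <<'EOF'
127.0.0.1 sample.local
127.0.0.1 api.node.sample.local
127.0.0.1 vault.shared.sample.local
127.0.0.1 api.dev.internal.sample.local
127.0.0.1 api.qa.internal.sample.local
127.0.0.1 api.stg.sample.local
127.0.0.1 api.sample.local
EOF
```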
3. Install Rancher Desktop
Install Rancher Desktop
At the time of writing we're using Rancher Desktop 1.15.0 with Kubernetes 1.30.3, configured with moby and Traefik.
4. Optional: Install Node with NVM
Node 20 is recommended, installed through NVM.
Node 20 supports .env files natively without additional packages. The containers that are downloaded and deployed also build the application with Node 20.
This is only needed if you want to run the application locally.
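If you're installing via NVM, the commands are:

```bash
# Install Node 20 through NVM and make it the active version.
nvm install 20
nvm use 20
node --version   # should report v20.x
```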
5. Optional: Install the NPM package dependencies
From the folder application/sample-node-api, install the local package dependencies with `npm install` (yarn or pnpm can be used instead). You may need to run `nvm use 20` first to ensure Node 20 is selected if you have multiple Node versions installed.
After this step, package-lock.json will be modified to track the latest versions that have been installed and will show as a changed file in git.
This is only needed if you want to run the application locally.
6. Optional: Run the application locally
From the folder application/sample-node-api:

- Create the `.env` file: `npm run dev.create-env-file-from-template`
- Edit the `.env` file (application/sample-node-api/.env) and modify any entries that contain TBD to be some value
- Run the local dev codebase: `npm run start.dev`
- Visit http://localhost:3000/config to verify that the application is running
- Shut down the server with Control + C from the terminal running `npm run start.dev`
This is only needed if you want to run the application locally.
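The same verification works from a terminal, if you prefer that over a browser:

```bash
# Hit the config endpoint of the locally running dev server.
curl -s http://localhost:3000/config
```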
7. Build the applications
From the folder _devops, run `build.sh` to run the build scripts for sample-nginx-web (application/sample-nginx-web/_devops/build.sh) and sample-node-api (application/sample-node-api/_devops/build.sh) and create images in your local container image registry.
This needs to be done before deploying, so there are images to deploy, and before running the containers locally.
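To confirm the builds landed in the local registry, you can list the images; the exact image names are an assumption based on the application folder names:

```bash
# List locally built images; expect entries for the two sample applications.
docker images | grep sample
```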
8. Optional: Run the application locally through a container manager like Docker/Rancher Desktop
Make sure the builds of the applications have run first.

From the folder application/sample-node-api/_devops:

- Run `docker-run.sh` to run the application in detached mode
- Run `docker ps` to see the applications that are running
- Verify by visiting http://localhost:30000/config
- Run the script `docker-stop.sh`, which looks up the container by name and attempts to stop and remove it
- Run `docker ps` to see that the application isn't running anymore

Use this to verify that the application in the container stands up correctly and can serve traffic.
This is only needed if you want to run the application locally through a container runtime to verify the application works in containers.
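Between the two helper scripts, a couple of manual spot-checks can confirm the container is healthy (port 30000 comes from the step above):

```bash
docker ps                              # confirm the sample-node-api container is up
curl -s http://localhost:30000/config  # confirm it serves traffic
```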
9. Deploy the applications
From the folder _devops, run `deploy.sh` to deploy all of the applications to the Kubernetes cluster. Some will not start until the proper secrets are updated or the Vault is unsealed.
Note: `deploy.sh` includes calls to `status.sh` to show what has been created.
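If you want to poke at the cluster directly instead of relying on `status.sh`, a rough equivalent (namespace names are taken from later steps) is:

```bash
# Inspect what the deploy created; expect some pods to be pending or crash
# looping until Vault is unsealed and configured in the next steps.
kubectl get pods,svc,ingress -n sample
kubectl get pods -n sample-vault
```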
10. Manually unseal Vault
- Run `kubectl exec -it -n sample-vault sample-vault-0 -- /bin/sh` to connect to the Vault instance
- Initialize Vault by running `vault operator init`. Record the output, which looks like:

```
Unseal Key 1: FPhbSLfMdagIBo0wkUtnRZ/friU9TKvii5kjcrAK0/Dg
Unseal Key 2: OlVinYOcMNW78+t+3wWLrXyP3ospBkTinUYE7LRAtled
Unseal Key 3: wlgKxeOd0IkcqXh9uiaceTeiItnWGEmrajDn/61qzqrn
Unseal Key 4: Dxb7jmCjW4jxq6liVxFOQPm70yxtiL879EBXNSfDPbtE
Unseal Key 5: p9FYcL6z1BHnzXy66lwc7/unLpNMFoi/ly7r2RZ411eJ

Initial Root Token: hvs.7GgNIs4bhczNXqwdbCrC4L0F

Vault initialized with 5 key shares and a key threshold of 3. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 3 of these keys to unseal it
before it can start servicing requests.

Vault does not store the generated root key. Without at least 3 keys to
reconstruct the root key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
```

- Record the values that are shown in a safe place, including the Root Token. Your Root Token will be different after each init.
- Unseal the Vault by running `vault operator unseal` three times, providing a different one of the Unseal Keys (from above) each time it's run
- Run `exit` to disconnect from the Vault instance
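Before exiting, `vault status` gives a quick confirmation that the unseal took:

```bash
# Still inside the Vault pod: after the third unseal command, the status
# output should show "Sealed  false".
vault status
```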
11. Configure the applications
From the folder _devops, run `configure.sh` to be prompted for the configuration details that need to be entered:

- When prompted, enter the Vault Root Token from above, like hvs.7GgNIs4bhczNXqwdbCrC4L0F
- When prompted, enter the host to connect to (in our case hit enter to use vault.shared.sample.local)
- When prompted, enter the port to connect to (in our case hit enter to use 8200)
- When prompted, enter the namespace to create a secret in (in our case hit enter to use sample)
- When prompted, enter a username for the db connection; consider something more unique than username-local-1
- When prompted, enter a password for the db connection; consider something more unique than password-local-1!

Confirm that Vault is configured:

- View http://vault.shared.sample.local, enter the Root Token
- Explore paths like sample-node-api-local (visited again in step 13)
- Run `status.sh` to verify that the pods are ready (1/1) with status Running; if not, periodically run `status.sh` until the crash backoff loop restarts them and they pick up the secret created by the `configure.sh` run.
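A kubectl alternative to polling `status.sh` is to watch the namespace until the pods settle:

```bash
# Watch until READY shows 1/1 and STATUS shows Running; pods stuck in
# CrashLoopBackOff recover on their own once the secret exists.
kubectl get pods -n sample --watch
```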
12. Test sample applications deployed to Kubernetes
Test http://sample.local in a browser; a page with a bunch of links should appear, as written in application/sample-nginx-web/data/index.html and hosted by the nginx container.
Test the node API endpoints in a browser, such as http://api.node.sample.local/config.
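The same checks from a terminal:

```bash
# The nginx landing page and the node API config endpoint.
curl -s http://sample.local/
curl -s http://api.node.sample.local/config
```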
13. Test that env secrets are updated correctly
Go to the API and validate that the existing secret value is there:

- Visit http://api.node.sample.local/config and take note of the secrets and values

Go to Vault and review the existing secret:

- Login to Vault at http://vault.shared.sample.local/, use the root token from earlier

Go to Kubernetes and review the existing secret value:

- Run `kubectl get secret sample-app-node-api-local-db-connection-secret -n sample -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d`

Make a change to the current secret value in Vault:

- Login to Vault at http://vault.shared.sample.local/, use the root token from earlier
- Visit the secret at sample-node-api-local/details
- Click create new version, then the eyeball icon to reveal the existing username
- See that the value is like: B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username2
- Change it (for example, bump the trailing number to make it B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username3) and save; the new value will be saved as a new version

Wait 30 seconds and test that the Kubernetes value is updated:

- Run `kubectl get secret sample-app-node-api-local-db-connection-secret -n sample -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d`
- Notice the secret is updated to the new version, like B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username3 (the Vault Secrets Operator successfully saw the change in Vault and updated the Kubernetes Secret)

Go to the API and validate that the old secret remains:

- Visit http://api.node.sample.local/config and review that the previous value is still there, like B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username2 (it's desired that the old secret remains; a separate step updates the deployment)
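Since the operator sync can take up to about 30 seconds, it can be handy to re-run the decode on a timer rather than by hand:

```bash
# Re-run the decode every 5 seconds until the new version shows up.
watch -n 5 "kubectl get secret sample-app-node-api-local-db-connection-secret \
  -n sample -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d"
```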
14. Restart the container to pick up the newest secret value
Delete the existing pod:

- Run `kubectl get pods -n sample` to see the list of running pods
- See that one is named like sample-app-node-api-deployment-68b4794cdf-wfth7 and that its age is longer than 1m
- Run `kubectl delete pod sample-app-node-api-deployment-68b4794cdf-wfth7 -n sample`, where sample-app-node-api-deployment-68b4794cdf-wfth7 matches the name of the pod to delete
- See a message like: pod "sample-app-node-api-deployment-68b4794cdf-wfth7" deleted

Validate that Kubernetes brings a new pod online:

- Run `kubectl get pods -n sample` to see the list of running pods
- See that there are two pods again; the one starting with sample-app-node-api-deployment- will have an age less than 1m and likely be 0/1 ready (you may see the old pod terminating while a new sample-app-node-api-deployment pod is starting)
- Continue running `kubectl get pods -n sample` until the ready status changes to 1/1 and only two pods remain

Check the config API to validate that the updated secret is now in place:

- Visit http://api.node.sample.local/config and see that the new secret is coming through, like `B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username3`
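An alternative to deleting the pod by name is to restart the deployment and let Kubernetes cycle the pods; the deployment name here is inferred from the pod names above:

```bash
# Restart all pods in the deployment and wait for the rollout to finish.
kubectl rollout restart deployment sample-app-node-api-deployment -n sample
kubectl rollout status deployment sample-app-node-api-deployment -n sample
```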
15. Deploy multiple environments with Helm
From the folder _devops, run `deploy-with-helm.sh`.
Note that the `deploy-with-helm.sh`, `status-with-helm.sh`, and `configure-with-helm.sh` scripts only work on resources that are part of the sample-dev, sample-qa, sample-stg, and sample-prod namespaces.
Why don't I need to configure each environment like the sample environment above?
The shared/vault/_devops/configure.sh helper script (already run) already created all the objects, including the ones for the various environments we need here, along with sample usernames and passwords.
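To confirm what Helm created, list the releases and peek at one of the namespaces:

```bash
# List Helm releases across all namespaces; expect entries for the
# sample-dev, sample-qa, sample-stg, and sample-prod namespaces.
helm list --all-namespaces
kubectl get pods -n sample-dev
```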
16. Test envs created with Helm
Visit these URLs to validate that the envs are available:
Test DEV env

- Make sure the DEV env loads: http://api.dev.internal.sample.local
- Make sure the DEV env shows correct configuration at: http://api.dev.internal.sample.local/config
- Make sure the DEV env loads http://api.dev.internal.sample.local/swagger

Test QA env

- Make sure the QA env loads: http://api.qa.internal.sample.local
- Make sure the QA env shows correct configuration at: http://api.qa.internal.sample.local/config
- Make sure the QA env loads http://api.qa.internal.sample.local/swagger

Test STG env

- Make sure the STG env loads: http://api.stg.sample.local
- Make sure the STG env shows correct configuration at: http://api.stg.sample.local/config
- Make sure the STG env DOES NOT load http://api.stg.sample.local/swagger (because it's configured not to; it should 404)

Test PROD env

- Make sure the PROD env loads: http://api.sample.local
- Make sure the PROD env shows correct configuration at: http://api.sample.local/config
- Make sure the PROD env DOES NOT load http://api.sample.local/swagger (because it's configured not to; it should 404)
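A terminal sweep of the same endpoints (hostnames come from the hosts entries in step 2; /swagger should return 404 in STG and PROD):

```bash
# Check /config in every environment.
for host in api.dev.internal.sample.local api.qa.internal.sample.local \
            api.stg.sample.local api.sample.local; do
  curl -s "http://$host/config"
done

# Swagger is enabled in DEV/QA but should 404 in STG and PROD.
curl -s -o /dev/null -w "%{http_code}\n" http://api.stg.sample.local/swagger
curl -s -o /dev/null -w "%{http_code}\n" http://api.sample.local/swagger
```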
17. Optional: Update the secrets for Helm environments
Repeat steps 13 and 14 for any one of the environments created with Helm. Note that the namespace will need to change from sample to something like sample-dev, and the URLs used to make changes and verify them (e.g. http://api.dev.internal.sample.local/config) will need to be updated as well.
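For example, for the dev environment the decode from step 13 might look like this; the env-specific secret name is hypothetical, so list the secrets first:

```bash
# List secrets in the dev namespace to find the db connection secret's name.
kubectl get secrets -n sample-dev

# Then decode it; <db-connection-secret-name> is a placeholder for whatever
# the listing above shows.
kubectl get secret <db-connection-secret-name> -n sample-dev -o json \
  | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d
```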
Wrap up
This article focused on the tactical setup process for the entire project, rather than the conceptual aspects covered across the series.
Moving along
Continue with the ninth article of the series: Project Cleanup
This series is also available with the accompanying codebase.