[Image: particle field around a black hole with a rocket ship icon]

Part 8: Full Project Setup, Kubernetes Starter

Written August 23rd, 2024 by Nathan Frank

[Image: particle field around a black hole with a checklist icon. Photo by BoliviaInteligente on Unsplash]

Recap

This article picks up from the seventh article: Helm and Environments in the Kubernetes Starter series.

Don't need the Full Project Setup? Skip to Project Cleanup.

Full setup hands on

This section focuses less on the conceptual and more on the hands-on steps to set up the entire project.

There are multiple steps because Vault and secret management inherently involve manual aspects that shouldn't be fully automated.

1. Download the codebase

TODO: download the codebase

2. Set hosts entries

Create hosts entries for the following endpoints:

Needed for the initial deploy scripts:

127.0.0.1   sample.local
127.0.0.1   api.node.sample.local
127.0.0.1   vault.shared.sample.local

Needed for the Helm deploy scripts:

127.0.0.1   api.dev.internal.sample.local
127.0.0.1   api.qa.internal.sample.local
127.0.0.1   api.stg.sample.local
127.0.0.1   api.sample.local
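
On macOS and Linux these entries live in /etc/hosts (on Windows, C:\Windows\System32\drivers\etc\hosts). If you prefer the command line, a quick way to append them all at once on macOS/Linux:

    # append all seven entries to /etc/hosts (run once)
    printf '%s\n' \
      '127.0.0.1   sample.local' \
      '127.0.0.1   api.node.sample.local' \
      '127.0.0.1   vault.shared.sample.local' \
      '127.0.0.1   api.dev.internal.sample.local' \
      '127.0.0.1   api.qa.internal.sample.local' \
      '127.0.0.1   api.stg.sample.local' \
      '127.0.0.1   api.sample.local' | sudo tee -a /etc/hosts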

3. Install Rancher Desktop

Install Rancher Desktop

At the time of writing we're using Rancher Desktop 1.15.0 with Kubernetes 1.30.3, configured with moby and Traefik.

4. Optional: Install Node with NVM

Node 20 is recommended, installed through NVM.

Node 20 contains support for .env files natively without additional packages. Building the application will leverage Node 20 in the containers that are downloaded and deployed.
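
If you use NVM, installing and activating Node 20 looks like:

    # install Node 20, switch to it, and confirm the version
    nvm install 20
    nvm use 20
    node --version   # should print v20.x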

This is only needed if you want to run the application locally.

5. Optional: Install the NPM package dependencies

From the folder: application/sample-node-api

Install the local package dependencies with npm install (yarn or pnpm can be used instead). You may need to run nvm use 20 first to ensure Node version 20 is selected if you have multiple Node versions installed.

After this step, package-lock.json will be modified to track the latest versions that were installed and will show up as a changed file in git.
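
If you don't intend to commit that change, it can be discarded like any other unstaged edit:

    # discard the package-lock.json change (from the repo root)
    git checkout -- application/sample-node-api/package-lock.json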

This is only needed if you want to run the application locally.

6. Optional: Run the application locally

From the folder application/sample-node-api:

  1. Create the .env file: npm run dev.create-env-file-from-template

  2. Edit the .env file: application/sample-node-api/.env and modify any entries that contain TBD to be some value

  3. Run the local dev codebase: npm run start.dev

  4. Visit http://localhost:3000/config to verify that the application is running

  5. Shut down the server with Control + C from the terminal running npm run start.dev
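
Condensed into a single terminal session, the flow above looks like:

    cd application/sample-node-api
    npm run dev.create-env-file-from-template
    # edit .env and replace any TBD entries, then:
    npm run start.dev
    # in a second terminal, verify it responds:
    curl http://localhost:3000/config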

This is only needed if you want to run the application locally.

7. Build the applications

From the _devops folder, run build.sh to run the build scripts for sample-nginx-web (application/sample-nginx-web/_devops/build.sh) and sample-node-api (application/sample-node-api/_devops/build.sh), creating images in your local container image registry.

This needs to be done before deploying (so there are images to deploy) or before running the containers locally.
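
Once the build finishes, you can confirm the images landed in the local registry (the exact image names come from the build scripts, so treat the grep filter as an assumption):

    # list locally built images; assumes the names contain "sample"
    docker images | grep sample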

8. Optional: Run the application locally through a container manager like Docker/Rancher Desktop

Make sure the application builds have run first.

  1. From the folder application/sample-node-api/_devops, run docker-run.sh to run the application in detached mode.

  2. Run docker ps to see the applications that are running.

  3. Verify by visiting http://localhost:30000/config

  4. Run the script docker-stop.sh, which looks up the container by name and attempts to stop and remove it (a sketch of this pattern appears after this list).

  5. Run docker ps to see that the application isn't running anymore

Use this to verify that the application in the container stands up correctly and can serve traffic.

This is only needed if you want to run the application locally through a container run time to verify the application works in containers.
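
A minimal sketch of the stop-by-name pattern docker-stop.sh follows, assuming the container name contains sample-node-api (the actual script and name filter may differ):

    # look up a running container by name, then stop and remove it
    CONTAINER_ID=$(docker ps -q --filter "name=sample-node-api")
    if [ -n "$CONTAINER_ID" ]; then
      docker stop "$CONTAINER_ID"
      docker rm "$CONTAINER_ID"
    fi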

9. Deploy the applications

From the _devops folder, run deploy.sh to deploy all of the applications to the Kubernetes cluster. Some will not start until the proper secrets are updated or the Vault is unsealed.

Note: deploy.sh includes calls to status.sh to show what has been created
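
You can also check on the rollout directly with kubectl; sample and sample-vault are the namespaces used in the following steps:

    # watch the application and Vault pods come up
    kubectl get pods -n sample
    kubectl get pods -n sample-vault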

10. Manually unseal Vault

  1. Run kubectl exec -it -n sample-vault sample-vault-0 -- /bin/sh to connect to the Vault instance

  2. Initialize Vault by running vault operator init. Record the sample output like:

    Unseal Key 1: FPhbSLfMdagIBo0wkUtnRZ/friU9TKvii5kjcrAK0/Dg
    Unseal Key 2: OlVinYOcMNW78+t+3wWLrXyP3ospBkTinUYE7LRAtled
    Unseal Key 3: wlgKxeOd0IkcqXh9uiaceTeiItnWGEmrajDn/61qzqrn
    Unseal Key 4: Dxb7jmCjW4jxq6liVxFOQPm70yxtiL879EBXNSfDPbtE
    Unseal Key 5: p9FYcL6z1BHnzXy66lwc7/unLpNMFoi/ly7r2RZ411eJ

    Initial Root Token: hvs.7GgNIs4bhczNXqwdbCrC4L0F

    Vault initialized with 5 key shares and a key threshold of 3. Please securely
    distribute the key shares printed above. When the Vault is re-sealed,
    restarted, or stopped, you must supply at least 3 of these keys to unseal it
    before it can start servicing requests.

    Vault does not store the generated root key. Without at least 3 keys to
    reconstruct the root key, Vault will remain permanently sealed!

    It is possible to generate new unseal keys, provided you have a quorum of
    existing unseal keys shares. See "vault operator rekey" for more information.

    Record the values that are shown in a safe place, including the Root Token. Your Root Token will be different after each init.

  3. Unseal the Vault by running vault operator unseal three times, providing a different one of the Unseal Keys (from above) each time it's run.

  4. Run exit to disconnect from the Vault instance
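
Condensed, the whole unseal exchange looks like this (the vault commands are run inside the pod):

    kubectl exec -it -n sample-vault sample-vault-0 -- /bin/sh
    # inside the pod:
    vault operator init      # record the unseal keys and root token somewhere safe
    vault operator unseal    # run three times, pasting a different unseal key each time
    exit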

11. Configure the applications

  1. From the _devops folder, run configure.sh; you'll be prompted for the configuration details to enter.

  2. When prompted enter the Vault Root Token from above like: hvs.7GgNIs4bhczNXqwdbCrC4L0F

  3. When prompted enter the host to connect to (in our case hit enter to use vault.shared.sample.local)

  4. When prompted enter the port to connect to (in our case hit enter to use 8200)

  5. When prompted enter the namespace to create a secret in (in our case hit enter to use sample)

  6. When prompted enter a username for the db connection, consider something more unique than username-local-1

  7. When prompted enter a password for the db connection, consider something more unique than password-local-1!

  8. Confirm that Vault is configured:

    1. View: http://vault.shared.sample.local, enter the Root Token

    2. Explore paths like sample-node-api-local (the same secret visited in step 13 below)

  9. Run status.sh to verify that the pods are ready 1/1 with status Running; if not, periodically run status.sh until the crash backoff loop restarts them and they pick up the secret created by the configure.sh run.
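
To double-check from the command line that the secret objects were created in Kubernetes as well:

    # list secrets in the sample namespace; the db-connection secret used in
    # later steps should appear here
    kubectl get secrets -n sample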

12. Test sample applications deployed to Kubernetes

  1. Test http://sample.local in a browser; a page with a bunch of links should appear, as written in application/sample-nginx-web/data/index.html and hosted by the nginx container

  2. Test the node API endpoints in a browser, for example http://api.node.sample.local/config
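
The same checks from the command line:

    # the nginx index page and the node API config endpoint
    curl -s http://sample.local | head
    curl -s http://api.node.sample.local/config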

13. Test that env secrets are updated correctly

  1. Go to the API and validate that the existing secret value is there

  2. Go to Vault and review the existing secret

  3. Go to Kubernetes and review the existing secret value

    • Run kubectl get secret sample-app-node-api-local-db-connection-secret -n sample -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d

  4. Make a change to the current secret value in Vault

    1. Log in to Vault at http://vault.shared.sample.local/ using the root token from earlier

    2. Visit the secret at sample-node-api-local/details

    3. Click Create new version, then the eyeball icon to reveal the existing username

    4. See that the value looks like: B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username2

    5. Change the value (for example, increment the trailing number so it ends in -username3), then save; the new value will be stored as the latest version.

  5. Wait 30 seconds and test that the Kubernetes value is updated

    1. Run kubectl get secret sample-app-node-api-local-db-connection-secret -n sample -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d

    2. Notice the secret is updated to the new version like B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username3

    (Vault Secret Operator successfully saw the change in Vault and updated the Kubernetes Secret)

  6. Go to the API and validate that the old secret remains

    1. Visit http://api.node.sample.local/config and review that the previous value is there like: B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username2

    (it's expected that the old secret remains; a separate step is needed for the deployment to pick up the new value)
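
If you'd rather poll for the updated Kubernetes secret (step 5 above) than wait, a small loop over the same kubectl command works:

    # print the secret value every 5 seconds until the new version appears
    while true; do
      kubectl get secret sample-app-node-api-local-db-connection-secret -n sample \
        -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d
      echo
      sleep 5
    done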

14. Restart the container to pick up the newest secret value

  1. Delete the pod that is existing

    1. Run kubectl get pods -n sample to see the list of running pods

    2. See that one is named like sample-app-node-api-deployment-68b4794cdf-wfth7 and that its age is longer than 1m

    3. Run kubectl delete pod sample-app-node-api-deployment-68b4794cdf-wfth7 -n sample where sample-app-node-api-deployment-68b4794cdf-wfth7 matches the name of the pod to delete.

    4. See a message like: pod "sample-app-node-api-deployment-68b4794cdf-wfth7" deleted

  2. Validate that Kubernetes brings a new pod online

    1. Run kubectl get pods -n sample to see the list of running pods

    2. See that there are two pods again; the one starting with sample-app-node-api-deployment- will have an age under 1m and will likely be 0/1 ready (you may see the old pod terminating while the new sample-app-node-api-deployment pod is starting)

    3. Continue running kubectl get pods -n sample until the ready status changes to 1/1 and we only see two pods.

  3. Check the config API to validate that the updated secret is now in place

    1. Visit http://api.node.sample.local/config and see that the new secret is coming through, like: B9%jM4DdQnb!=LT1RHdgMhz7T7Q!HDzF-node-local-username3
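
An alternative to deleting pods by hand is to restart the deployment itself. The deployment name below is inferred from the pod name in the steps above, so treat it as an assumption:

    # restart all pods in the deployment and wait for the rollout to finish
    kubectl rollout restart deployment sample-app-node-api-deployment -n sample
    kubectl rollout status deployment sample-app-node-api-deployment -n sample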

15. Deploy multiple environments with Helm

From the _devops folder, run deploy-with-helm.sh.

Note that the deploy-with-helm.sh, status-with-helm.sh, configure-with-helm.sh scripts only work on resources that are part of the sample-dev, sample-qa, sample-stg, and sample-prod namespaces.

Why don't I need to configure each environment like the sample environment above?

The shared/vault/_devops/configure.sh helper script (already run) already creates all the objects, including ones for the various environments we need here, with sample usernames and passwords.
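
To see what Helm released (release names depend on the charts, so output will vary):

    # list Helm releases across all namespaces
    helm list --all-namespaces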

16. Test envs created with Helm

Visit these URLs to validate that the envs are available:

  1. Test DEV env

    1. Make sure the DEV env loads: http://api.dev.internal.sample.local

    2. Make sure the DEV env shows correct configuration at: http://api.dev.internal.sample.local/config

    3. Make sure the DEV env loads http://api.dev.internal.sample.local/swagger

  2. Test QA env

    1. Make sure the QA env loads: http://api.qa.internal.sample.local

    2. Make sure the QA env shows correct configuration at: http://api.qa.internal.sample.local/config

    3. Make sure the QA env loads http://api.qa.internal.sample.local/swagger

  3. Test STG env

    1. Make sure the STG env loads: http://api.stg.sample.local

    2. Make sure the STG env shows correct configuration at: http://api.stg.sample.local/config

    3. Make sure the STG env DOES NOT load http://api.stg.sample.local/swagger (because it's configured not to; it should 404)

  4. Test PROD env

    1. Make sure the PROD env loads: http://api.sample.local

    2. Make sure the PROD env shows correct configuration at: http://api.sample.local/config

    3. Make sure the PROD env DOES NOT load http://api.sample.local/swagger (because it's configured not to; it should 404)
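
A quick scripted pass over all four environments (hosts from the step 2 entries; the STG and PROD /swagger endpoints should return 404):

    # check /config on each environment host and print the HTTP status
    for host in api.dev.internal.sample.local api.qa.internal.sample.local \
                api.stg.sample.local api.sample.local; do
      curl -s -o /dev/null -w "$host/config -> %{http_code}\n" "http://$host/config"
    done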

17. Optional: Update the secrets for Helm environments

Repeat steps 13 and 14 for any one of the environments created with Helm. Note that the namespace will need to change from sample to something like sample-dev, and the URLs used to verify will need to change as well (e.g., http://api.dev.internal.sample.local/config).
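
For example, for DEV the kubectl check from step 13 might become the following; the secret name here is an assumption (it may carry dev where the local one carried local):

    # read the DEV db username; the secret name is a guess based on the local one
    kubectl get secret sample-app-node-api-dev-db-connection-secret -n sample-dev \
      -o json | jq -r '.data.PRIVATE_NODE_API_DB_USERNAME' | base64 -d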

Wrap up

We've focused here on the tactical setup process for everything built across the series, without revisiting the conceptual aspects.

Moving along

Continue with the ninth article of the series: Project Cleanup

This series is also available with the accompanying codebase.