I made a debt calculator web app to dive into cloud computing

This year, I wanted to make a point of learning a couple of new skills, and probably the most important one I chose was learning how to work in the cloud. Google Cloud Platform (GCP) has been steadily gaining market share in the cloud computing industry for the last several years, so much so that today it isn’t uncommon to hear of large, Fortune 500 companies entrusting GCP with their most sensitive operational data and processes.

Bearing all of that in mind, I decided that I would focus primarily on learning how to use GCP this year, and it’s been a really interesting project. I even decided to register for their Associate Cloud Engineer exam and will be taking it next month.

I’m quite familiar with Dash, a Python framework that makes it easy to build responsive, elegant web apps, so I figured a good first GCP project would be to build an app and deploy it to a custom domain using a service like GCP’s Cloud Run. So I did! The app is a debt calculator that lets a user plug in the debts they owe and visualize the payoff timeline for their situation.

In this post, I’ll walk you through how I deployed a debt payoff calculator web app hosted at indentured.services (pun absolutely, 100% intended). You can also take a look at the app’s code on my GitHub.

Deployment

Since I want to focus on deployment here, I won’t get into the details of how the app itself works (maybe in another post later). Still, here is a high-level overview of how the app is organized:

├── cloudbuild.yaml
├── Dockerfile
├── README.md
├── requirements.txt
└── source
    ├── __init__.py
    ├── app.py
    ├── assets
    │   ├── custom.css
    │   ├── favicon.ico
    │   ├── fontawesome
    │   └── table-styles.css
    ├── base.py
    ├── components
    │   ├── __init__.py
    │   └── callbacks.py
    └── utils
        ├── __init__.py
        ├── constants.py
        └── helpers.py

The code in source tells the app what components to create, how to organize them, and what to do with all of the various inputs and outputs scattered throughout. In this post, we’ll be focusing on the files outside of source: in particular, the Dockerfile, cloudbuild.yaml, and requirements.txt.

Prerequisites

My goal was to deploy this app on a domain name I had purchased for it a while back, indentured.services. I wanted to make pushing changes to the app as simple as possible, so ideally the deployment would respond to updates to the main branch of the app’s repository on GitHub. Before I could begin, there were a few things I needed:

  • A GCP project with billing enabled
  • The gcloud CLI installed
  • Verification of my ownership of indentured.services via the Google Search Console

These were all simple enough to sort out, especially since GCP gives new users a free 90-day trial with $300 of credit to play around with. To verify ownership of the domain, I just needed to add a DNS TXT record containing a unique string provided by Google in my DNS provider’s control panel. It was pretty painless.
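For the CLI setup itself, the commands look roughly like this. Treat it as a sketch rather than a transcript: the project ID below is a placeholder, and gcloud domains verify just hands you off to the same Search Console flow described above.

# Authenticate and point gcloud at the right project (project ID is a placeholder)
gcloud auth login
gcloud config set project my-gcp-project-id

# Kicks off the Search Console ownership-verification flow for the domain
gcloud domains verify indentured.services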

The Dockerfile

Cloud Run seemed like the best option for my use case since it is designed for containerized deployments like this one. It also has a free tier that includes a considerable amount of compute and memory before you’re charged anything, so barring quickly gaining a bazillion users, I shouldn’t really have to pay for the deployment beyond the domain name. (I still set a budget, though, just in case!)

The Dockerfile specifies the steps taken to build the container image that the app runs in. The ELI5 is that it’s a set of instructions for building a tiny virtual computer whose entire purpose is to run the app, and only the app.

Base Image

I wanted to keep the image as small as possible, and so I decided to build the container from the python:3.12-slim image, which gets me Python 3.12 nicely wrapped up in a minimal Linux distribution:

FROM python:3.12-slim

Staying up to date and tidy, installing a C compiler

Next, I refresh the container’s package index so that anything I install comes from the most recent package lists. I also need to install gcc, which lets pip compile Python packages that have C extensions, like numpy.

Once gcc is installed, I won’t be installing any additional Linux packages, so I can keep the image small by deleting the cached package lists at /var/lib/apt/lists/. All of this is done with:

RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

Staying secure

It’s widely considered a bit stupid to run everything as root, so I create a non-privileged user account that will actually do the work of running the application. This improves the container’s security and ensures that the app gets only the bare minimum permissions required to run.

RUN useradd -m appuser

Environment setup

Next, I prepare the environment by installing the app’s dependencies. I set the working directory to /app, copy in the requirements.txt file, and pip install everything the app needs:

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

Copy source code and give non-root ownership

At this point, the build instructions are just about finished. The app’s source code is copied into the container:

COPY source/ ./source/
RUN mkdir -p ./assets
COPY source/assets/ ./assets/

Then I give the non-root user created a couple of steps back ownership of everything in /app and switch to that user’s context:

RUN chown -R appuser:appuser /app
USER appuser

Expose the port that the app listens on and start everything up!

All that’s left in the build is to expose port 8080, the default port for Cloud Run services, and start the app with gunicorn:

EXPOSE 8080
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 source.app:server

In this case, a single worker is enough for the container’s CPU and memory allocation. I use 8 threads to handle simultaneous requests and disable the worker timeout, since request timeouts are handled by Cloud Run.
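Before involving GCP at all, it’s worth sanity-checking that the image builds and serves locally. A quick sketch of what that looks like; note that PORT has to be passed by hand here, since there’s no Cloud Run around to inject it:

# Build the image from the repository root
docker build -t indentured-services .

# Run it locally, supplying the PORT variable that Cloud Run would normally inject
docker run --rm -p 8080:8080 -e PORT=8080 indentured-services

# The app should now be reachable at http://localhost:8080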

cloudbuild.yaml

So at this point, I have a reliable way to build a container image that runs the app. I want to keep adding features in the future, so I need an equally reliable CI/CD pipeline that redeploys the app (i.e., reruns the build) as soon as I push changes to the main branch of its repository on GitHub. The cloudbuild.yaml file is one piece of what makes this work smoothly.

The cloudbuild.yaml file is a set of instructions for Cloud Build that builds and deploys a container image:

steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/indentured-services", "."]

  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/indentured-services"]

  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "indentured-services"
      - "--image=gcr.io/$PROJECT_ID/indentured-services"
      - "--region=us-central1"
      - "--platform=managed"
      - "--allow-unauthenticated"
      - "--memory=1Gi"
      - "--cpu=1"
      - "--timeout=300"
      - "--max-instances=10"
      - "--min-instances=1"
      - "--port=8080"
      - "--set-env-vars=GAE_ENV=standard"

images:
  - "gcr.io/$PROJECT_ID/indentured-services"

options:
  logging: CLOUD_LOGGING_ONLY

So what does all of this mean? In the first step, I’m telling Cloud Build to build the container image:

- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/indentured-services", "."]

The gcr.io/cloud-builders/docker bit refers to Google’s premade container image, which already has the Docker CLI installed. This step runs docker build -t gcr.io/$PROJECT_ID/indentured-services . from that premade container, building my app’s image from the current working directory (the root of the repository) and tagging it gcr.io/$PROJECT_ID/indentured-services, where $PROJECT_ID is the ID of my GCP project. Cloud Build knows how to build the container I want because it can see the Dockerfile from the previous section.

In the next step, I push the container image to GCP’s registry:

- name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/$PROJECT_ID/indentured-services"]

So from the same premade container with the Docker CLI, this tells Cloud Build to run docker push gcr.io/$PROJECT_ID/indentured-services, which uploads my app’s container image to the Google Container Registry so that Cloud Run can pull it in the deployment step.

The final step is the actual deployment:

- name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
  entrypoint: gcloud
  args:
    - "run"
    - "deploy"
    - "indentured-services"
    - "--image=gcr.io/$PROJECT_ID/indentured-services"
    - "--region=us-central1"
    - "--platform=managed"
    - "--allow-unauthenticated"
    - "--memory=1Gi"
    - "--cpu=1"
    - "--timeout=300"
    - "--max-instances=10"
    - "--min-instances=1"
    - "--port=8080"
    - "--set-env-vars=GAE_ENV=standard"

This runs the deployment command from Google’s official container with the gcloud CLI installed. It tells GCP to deploy the freshly built container in a specific region, to allocate 1 vCPU and 1 GB of memory, and to allow unauthenticated access to the app, among other things. Once this step runs, the app is available for anyone to access through the mapped domain.
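The deploy step doesn’t create that domain mapping itself, by the way. Pointing indentured.services at the Cloud Run service is a one-time step outside of cloudbuild.yaml, and it looks roughly like the command below (domain mappings were still on the beta track in the gcloud version I used, so consider this a sketch):

# One-time mapping of the custom domain to the Cloud Run service
gcloud beta run domain-mappings create \
  --service indentured-services \
  --domain indentured.services \
  --region us-central1

# gcloud then prints the DNS records to add at the registrar to finish the mapping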

The final lines declare what image the build produces and specify that logging should happen only in Google Cloud Logging, which saves me from setting up a Cloud Storage bucket just to hold the build logs:

images:
  - "gcr.io/$PROJECT_ID/indentured-services"

options:
  logging: CLOUD_LOGGING_ONLY
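Before adding any automation, the whole pipeline can be exercised by hand from the root of the repository. Something along these lines, assuming the Cloud Build and Cloud Run APIs are already enabled on the project:

# Run the steps in cloudbuild.yaml once, manually
gcloud builds submit --config cloudbuild.yaml .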

Cloud Build Trigger

To complete the CI/CD pipeline, I created a Cloud Build trigger linked to the app’s GitHub repository. After connecting the repository and authenticating with my GitHub account, I specified that the trigger should fire whenever there is a push to the main branch. With the trigger in place, the CI/CD pipeline is fully automated, and pushing enhancements and bug fixes to the app is as simple as a git push.
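I set the trigger up through the console, but roughly the same thing can be done from the CLI once the repository is connected. A sketch, with the trigger name, repository owner, and repository name below as placeholders:

# Create a trigger that reruns cloudbuild.yaml on every push to main
gcloud builds triggers create github \
  --name="deploy-indentured-services" \
  --repo-owner="MY_GITHUB_USERNAME" \
  --repo-name="MY_REPO_NAME" \
  --branch-pattern="^main$" \
  --build-config="cloudbuild.yaml"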

Conclusion

Deploying this app on GCP has been a really interesting learning experience. I’ve gotten my hands dirty with several key cloud concepts:

  • Building containerized applications with Docker (a bit more straightforward than expected)
  • Setting up automated CI/CD pipelines with Cloud Build
  • Configuring serverless deployments with Cloud Run
  • Managing custom domains in cloud environments

What started as an “I should probably learn cloud stuff this year” goal has turned into a fully functional web app with a professional deployment pipeline.

I’m also thinking about how to expand this project with other GCP services: maybe Cloud Functions for some backend processes, Cloud Storage for saving user data, or even a proper database integration. I’m curious, too, about adding monitoring tools to see how people are actually using the calculator.

If, by chance, you’re also trying to level up your cloud skills, I highly recommend picking a small project like this and just diving in. I’ve used a few video courses and a couple of books here and there, but in my experience there’s no substitute for the “aha” moments you get from taking something from your local machine all the way to production.



