This article provides a step-by-step guide to deploying the Geolocation API to your own Google Cloud project. It focuses on using automation tools and scripts to deploy a FastAPI Python web application to Google Cloud Run, a serverless container platform.
For more information on the design and tech stack used, please refer to Building Geolocation API. Here, we will focus purely on deployment.
As an overview, the deployment is driven by three distinct components:

- `Dockerfile` that automates the process of downloading the GeoLite2 databases and building a lightweight Docker image (a rough sketch follows after this list).
- `cloudbuild.yaml` file that defines the necessary steps for the build pipeline, which includes building the Docker image and deploying a new revision to Cloud Run.
- `deploy.sh` shell script that sets up the Cloud Shell VM environment for deployment.

For the purpose of deployment, we will only execute the `deploy.sh` script, which takes care of setting up our environment and deploying our CDKTF application. However, before we run the script, we need to do a few things manually first. Please continue reading to find out more.
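To make the first component concrete, here is a minimal sketch of what a Dockerfile along these lines could look like. This is illustrative only, not the repository's actual Dockerfile: the MaxMind download permalink, image tags, and file paths are assumptions.

```dockerfile
# Illustrative sketch only -- see the repository for the real Dockerfile.
# Stage 1: fetch the GeoLite2 database using the MaxMind download permalink.
FROM alpine:3 AS geodata
ARG MAXMIND_LICENSE_KEY
RUN apk add --no-cache curl tar \
 && curl -fsSL "https://download.maxmind.com/app/geoip_download?edition_id=GeoLite2-City&license_key=${MAXMIND_LICENSE_KEY}&suffix=tar.gz" \
      -o /tmp/city.tar.gz \
 && mkdir -p /geoip \
 && tar -xzf /tmp/city.tar.gz --strip-components=1 -C /geoip

# Stage 2: lightweight runtime image containing only the app and the database.
FROM python:3.12-slim
WORKDIR /app
COPY --from=geodata /geoip/GeoLite2-City.mmdb ./data/
COPY . .
RUN pip install --no-cache-dir fastapi uvicorn geoip2
# Cloud Run sends traffic to the port in the PORT env var (8080 by default).
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```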
Explore a live demo of the Geolocation API hosted on my Google Cloud project, running as a Cloud Run revision. Enter a valid public IP address, IPv4 or IPv6, to retrieve its details. If no input is provided, the API will return details of your own IP address.
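For example, once deployed you could query the service with curl. The URL below is a placeholder for your own Cloud Run URL, and the `ip` query parameter name is an assumption based on the behaviour described above.

```bash
# Placeholder URL; substitute your own Cloud Run service URL.
# Look up a specific public IP (the "ip" parameter name is assumed):
curl "https://geolocation-xxxxx-uc.a.run.app/?ip=8.8.8.8"

# With no input, the API returns details of the caller's own IP:
curl "https://geolocation-xxxxx-uc.a.run.app/"
```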
If you'd rather watch a video instead of reading a long article, here is a YouTube video that accompanies this content.
Before running the deployment, we need to prepare the following:

- A MaxMind account and license key for the `Dockerfile` to download the latest GeoLite2 databases.
- A `.env` file to customise and configure our environment.
- The `deploy.sh` shell script to prepare the Google Cloud Shell VM and deploy our FastAPI application.

Let's begin by going through each item on this list, one at a time.
To set up automatic database updates within our container, we need to create a MaxMind account and obtain a license key. Sign up for a free GeoLite2 account on the MaxMind website, then generate a new license key from your account's license key page.
Keep your license key window open, as we will need it shortly.
Before we begin, please make sure that you are logged in to your Google Cloud account and have a valid billing account. You may be charged for using Google Cloud services, but the free tier is more than enough to test this deployment without incurring any cost.
The free tier includes a set of Google Cloud services that you can use for free up to certain usage limits; if you exceed those limits, you will be charged for the additional usage. It is a great way to try out Google Cloud services.
If you do not want to continue using the service, our CDKTF implementation makes it very easy to delete the deployed resources with just one command.
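For instance, tearing everything down is a single CDKTF command run from the project directory. This is a sketch, assuming your cdktf CLI version supports the `*` wildcard for selecting all stacks:

```bash
# Destroy all deployed stacks; --auto-approve skips the confirmation prompt.
cdktf destroy '*' --auto-approve
```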
Once you are logged in, you will be taken to the Google Cloud Platform console. From here, you can start using Google Cloud services.
To create a new Google Cloud project for deploying our geolocation service, follow these steps:
You can also select the project by clicking the Select Project dropdown in the top-left corner of the GCP console and then choosing your project.
To learn more about creating a project on Google Cloud Platform, follow this guide.
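If you prefer the command line over the console, the same setup can be done with the gcloud CLI. The project ID and billing account ID below are placeholders; replace them with your own.

```bash
# Create a project (the ID must be globally unique) and make it the default.
gcloud projects create geolocation-demo-123456 --name="geolocation"
gcloud config set project geolocation-demo-123456

# Link your billing account to the new project.
gcloud billing projects link geolocation-demo-123456 \
  --billing-account=000000-AAAAAA-BBBBBB
```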
We need to fork our Git repository because Cloud Build only allows connecting to repositories from our own GitHub account, even if the repository is public. Forking a repository creates a copy of it in our own account, which we can then clone to our Cloud Shell environment. Once the repository is cloned, we can connect it to our Cloud Build trigger.
Here is a more detailed explanation of the steps involved:
```bash
git clone <URL>
```

Replace `<URL>` with the URL that we copied in step 5. The repository will be cloned to our Cloud Shell environment.
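As an example, assuming your fork keeps the default repository name (the `cd ~/geolocation` step later in this guide implies it is `geolocation`):

```bash
# <your-username> is a placeholder for your GitHub username.
git clone https://github.com/<your-username>/geolocation.git
cd ~/geolocation
```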
In this step, we authenticate and connect our GitHub repository to the Cloud Build trigger.
To connect our GitHub repository to Cloud Build, follow these steps:
Do not create a trigger yet; we will do that using CDKTF.
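For context, a build-and-deploy pipeline like the one `cloudbuild.yaml` describes typically looks something like the sketch below. The step details, image names, and substitutions are assumptions, not the repository's actual file.

```yaml
# Illustrative sketch of a build-and-deploy pipeline; the repository's
# actual cloudbuild.yaml may differ in steps and substitutions.
steps:
  # Build the container image.
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/geolocation:$SHORT_SHA', '.']
  # Push it to the registry.
  - name: gcr.io/cloud-builders/docker
    args: ['push', 'gcr.io/$PROJECT_ID/geolocation:$SHORT_SHA']
  # Deploy a new Cloud Run revision from the freshly built image.
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    entrypoint: gcloud
    args: ['run', 'deploy', 'geolocation',
           '--image', 'gcr.io/$PROJECT_ID/geolocation:$SHORT_SHA',
           '--region', 'us-central1']
images:
  - gcr.io/$PROJECT_ID/geolocation:$SHORT_SHA
```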
Our CDKTF stacks depend on values from the `.env` file for setting up our infrastructure.
To set up the `.env` file, follow these steps:
1. Make a copy of the `example_env.txt` file.
2. Save it as `.env` in the root directory of your cloned repository.
3. Set a value for the `RANDOM_ID` variable.
4. Set your preferred Google Cloud region in `REGION_PREFERRED`.
5. Set your forked repository in `GIT_SOURCE_REPOSITORY`.
6. Update the `FASTAPI_CORS_ORIGINS` variable accordingly.
7. Set your MaxMind account ID and license key as the `GEOIPUPDATE_ACCOUNT_ID` and `GEOIPUPDATE_LICENSE_KEY` environment variables respectively.

Be sure to remove any unwanted spaces after the equals sign or after the variable value.
Our `.env` file is now configured, and we are ready to deploy our Cloud Run service.
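For illustration, a filled-in `.env` might look like the sketch below. The variable names come from the steps above; every value is a placeholder to replace with your own, and the exact set of variables expected by the stacks is defined in `example_env.txt`.

```bash
# Illustrative values only; note there are no spaces around "=".
RANDOM_ID=a1b2c3
REGION_PREFERRED=us-central1
GIT_SOURCE_REPOSITORY=https://github.com/<your-username>/geolocation
FASTAPI_CORS_ORIGINS=https://example.com
GEOIPUPDATE_ACCOUNT_ID=123456
GEOIPUPDATE_LICENSE_KEY=xxxxxxxxxxxxxxxx
```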
In this final step, we execute our `deploy.sh` script. To learn what the script does in detail, please refer to Building Geolocation API. From a deployment standpoint, in a fresh and clean Cloud Shell environment, the script first prepares the Cloud Shell VM and then deploys our CDKTF stacks one by one.
When I initially started working on the project, I began documenting the steps for a detailed guide on setting up the Cloud Shell VM to deploy our microservice. However, I realized that most of this work could be automated, and that a simple shell script would be more user-friendly for everyone involved.
Consequently, I created a shell script that handled the setup and deployment of our foundational CDKTF stacks. At that point, we still had to trigger the build manually once before we could deploy the third and final CDKTF stack for our Cloud Run service.
That version of the script used `pyenv` to set the Python version globally, and a couple of manual steps had to be performed in an exact order at particular points in time, which made it difficult to understand and troubleshoot. So I spent some more time refactoring, and the result is a fully automated, end-to-end deployment of our Geolocation API microservice with just one command.
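At a high level, the refactored script behaves roughly like the outline below. This is a simplified sketch, not the actual `deploy.sh`: the tool installation commands and stack names are placeholders.

```bash
#!/usr/bin/env bash
# Simplified outline only; the real deploy.sh differs in detail.
set -euo pipefail

# 1. Prepare the Cloud Shell VM (exact tooling is a placeholder).
npm install -g cdktf-cli          # CDKTF command-line tool
pip install --user pipenv         # Python dependency management

# 2. Export the configuration from .env into the environment.
set -a; source .env; set +a

# 3. Deploy the CDKTF stacks one by one (stack names are hypothetical).
pipenv install
pipenv run cdktf deploy core-infra --auto-approve
pipenv run cdktf deploy build-trigger --auto-approve
pipenv run cdktf deploy cloud-run-service --auto-approve
```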
To execute the script, make sure you are in the project root directory and run `./deploy.sh`:
```bash
# make sure we are in the right directory
cd ~/geolocation
./deploy.sh
```
If prompted, select `Authorize` to continue.
Now, sit back and watch the Cloud Shell VM get set up and our service get built, tested, and deployed to Cloud Run.
The script does a lot of work under the hood; to understand more, please watch my video on Building Geolocation API, where I walk through the code and explain how the various tools work together and how this shell script ties everything together. The whole run takes approximately 12 minutes to set up the VM, build the container image, and deploy a Cloud Run revision.
At this stage, we should have a running Cloud Run revision for our geolocation service. You can check the status of the deployed Cloud Run service here. Click on the service link to open the Cloud Run page and access the URL the service is hosted on.
Also, check the configured weekly schedule that triggers our Cloud Build here. Every week, Cloud Build will rebuild the image with the updated MaxMind GeoLite2 databases and deploy a new Cloud Run revision.
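You can also verify both from Cloud Shell with a few gcloud commands. This is a sketch, assuming the region from your `.env` and that the weekly schedule is implemented with Cloud Scheduler:

```bash
# List the deployed Cloud Run services in your preferred region.
gcloud run services list --region "$REGION_PREFERRED"

# List the Cloud Scheduler jobs (the weekly build trigger schedule).
gcloud scheduler jobs list --location "$REGION_PREFERRED"

# Show the most recent builds.
gcloud builds list --limit 5
```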
It took quite a bit of work to get here, but from now on, automation takes over. We can now use this API in any number of applications that need geolocation information, without worrying about keeping the databases up to date or about the scalability of the service. Cloud Run will, by design, spin up as many containers as demand requires, and when there is no demand, it will terminate all containers and scale the service down to zero. We are billed only for the time our containers are serving requests.
Thank you for reading. I hope you find this useful. I know there is a lot that can be improved. Your feedback and suggestions are very important to me. Please take a moment to leave a comment.