Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.
It also natively interfaces with many other parts of the Google Cloud ecosystem, including Cloud SQL for managed databases, Cloud Storage for unified object storage, and Secret Manager for managing secrets.
Django CMS is an enterprise content management system (CMS) built on top of Django. Django is a high-level Python web framework.
In this tutorial, you will use these components to deploy a small Django CMS project.
If you see a "request account button" at the top of the main Codelabs window, click it to obtain a temporary account. Otherwise ask one of the staff for a coupon with username/password.
These temporary accounts have existing projects with billing already set up, so you incur no costs for running this codelab.
Note that all these accounts will be disabled soon after the codelab is over.
Use these credentials to log into the machine, or to open a new Google Cloud Console window at console.cloud.google.com. Accept the new account's Terms of Service and any updates to them.
Here's what you should see once logged in:
When presented with this console landing page, select the only project available. Alternatively, from the console home page, click "Select a Project":
While Google Cloud can be operated remotely from your laptop, in this codelab we will be using Google Cloud Shell, a command line environment running in the Cloud.
If you've never started Cloud Shell before, you'll be presented with an intermediate screen (below the fold) describing what it is. If that's the case, click Continue (and you won't ever see it again). Here's what that one-time screen looks like:
It should only take a few moments to provision and connect to Cloud Shell.
This virtual machine is loaded with all the development tools you'll need. It offers a persistent 5GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. Much, if not all, of your work in this codelab can be done with just a browser or your Chromebook.
Once connected to Cloud Shell, you should see that you are already authenticated and that the project is already set to your project ID.
gcloud auth list
Command output
     Credentialed Accounts
ACTIVE  ACCOUNT
*       <my_account>@<my_domain.com>

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
gcloud config list project
Command output
[core]
project = <PROJECT_ID>
If it is not, you can set it with this command:
gcloud config set project <PROJECT_ID>
Command output
Updated property [core/project].
From Cloud Shell, enable the Cloud APIs for the components that will be used:
gcloud services enable \
  run.googleapis.com \
  sql-component.googleapis.com \
  sqladmin.googleapis.com \
  compute.googleapis.com \
  cloudbuild.googleapis.com \
  secretmanager.googleapis.com
You may encounter a dialog where gcloud requests your credentials. This is normal; authorize the request (this will happen once per Cloud Shell session).
This operation may take a few moments to complete.
Once completed, a success message similar to this one should appear:
Operation "operations/acf.cc11852d-40af-47ad-9d59-477a12847c9e" finished successfully.
You'll use the default Django CMS project template as your sample Django CMS project.
To create this template project, use Cloud Shell to create a new directory named djangocms-cloudrun and navigate to it:

mkdir ~/djangocms-cloudrun
cd ~/djangocms-cloudrun
Then, install the Django CMS installer into a temporary virtual environment:
virtualenv venv
source venv/bin/activate
pip install djangocms-installer
Then, create a new template project in the current folder:
djangocms -s -p . myproject
You'll now have a template Django CMS project in a folder called myproject:
ls -F
manage.py* media/ myproject/ project.db requirements.txt static/ venv/
You can now exit and remove your temporary virtual environment:
deactivate
rm -rf venv/
From this point on, Django CMS will run inside the container.
You can also remove the automatically created temporary SQLite database. It will not be used for this codelab:
rm project.db
You'll now create your backing services: a Cloud SQL database, a Cloud Storage bucket, and a number of Secret Manager values.
Securing the passwords used in deployment is important to the security of any project, and ensures that no one accidentally puts them where they don't belong (for example, directly in settings files, or typed into a terminal where they could be retrieved from the history).
First, set two base environment variables, one for the project ID:
PROJECT_ID=$(gcloud config get-value core/project)
And one for the region:
REGION=us-central1
Now, create a Cloud SQL instance:
gcloud sql instances create myinstance --project $PROJECT_ID \
  --database-version POSTGRES_11 --tier db-f1-micro --region $REGION
This operation may take a few minutes to complete.
Then in that instance, create a database:
gcloud sql databases create mydatabase --instance myinstance
Then in that same instance, create a user:
DJPASS="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
gcloud sql users create djuser --instance myinstance --password $DJPASS
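The shell pipeline above draws random bytes from /dev/urandom, keeps only alphanumeric characters, and takes the first 30. If you prefer, the same kind of password can be generated in Python with the standard library; this is an illustrative alternative (the `random_password` helper is not part of the codelab), not a required step:

```python
import secrets
import string


def random_password(length: int = 30) -> str:
    """Generate an alphanumeric password, like the tr/fold/head pipeline."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


print(random_password())
```

The secrets module uses a cryptographically strong random source, which is the right choice for credentials (unlike the random module).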
Finally, create a Cloud Storage bucket (noting the name must be globally unique):
GS_BUCKET_NAME=${PROJECT_ID}-media
gsutil mb -l ${REGION} gs://${GS_BUCKET_NAME}
Since objects stored in the bucket will have a different origin (a bucket URL rather than a Cloud Run URL), you need to configure the Cross Origin Resource Sharing (CORS) settings.
Create a new file called cors.json, with the following contents:

touch cors.json
cloudshell edit cors.json
[
{
"origin": ["*"],
"responseHeader": ["Content-Type"],
"method": ["GET"],
"maxAgeSeconds": 3600
}
]
Apply this CORS configuration to the newly created storage bucket:
gsutil cors set cors.json gs://$GS_BUCKET_NAME
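To see what this policy means in practice, the sketch below evaluates whether a cross-origin request would match a rule in cors.json. This is only an illustration of the policy's semantics (the `is_allowed` helper is hypothetical, not part of any Google API):

```python
import json

# Mirror of the cors.json policy created above, inlined for the demo.
cors_config = json.loads("""
[
  {
    "origin": ["*"],
    "responseHeader": ["Content-Type"],
    "method": ["GET"],
    "maxAgeSeconds": 3600
  }
]
""")


def is_allowed(origin: str, method: str, config) -> bool:
    """Rough check: does any CORS rule permit this origin and method?"""
    for rule in config:
        origin_ok = "*" in rule["origin"] or origin in rule["origin"]
        method_ok = method in rule["method"]
        if origin_ok and method_ok:
            return True
    return False


print(is_allowed("https://example.com", "GET", cors_config))   # True
print(is_allowed("https://example.com", "POST", cors_config))  # False
```

With this policy, any origin may issue GET requests against bucket objects, but other methods are not whitelisted.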
Having set up the backing services, you'll now store their configuration values in a file protected by Secret Manager.
Secret Manager allows you to store, manage, and access secrets as binary blobs or text strings. It works well for storing configuration information such as database passwords, API keys, or TLS certificates needed by an application at runtime.
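At runtime, a secret is addressed by a fully qualified resource name and fetched with the google-cloud-secret-manager client. As a hedged sketch of the access pattern (the helper functions here are illustrative, not part of the library; fetching requires valid credentials):

```python
def secret_version_name(project: str, secret: str, version: str = "latest") -> str:
    """Build the fully qualified resource name of a secret version."""
    return f"projects/{project}/secrets/{secret}/versions/{version}"


def read_secret(project: str, secret: str) -> str:
    """Fetch and decode the latest version of a secret (needs GCP credentials)."""
    # Imported here so the name-building helper works without the library installed.
    from google.cloud import secretmanager_v1 as sm

    client = sm.SecretManagerServiceClient()
    response = client.access_secret_version(name=secret_version_name(project, secret))
    return response.payload.data.decode("UTF-8")
```

You'll see this same pattern again later, in the settings file and in a data migration.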
First, create a file with the values for the database connection string, media bucket, a secret key for Django (used for cryptographic signing of sessions and tokens), and to enable debugging:
echo DATABASE_URL=\"postgres://djuser:${DJPASS}@//cloudsql/${PROJECT_ID}:${REGION}:myinstance/mydatabase\" > .env
echo GS_BUCKET_NAME=\"${GS_BUCKET_NAME}\" >> .env
echo SECRET_KEY=\"$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 50 | head -n 1)\" >> .env
echo DEBUG=\"True\" >> .env
Then, create a secret called application_settings, using that file as the secret's value:

gcloud secrets create application_settings --replication-policy automatic
gcloud secrets versions add application_settings --data-file .env
Allow Cloud Run to access this secret:
export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDRUN=${PROJECTNUM}-compute@developer.gserviceaccount.com
gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDRUN} --role roles/secretmanager.secretAccessor
Confirm the secret has been created by listing the secrets:
gcloud secrets versions list application_settings
After confirming the secret has been created, remove the local file:
rm .env
Given the backing services you just created, you'll need to make some changes to the template project to suit them. This includes introducing django-environ to use environment variables as your configuration settings, seeded with the values you defined as secrets. To implement this, you'll extend the template settings.
Find the generated settings.py file, and rename it to basesettings.py:
mv myproject/settings.py myproject/basesettings.py
Next, use the Cloud Shell web editor to open the file and replace the entire file's contents with the following:
touch myproject/settings.py
cloudshell edit myproject/settings.py
# Import the original settings from each template
from .basesettings import *
try:
from .local import *
except ImportError:
pass
# Pulling django-environ settings file, stored in Secret Manager
import environ
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
env_file = os.path.join(BASE_DIR, ".env")
SETTINGS_NAME = "application_settings"
if not os.path.isfile(env_file):
    import google.auth
    from google.cloud import secretmanager_v1 as sm

    _, project = google.auth.default()
    if project:
        client = sm.SecretManagerServiceClient()
        name = f"projects/{project}/secrets/{SETTINGS_NAME}/versions/latest"
        payload = client.access_secret_version(name=name).payload.data.decode("UTF-8")
        with open(env_file, "w") as f:
            f.write(payload)

env = environ.Env()
env.read_env(env_file)
# Setting this value from django-environ
SECRET_KEY = env("SECRET_KEY")
# Could be more explicitly set (see "Improvements")
ALLOWED_HOSTS = ["*"]
# Defaults to False. True allows the default landing pages to be visible
DEBUG = env.bool("DEBUG", default=False)
# Setting this value from django-environ
DATABASES = {"default": env.db()}
INSTALLED_APPS += ["storages"] # for django-storages
if "myproject" not in INSTALLED_APPS:
INSTALLED_APPS += ["myproject"] # for custom data migration
# Define static storage via django-storages[google]
GS_BUCKET_NAME = env("GS_BUCKET_NAME")
STATICFILES_DIRS = []
DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
STATICFILES_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
GS_DEFAULT_ACL = "publicRead"
Take a moment to read the comments explaining each configuration setting.
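The DATABASES = {"default": env.db()} line relies on django-environ parsing the DATABASE_URL value from the secret. As a rough illustration of what that parsing involves (a simplified sketch, not the library's actual implementation; `parse_database_url` is a hypothetical helper, and env.db() additionally handles the /cloudsql unix-socket host form used in this codelab):

```python
from urllib.parse import unquote, urlsplit


def parse_database_url(url: str) -> dict:
    """Simplified sketch: turn a postgres:// URL into Django's DATABASES format."""
    parts = urlsplit(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": unquote(parts.username or ""),
        "PASSWORD": unquote(parts.password or ""),
        "HOST": parts.hostname or "",
        "PORT": str(parts.port or ""),
    }


config = parse_database_url("postgres://djuser:s3cret@localhost:5432/mydatabase")
print(config["NAME"])  # mydatabase
```

Encoding the whole connection string in one environment variable keeps all database configuration in a single secret value.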
Locate the requirements.txt file, and append the following packages:
cloudshell edit requirements.txt
gunicorn==20.0.4
psycopg2-binary==2.8.5
google-cloud-secret-manager==2.0.0
google-auth==1.22.1
django-storages[google]==1.9.1
django-environ==0.4.5
Container Registry is a private container image registry that runs on Google Cloud. You'll use it to store your containerized project.
To containerize the template project, first create a new file named Dockerfile in the top level of your project (in the same directory as manage.py), and copy the following content:
touch Dockerfile
cloudshell edit Dockerfile
# Use an official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8-slim
ENV APP_HOME /app
WORKDIR $APP_HOME
# Install dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy local code to the container image.
COPY . .
# Service must listen to $PORT environment variable.
# This default value facilitates local development.
ENV PORT 8080
# Setting this ensures print statements and log messages
# promptly appear in Cloud Logging.
ENV PYTHONUNBUFFERED TRUE
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn --bind 0.0.0.0:$PORT --workers 1 --threads 8 --timeout 0 myproject.wsgi:application
Now, build your container image using Cloud Build by running the following command from the directory containing the Dockerfile:
gcloud builds submit --tag gcr.io/$PROJECT_ID/djangocms-cloudrun
Once pushed to the registry, you'll see a SUCCESS message containing the image name. The image is stored in Container Registry and can be reused if desired.
You can list all the container images associated with your current project using this command:
gcloud container images list
To create the database schema in your Cloud SQL database and populate your Cloud Storage bucket with your media assets, you need to run migrate and collectstatic.

These base Django commands need to be run within the context of your built container, with access to your database.
You will also need to run createsuperuser to create an administrator account to log into the Django admin.
For this step, we're going to use Cloud Build to run Django commands, so Cloud Build will need access to the Django configuration stored in Secret Manager.
As earlier, set the IAM policy to explicitly allow the Cloud Build service account access to the secret settings:
export PROJECTNUM=$(gcloud projects describe ${PROJECT_ID} --format 'value(projectNumber)')
export CLOUDBUILD=${PROJECTNUM}@cloudbuild.gserviceaccount.com
gcloud secrets add-iam-policy-binding application_settings \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor
Additionally, allow Cloud Build to connect to Cloud SQL in order to apply the database migrations:
gcloud projects add-iam-policy-binding ${PROJECT_ID} \
  --member serviceAccount:${CLOUDBUILD} --role roles/cloudsql.client
To create the superuser, you're going to use a data migration. This migration needs to be created in the migrations folder under myproject.
First, create the base folder structure:

mkdir myproject/migrations
touch myproject/migrations/__init__.py
Then, create the new migration, copying the following contents:
touch myproject/migrations/0001_createsuperuser.py
cloudshell edit myproject/migrations/0001_createsuperuser.py
from django.db import migrations

import google.auth
from google.cloud import secretmanager_v1 as sm


def createsuperuser(apps, schema_editor):
    # Retrieve the admin password from Secret Manager
    _, project = google.auth.default()
    client = sm.SecretManagerServiceClient()
    name = f"projects/{project}/secrets/admin_password/versions/latest"
    admin_password = client.access_secret_version(name=name).payload.data.decode("UTF-8")

    # Create a new superuser using the acquired password
    from django.contrib.auth.models import User

    User.objects.create_superuser("admin", password=admin_password)


class Migration(migrations.Migration):
    initial = True

    dependencies = []

    operations = [migrations.RunPython(createsuperuser)]
Now, back in the terminal, create the admin_password secret within Secret Manager, allowing it to be seen only by Cloud Build:

gcloud secrets create admin_password --replication-policy automatic
admin_password="$(cat /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | fold -w 30 | head -n 1)"
echo -n "${admin_password}" | gcloud secrets versions add admin_password --data-file=-
gcloud secrets add-iam-policy-binding admin_password \
  --member serviceAccount:${CLOUDBUILD} --role roles/secretmanager.secretAccessor
Next, create the following Cloud Build configuration file:
touch cloudmigrate.yaml
cloudshell edit cloudmigrate.yaml
steps:
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-t", "gcr.io/${PROJECT_ID}/djangocms-cloudrun", "."]
- name: "gcr.io/cloud-builders/docker"
args: ["push", "gcr.io/${PROJECT_ID}/djangocms-cloudrun"]
- name: "gcr.io/google-appengine/exec-wrapper"
args: ["-i", "gcr.io/$PROJECT_ID/djangocms-cloudrun",
"-s", "${PROJECT_ID}:${_REGION}:myinstance",
"--", "python", "manage.py", "migrate"]
- name: "gcr.io/google-appengine/exec-wrapper"
args: ["-i", "gcr.io/$PROJECT_ID/djangocms-cloudrun",
"-s", "${PROJECT_ID}:${_REGION}:myinstance",
"--", "python", "manage.py", "collectstatic", "--no-input"]
Finally, run all the initial migrations through Cloud Build:
gcloud builds submit --config cloudmigrate.yaml \
  --substitutions _REGION=$REGION
With the backing services created and populated, you can now create the Cloud Run service to access them.
The initial deployment of your containerized application to Cloud Run is created using the following command:
gcloud run deploy djangocms-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/djangocms-cloudrun \
  --add-cloudsql-instances ${PROJECT_ID}:${REGION}:myinstance \
  --allow-unauthenticated
Wait a few moments until the deployment is complete. On success, the command line displays the service URL:
Service [djangocms-cloudrun] revision [djangocms-cloudrun-00001-...] has been deployed and is serving 100 percent of traffic. Service URL: https://djangocms-cloudrun-...-uc.a.run.app
You can also retrieve the service URL with this command:
gcloud run services describe djangocms-cloudrun \
  --platform managed \
  --region $REGION \
  --format "value(status.url)"
You can now visit your deployed container by opening this URL in a web browser:
Because this is a new installation, you will be automatically redirected to the login page. Log in with the username "admin" and the admin password, which you can retrieve using the following command:
gcloud secrets versions access latest --secret admin_password && echo ""
If you want to make any changes to your Django CMS project, you'll need to build your image again:
gcloud builds submit --tag gcr.io/$PROJECT_ID/djangocms-cloudrun
Should your change include static or database alterations, be sure to run your migrations as well:
gcloud builds submit --config cloudmigrate.yaml \
  --substitutions _REGION=$REGION
Finally, re-deploy:
gcloud run deploy djangocms-cloudrun --platform managed --region $REGION \
  --image gcr.io/$PROJECT_ID/djangocms-cloudrun
You have just deployed a complex project to Cloud Run!
To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial: