Python is an excellent tool for application development. It offers a diverse field of use cases and capabilities, from machine learning to big data analysis. This versatility has allowed Python to carve a real niche for itself in the computing world. And now, as DevOps becomes increasingly cloud-based, Python is making its way into cloud computing as well.
However, that’s not to say that running Python can’t come with its own set of challenges. For example, even applications that perform simple tasks need to run 24/7 for users to get the most out of them, and keeping them running continuously can consume a significant amount of compute resources and bandwidth.
Python can run numerous local and web applications, and it’s become one of the most common languages for scripting automation that synchronizes and manipulates data in the cloud. DevOps, operations, and developers use Python as a preferred language, mainly for its many open-source libraries and add-ons. It’s also the second most common language used on GitHub repositories.
Today we’re talking about running Python scripts on Google Cloud and deploying a basic Python application to Kubernetes.
Requirements for Running Python Script on Google Cloud
Before you can work with Python in Google Cloud, you need to set up your environment. After that, you can code for the cloud using your local device, but you must install the Python interpreter and the SDK. The complete list of requirements includes:
Install the latest version of Python.
Use venv to isolate dependencies.
Install your favorite Python editor. For example, PyCharm is very popular.
Install the Google Cloud SDK.
Install any third-party libraries that you prefer.
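Taken together, the setup steps above might look like the following on a Unix-like shell. This is a minimal sketch; the editor and the third-party libraries you pick are up to you:

```shell
# Create an isolated virtual environment in the project directory
python3 -m venv venv

# Activate it (on Windows, run venv\Scripts\activate instead)
. venv/bin/activate

# Confirm the interpreter now resolves inside the environment
python -c "import sys; print(sys.prefix)"
```

With the environment active, pip install pulls third-party libraries into it rather than into the system interpreter, and the Cloud SDK’s gcloud init command links your shell to a Google Cloud project.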
What Runs Python on Google Cloud?
Businesses all over the world can benefit from cloud hosting. Both cloud-native and hybrid structures have technological benefits like data warehouse modernization and levels of security compliance that help fortify the development process. But running code on Google Cloud requires a proper setup and a migration strategy, specifically a Kubernetes migration strategy, if you intend to orchestrate containerization.
Generally speaking, however, any code deployed in Google Cloud is run by a virtual machine (VM). Kubernetes, Docker, and even Anthos make application modernization possible for large applications. For smaller scripts and deployments, a customizable VM instance is adequate for running Python scripts on Google Cloud, letting you choose the processor size, the amount of RAM, and even the operating system for running your applications.
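As an illustration of that kind of customization, a small VM instance can be created from the command line. The instance name, machine type, and image family here are only example values:

```shell
# Create a small Debian VM for running Python scripts (example values)
gcloud compute instances create python-runner \
    --machine-type=e2-small \
    --image-family=debian-12 \
    --image-project=debian-cloud
```

The --machine-type flag controls CPU and RAM, while the image flags choose the operating system.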
Google Container Registry and Code Migration
To begin scheduling Python scripts on Google Cloud, teams must first migrate their code to the VM instance. Many experts recommend doing so through Google Container Registry for storing Docker images and the Dockerfile.
First, you must enable the Google Container Registry. The Container Registry requires billing set up on your project, which can be confirmed on your dashboard. Since you already have the Cloud SDK installed, use the following gcloud command to enable the registry:
gcloud services enable containerregistry.googleapis.com
If you have images stored with third-party services, Google provides step-by-step instructions and a sample script for migrating them to the Registry. You can do this for any Docker image stored on a third-party service, though you may also want to create new Python projects that will be stored in the cloud.
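For a single image, the migration boils down to re-tagging it for the Registry and pushing it. The image name and PROJECT_ID below are placeholders for your own values:

```shell
# Allow Docker to authenticate against Google Container Registry
gcloud auth configure-docker

# Pull the third-party image, re-tag it for gcr.io, and push it
docker pull alpine:3.19
docker tag alpine:3.19 gcr.io/PROJECT_ID/alpine:3.19
docker push gcr.io/PROJECT_ID/alpine:3.19
```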
Creating a Python Container Image
After you create a Python script, you can create an image for it. A Dockerfile is a text file that contains the commands used to build, configure, and run the application image. The following example shows the content of a Dockerfile used to build an image for a small Flask app (the base image tag is one common choice):
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]
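The Dockerfile above starts a Flask app, so the project needs an application for it to run. A minimal sketch, assuming the file is named app.py (Flask’s default discovery target) and that flask is listed in requirements.txt, might look like:

```python
# Minimal Flask application the Dockerfile's CMD would start.
# Assumes this file is named app.py and flask appears in requirements.txt.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial endpoint to confirm the container is serving traffic.
    return "Hello from Google Cloud!"

if __name__ == "__main__":
    app.run(host="0.0.0.0")
```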
After you create the Dockerfile, you can build the image. Use the following command, where the trailing dot tells Docker to use the current directory as the build context:
$ docker build --tag python-docker .
The --tag option tells Docker what to name the image. You can read more about creating and building Docker images in the Docker documentation.
After the image is created, you can move it to the cloud. You must have a project set up in your Google Cloud Platform dashboard and be authenticated before migrating the container. The following command builds the image remotely with Cloud Build and pushes it to the registry (replace PROJECT_ID with your own project ID):
gcloud builds submit --tag gcr.io/PROJECT_ID/python-docker
The above basic commands will migrate a sample Python image, but full instructions can be found in the Google Cloud Platform documentation.
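Before the submit command will succeed, the gcloud CLI has to be signed in and pointed at your project. A typical sequence looks like this, with PROJECT_ID standing in for your own project ID:

```shell
# Authenticate the CLI and select the target project
gcloud auth login
gcloud config set project PROJECT_ID

# Build the image with Cloud Build and push it to the registry
gcloud builds submit --tag gcr.io/PROJECT_ID/python-docker
```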
Initiating the Docker Push to Create a Google Cloud Run Python Script
Once the Dockerfile has been uploaded to the Google Container Registry and the Python image has been created, it’s time to initiate the Docker push command to finish the deployment and prepare the configuration files. Running a Python script on Cloud Run requires creating two configuration files, a deployment file and a service file, before a developer can claim the Kubernetes cluster and deploy to it.
The Google Cloud Run platform has an interface for deploying the script and running it in the cloud. Open the Cloud Run interface, click “Create Service” in the menu, and configure your service. Next, select the container you pushed to the cloud platform and click “Create” when you finish the setup.
Deploying the Application to Kubernetes
The final step to schedule a Python script on Google Cloud is to create the service file and the deployment file. Kubernetes is commonly used to automate Docker images and deploy them to the cloud. Orchestration tools use a configuration language called YAML (the files typically carry a .yml extension) to set up the configurations and instructions that will be used to deploy and run the application.
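As a sketch of what those two pieces can look like when combined into a single example.yml, the following defines a Deployment and a Service for the image pushed earlier. The names, port numbers, and PROJECT_ID are illustrative:

```yaml
# example.yml: a Deployment plus a Service (illustrative values throughout)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-app
  template:
    metadata:
      labels:
        app: python-app
    spec:
      containers:
        - name: python-app
          image: gcr.io/PROJECT_ID/python-docker
          ports:
            - containerPort: 5000   # Flask's default port
---
apiVersion: v1
kind: Service
metadata:
  name: python-app
spec:
  type: LoadBalancer   # exposes the app on an external IP
  selector:
    app: python-app
  ports:
    - port: 80
      targetPort: 5000
```

The Deployment keeps the desired number of container replicas running, while the Service exposes them behind a single address.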
Once the appropriate files have been created, it’s time to use kubectl to initiate the final stage of running Python on Google Cloud. Kubectl is a command-line tool for running commands against Kubernetes clusters, such as creating deployments, inspecting resources, and viewing logs. It’s an integral step to ensure the Python script runs efficiently in Kubernetes, and it’s the last leg of the migration process.
To deploy a YML file to Kubernetes, run the following command:
$ kubectl create -f example.yml
You can verify that your files deployed by running the following command:
$ kubectl get services