How to deploy OpenOps on a GCP Compute Engine instance
This guide is for testing and evaluation purposes only and is not intended for production deployments. Please reach out to us at support@openops.com if you’d like to learn how to set up OpenOps in a production environment.
This guide explains how to install the OpenOps Docker Compose release on a newly created GCP Compute Engine VM instance. It assumes you have appropriate permissions on an existing Google Cloud Platform (GCP) project.
In the Google Cloud Console, navigate to Compute Engine → VM instances.
Click Create Instance.
Under Machine configuration, configure the following:
Name: Enter a name for your virtual machine.
Region: Choose a region close to your users (e.g., us-east1).
Zone: Choose a zone close to your users (e.g., us-east1-b), or leave the default to let Google choose one for you.
Series: Choose any recommended series (e.g., E2) or another family you prefer.
Machine type: Choose a machine size similar to e2-standard-2 or larger. Avoid very small machines, as OpenOps may need additional CPU/RAM for running Docker containers smoothly.
Under OS and storage:
Click Change.
Select Ubuntu as the operating system (Ubuntu 24.04 LTS if available, or a close alternative).
Increase Size to at least 50GB to accommodate Docker images and databases.
Click Select.
Under Networking, select Allow HTTP traffic. This will automatically create a firewall rule to open port 80.
Keep defaults for other settings (or adjust according to your preferences), then click Create to launch the VM.
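The console steps above can also be scripted with the gcloud CLI. The following is a sketch; the instance name, zone, machine type, and image family are example values to adjust to your own project:

```shell
# Create a VM equivalent to the console steps above.
# "openops-vm" and us-east1-b are placeholders; the http-server tag
# corresponds to the "Allow HTTP traffic" checkbox.
gcloud compute instances create openops-vm \
  --zone=us-east1-b \
  --machine-type=e2-standard-2 \
  --image-family=ubuntu-2404-lts-amd64 \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=50GB \
  --tags=http-server
```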
Depending on your project settings, a default firewall rule might already allow SSH access. If you checked Allow HTTP traffic while creating the instance, HTTP (port 80) is also open. If you need to adjust or review these rules:
In the VPC network menu, go to Firewall.
Locate or create firewall rules for:
SSH (TCP/22) from your IP or a limited source range.
HTTP (TCP/80) for public access or your desired source range.
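These rules can also be created from the CLI. This is a sketch; the rule names and the SSH source range are placeholders:

```shell
# Allow SSH only from a trusted range (203.0.113.0/24 is a placeholder)
gcloud compute firewall-rules create allow-ssh-trusted \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 \
  --source-ranges=203.0.113.0/24

# Allow HTTP from anywhere, targeting VMs tagged http-server
# (the tag applied by the "Allow HTTP traffic" checkbox)
gcloud compute firewall-rules create allow-http \
  --direction=INGRESS --action=ALLOW --rules=tcp:80 \
  --source-ranges=0.0.0.0/0 --target-tags=http-server
```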
Open the .env file in the OpenOps installation folder. Change the values of the following variables that represent credentials. Do it now, as you won’t be able to change these values after the initial deployment:
OPS_OPENOPS_ADMIN_EMAIL: the email of your OpenOps installation’s root admin account.
OPS_OPENOPS_ADMIN_PASSWORD: the password of your OpenOps installation’s root admin account.
OPS_POSTGRES_USERNAME: the username of the Postgres database that OpenOps uses.
OPS_POSTGRES_PASSWORD: the password of the Postgres database that OpenOps uses.
OPS_ANALYTICS_ADMIN_PASSWORD: the password of the OpenOps Analytics admin account (the username is hardcoded to admin).
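For example, the relevant .env lines might look like this (all values are placeholders; substitute your own email and strong passwords):

```
OPS_OPENOPS_ADMIN_EMAIL=admin@example.com
OPS_OPENOPS_ADMIN_PASSWORD=change-me-strong-password
OPS_POSTGRES_USERNAME=openops
OPS_POSTGRES_PASSWORD=change-me-db-password
OPS_ANALYTICS_ADMIN_PASSWORD=change-me-analytics-password
```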
By default, OpenOps does not require any inbound ports other than the application port 80. The following ports are used by bundled services and, in most cases, should not be exposed:
5432: The PostgreSQL database used by OpenOps. Expose this port only if you need direct database access, such as for connecting OpenOps to external analytics tools. Restrict access to a VPN or a trusted IP range.
6379: OpenOps’ internal Redis service. Expose this port only if required for debugging or monitoring purposes. Restrict access to a VPN or a trusted IP range.
To use external PostgreSQL or Redis databases, modify the relevant variables in the .env file. You can disable the corresponding containers by adding a profile in the docker-compose.yml file:
services:
  postgres:
    profiles: ["db"]
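For instance, to keep both bundled databases from starting when you point OpenOps at external services, the compose file could look like this. This is a sketch; the redis service name is an assumption based on the bundled services listed above, and the db profile name follows the snippet above:

```yaml
services:
  postgres:
    profiles: ["db"]   # starts only when the "db" profile is activated
  redis:
    profiles: ["db"]   # same for the bundled Redis service
```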
If the db profile is not activated (for example, via the COMPOSE_PROFILES variable in .env or the --profile flag), the container won't start. After making any changes to the .env file, restart the OpenOps containers:
sudo docker compose down
sudo docker compose up -d
For production usage, it’s recommended to enable TLS (HTTPS). In addition to the security aspect, this also ensures that workflow templates load properly in all browsers.
The easiest way to enable TLS is to use an OpenOps script that requests and sets up a TLS certificate from Let’s Encrypt. Before running the script, make sure you have a domain name that points to your VM’s external IP address. If you’re configuring DNS right before running the script, you may need to wait for the DNS change to propagate.
Run the following command in your terminal:
curl -fsS https://openops.sh/tls | sh
When prompted, enter a domain name that points to the external IP address of your VM.
When prompted, enter an email address to receive certificate-related notifications from Let’s Encrypt.
The script will use the Certbot library to request a certificate for your domain from Let’s Encrypt. It receives and saves the certificate, updates the OpenOps configuration file accordingly, and restarts OpenOps. By default, the certificate expires in 3 months. See https://certbot.org/renewal-setup if you want to configure auto-renewal.
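For example, auto-renewal can be as simple as a root cron entry. This is a sketch; certbot renew is Certbot's standard idempotent renewal command, but follow the linked renewal guide for the setup that matches your installation:

```
0 3 * * * certbot renew --quiet
```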
Alternatively, you can create a TLS certificate yourself. This lets you use DNS validation from Let’s Encrypt (rather than the HTTP validation the automatic script performs) or request a certificate from a different provider. To set up TLS manually:
Obtain certificate and private key files from your certificate provider.
Upload the certificate files to your OpenOps installation under the tls directory.
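For evaluation only, you can also generate a self-signed certificate with OpenSSL before uploading. This is a sketch; the filenames and domain are placeholders, browsers will warn about self-signed certificates, and a provider-issued certificate should be used in practice:

```shell
# Create a self-signed certificate and key for testing only.
# Filenames and the CN are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout openops.key -out openops.crt -days 90 \
  -subj "/CN=openops.example.com"

# Inspect the subject to confirm the domain before uploading
openssl x509 -in openops.crt -noout -subject
```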
By default, OpenOps does not allow workflows to call internal network addresses such as 127.0.0.1 or 192.168.0.0. This affects HTTP and SMTP actions, as well as webhook triggers. Host validation protects users from creating workflows that could accidentally or maliciously access internal services, scan networks, or escalate privileges.
You may need to disable this check in certain circumstances, such as in non-production deployments or when workflows intentionally interact with internal-only infrastructure.
To disable host validation, open the .env file in your installation folder and set the OPS_ENABLE_HOST_VALIDATION environment variable to false. After making any changes to the .env file, restart the OpenOps containers:
sudo docker compose down
sudo docker compose up -d
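As a one-liner, the flag can also be flipped from the shell. This is a sketch demonstrated on a sample file; in practice, run the sed line against the real .env in your installation folder:

```shell
# Demo on a sample file; the variable name is taken from the text above.
printf 'OPS_ENABLE_HOST_VALIDATION=true\n' > .env.sample

# Set OPS_ENABLE_HOST_VALIDATION to false (GNU sed in-place edit)
sed -i 's/^OPS_ENABLE_HOST_VALIDATION=.*/OPS_ENABLE_HOST_VALIDATION=false/' .env.sample

cat .env.sample
```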
Disabling this check removes an important safety guard and may allow workflows to access internal infrastructure. Use caution and avoid disabling it in production.