Work in Progress: This page is under development.

GCP GKE

Deploy Laminar on Google Kubernetes Engine (GKE).

Prerequisites

  • gcloud CLI configured
  • kubectl installed
  • Helm 3.12+

This document describes what you need to prepare in GCP and GKE to deploy Laminar using Helm.


1. Create a GKE Cluster with Workload Identity Enabled

Laminar requires permissions to access GCP resources such as GCS buckets and BigLake. We therefore recommend using Workload Identity Federation so that Laminar workloads can connect to these resources securely.

gcloud container clusters create laminar \
    --zone <ZONE> \
    --num-nodes 3 \
    --machine-type e2-medium \
    --project <PROJECT_ID> \
    --workload-pool=<PROJECT_ID>.svc.id.goog

Get credentials for the cluster:

gcloud container clusters get-credentials laminar \
    --zone <ZONE> \
    --project <PROJECT_ID>

2. Prepare Storage for Laminar Data and Checkpoints

Laminar requires object storage like a GCS bucket to store job data such as artifacts and checkpoints. Artifacts are compiled build outputs from the Laminar compiler, stored in shared storage for cluster-wide use. Checkpoints are periodic snapshots of an application's state saved in durable storage to enable fault tolerance and recovery.

Create a GCS bucket to store the Laminar artifacts and checkpoints:

gcloud storage buckets create gs://<YOUR_BUCKET_NAME> \
    --location=<LOCATION> \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access
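The bucket created here is referenced again in step 8, where the Helm values file expects full gs:// URLs for artifacts and checkpoints. A minimal sketch of deriving those URLs (the bucket name below is a hypothetical example, not a value from this guide):

```shell
# Hypothetical bucket name; substitute your own
BUCKET_NAME=my-laminar-bucket

# These URLs are what the Helm values file uses for artifactUrl / checkpointUrl
ARTIFACT_URL="gs://${BUCKET_NAME}/artifacts"
CHECKPOINT_URL="gs://${BUCKET_NAME}/checkpoints"

echo "${ARTIFACT_URL}"
echo "${CHECKPOINT_URL}"
```

No objects need to be created under these prefixes in advance; Laminar writes to them at runtime.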

3. Create and Configure a GCP Service Account for Laminar

A GCP service account is required for Laminar to access GCP resources. Laminar's Kubernetes service account is mapped to this GCP service account through Workload Identity, allowing Laminar pods to authenticate to GCP; all required permissions are granted to the GCP service account.

a. Create a Service Account

Create a service account dedicated to Laminar:

gcloud iam service-accounts create <YOUR_SERVICE_ACCOUNT_NAME> \
    --project=<PROJECT_ID> \
    --display-name="GKE Workload Identity SA for Laminar"
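The service account's email, which later steps reference as <YOUR_SERVICE_ACCOUNT_EMAIL>, always follows a fixed format derived from the account name and project ID. A sketch with hypothetical values:

```shell
# Hypothetical values; substitute your own
SA_NAME=laminar-sa
PROJECT_ID=my-project

# GCP service account emails always take this form
SA_EMAIL="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
echo "${SA_EMAIL}"
```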

b. Grant Permissions on the GCS Bucket

gcloud storage buckets add-iam-policy-binding gs://<YOUR_BUCKET_NAME> \
    --member="serviceAccount:<YOUR_SERVICE_ACCOUNT_EMAIL>" \
    --role="roles/storage.objectUser"

4. Set Up GKE Namespace and Image Pull Secret

Laminar’s production images are stored in a private Google Artifact Registry. To allow GKE to pull these images, a Kubernetes secret must be created in the target namespace using the service account JSON file shared with you. This secret is then referenced in the deployment manifest as an imagePullSecret.

a. Create a Kubernetes Namespace

Create a Kubernetes namespace where Laminar will run:

kubectl create namespace <NAMESPACE>

b. Create a Docker Registry Secret

Create a Docker registry secret in that namespace so the cluster can pull Laminar images from Google Artifact Registry:

kubectl create secret docker-registry registry-key \
    --docker-server=us-docker.pkg.dev \
    --docker-username=_json_key \
    --docker-password="$(cat <PATH_TO_SERVICE_ACCOUNT_JSON_KEY_FILE>)" \
    --namespace <NAMESPACE>
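Under the hood, kubectl stores these credentials in a .dockerconfigjson document, base64-encoding the "username:password" pair into an auth field. A sketch of that structure, using a stand-in string for the real key file contents:

```shell
# Hypothetical stand-in for the real service account JSON key contents
KEY_JSON='{"type":"service_account"}'

# The registry auth field is base64("_json_key:<key file contents>")
AUTH="$(printf '_json_key:%s' "${KEY_JSON}" | base64 | tr -d '\n')"

# Shape of the resulting .dockerconfigjson entry for Artifact Registry
printf '{"auths":{"us-docker.pkg.dev":{"username":"_json_key","auth":"%s"}}}\n' "${AUTH}"
```

Knowing this shape is useful when debugging pull failures: decoding the auth field should reproduce the original key file.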

5. Configure Workload Identity for Laminar Pods

Next, bind the GCP service account to the Kubernetes service account that will be created as part of the Laminar Helm installation. This binding is required to enable Workload Identity Federation.

gcloud iam service-accounts add-iam-policy-binding <YOUR_SERVICE_ACCOUNT_EMAIL> \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<PROJECT_ID>.svc.id.goog[<NAMESPACE>/laminar]"
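The --member string in the command above has a strict format: the project's workload identity pool, then the namespace and the Kubernetes service account name in brackets (the chart creates the service account named laminar). A sketch assembling it from hypothetical values:

```shell
PROJECT_ID=my-project   # hypothetical project ID
NAMESPACE=laminar-ns    # hypothetical; the namespace created in step 4
KSA_NAME=laminar        # Kubernetes service account created by the Helm chart

MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${NAMESPACE}/${KSA_NAME}]"
echo "${MEMBER}"
```

If the namespace or service account name in this string does not match the running pod exactly, Workload Identity authentication fails silently with permission-denied errors.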

6. Authentication Configuration

Laminar supports only Google Authentication. It can run in dev mode with authentication disabled, but use this only in local or testing environments, not in production.

a. Google OAuth Configuration

You need to configure the following fields:

  • secret — NextAuth secret used to encrypt tokens. Generate with: openssl rand -base64 32
  • googleClientId — OAuth 2.0 Client ID from the Google Cloud Console
  • googleClientSecret — OAuth 2.0 Client Secret from the Google Cloud Console

Setup Steps:

  1. Generate a NextAuth secret:

openssl rand -base64 32

  2. Configure the Google OAuth client by following the official Google Cloud documentation to create OAuth credentials: https://support.google.com/cloud/answer/15549257?hl=en
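A quick sanity check on the generated secret: 32 random bytes base64-encode to exactly 44 characters, which is the length the secret field should contain.

```shell
# Generate the NextAuth secret
SECRET="$(openssl rand -base64 32)"

# 32 bytes -> ceil(32/3) * 4 = 44 base64 characters
if [ "${#SECRET}" -eq 44 ]; then
    echo "secret ok"
fi
```

This prints "secret ok" when the value has the expected length; a shorter value usually means the secret was truncated when copied into the values file.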

7. Ingress Configuration

Laminar supports three ingress controller types to accommodate different Kubernetes environments and cloud providers:

  • nginx — The widely-used NGINX Ingress Controller, suitable for most on-premises and cloud deployments.
  • alb — AWS Application Load Balancer, ideal for deployments running on Amazon EKS.
  • gce — Google Cloud Load Balancer, designed for deployments on Google Kubernetes Engine (GKE).

Choose the ingress type that aligns with your infrastructure and configure the corresponding settings in the ingress section of your values file.
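For example, on a cluster running the NGINX Ingress Controller instead of the GKE default, the ingress section of the values file might look like the following sketch (same keys as the GKE example in step 8; the hostname is a placeholder):

```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - <laminar-url>
```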


8. Customize Helm Values for Laminar

Create a custom values file to override defaults:

# GKE-specific values for laminar-core
 
# Web UI application
console:
  authUrl: "<laminar-url>"
  # Enable development mode (disables auth, enables debug features)
  devMode: false
 
  # Google OAuth authentication
  auth:
    # NextAuth secret (generate with: openssl rand -base64 32)
    secret: "<your-generated-secret>"
    # Google OAuth client ID
    googleClientId: "<your-google-client-id>"
    # Google OAuth client secret
    googleClientSecret: "<your-google-client-secret>"
  image:
    repository: us-docker.pkg.dev/e6data-analytics/e6-platform/laminar-console
    tag: "1.0.20"
  imagePullSecrets:
    - name: registry-key
 
# Ingress for HTTP(S) access
ingress:
  enabled: true
  # GKE native ingress class (use 'gce' for Google Cloud Load Balancer)
  className: gce
  hosts:
    - <laminar-url>
 
# Stream processing engine
engine:
  enabled: true
 
  # GCS URLs - update with your bucket paths
  artifactUrl: "gs://<YOUR_BUCKET_NAME>/artifacts"
  checkpointUrl: "gs://<YOUR_BUCKET_NAME>/checkpoints"
 
  image:
    repository: "us-docker.pkg.dev/e6data-analytics/e6-platform/laminar-engine"
    tag: "1.0.116"
  imagePullSecrets:
    - name: registry-key
 
  postgresql:
    deploy: true
 
  # ServiceAccount configuration
  serviceAccount:
    annotations:
      iam.gke.io/gcp-service-account: "<YOUR_SERVICE_ACCOUNT_EMAIL>"
    name: "laminar"
 
postgresql:
  fullnameOverride: laminar-laminar-core-postgresql
  primary:
    persistence:
      enabled: true
      size: 20Gi
      storageClass: standard-rwo

9. Deploy Laminar via Helm

Run Helm install/upgrade from the Laminar chart directory, targeting your namespace and custom values file:

helm upgrade --install laminar ./ \
    --namespace <NAMESPACE> \
    --wait --timeout 2m --values <YOUR_CUSTOM_VALUES_FILE>

Verify Installation

# Check pods
kubectl get pods -n <NAMESPACE>

# Get ingress info
kubectl get ingress -n <NAMESPACE>

# Confirm the API responds through the ingress
curl -H "Host: <laminar-url>" http://<IP_ADDRESS_OF_INGRESS>/api/v1/pipelines