Helm
Membrane can be deployed using Helm charts for easier management and configuration of your Kubernetes resources.
Registry Access
You'll need credentials from our support team to access Membrane artifacts:
Registry Credentials
- Username format: robot$<your-company-name>
- Access to: harbor.integration.app
Setting Up Registry Access
- Log in to the Helm registry:
helm registry login harbor.integration.app \
--username <registry-username> \
--password <registry-password>
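Optionally, to keep the password out of your shell history, helm registry login can read it from stdin instead:
echo <registry-password> | helm registry login harbor.integration.app \
  --username <registry-username> \
  --password-stdin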
- Pull and unpack the Integration.app Helm chart:
# See Versions section at the bottom of this page for available versions
helm pull oci://harbor.integration.app/helm/integration-app --version <version> --untar
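The --untar flag unpacks the chart into a local directory, which is what the install and template commands below reference. A freshly pulled chart typically contains the standard Helm layout:
ls integration-app
# Chart.yaml  values.yaml  templates/ ...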
Prerequisites
Before installing Membrane using Helm, ensure you have the following components set up:
Prometheus Stack
The kube-prometheus stack provides Prometheus, Grafana dashboards, and necessary Prometheus rules:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
helm install prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring
For advanced configuration options, refer to the kube-prometheus stack documentation.
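Before moving on, you can confirm the monitoring stack is healthy (the release and namespace names match the commands above):
kubectl get pods --namespace monitoring
# All prometheus-stack-* pods should reach Running status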
KEDA
If you plan to use autoscaling features, install KEDA:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
For advanced KEDA configuration, consult the official KEDA documentation.
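As with the Prometheus stack, a quick check that the KEDA operator pods are running:
kubectl get pods --namespace keda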
Installation
- Configure Container Registry Access
Create a Docker registry secret using your container registry credentials:
kubectl create secret docker-registry integration-app-harbor \
--namespace <your-namespace> \
--docker-server=harbor.integration.app \
--docker-username=<registry-username> \
--docker-password=<registry-password>
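To confirm the secret contains the expected registry entry, you can decode it:
kubectl get secret integration-app-harbor \
  --namespace <your-namespace> \
  --output jsonpath='{.data.\.dockerconfigjson}' | base64 --decode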
- Prepare Configuration
Populate values.yaml, or provide an override file, with the values for your setup:
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/<YOUR_ROLE_NAME>
config:
  NODE_ENV: production
  MONGO_URI: MONGO_URI
  REDIS_URI: REDIS_URI
  # URI where `api` service will be available
  API_URI: API_URI
  # URI where `ui` service will be available
  UI_URI: UI_URI
  # URI where `console` service will be available
  CONSOLE_URI: CONSOLE_URI
  # Bucket for storing custom connectors
  CONNECTORS_S3_BUCKET: CONNECTORS_S3_BUCKET
  # Bucket for storing temporary files (like logs)
  TMP_S3_BUCKET: TMP_S3_BUCKET
  # Bucket for storing static files that should be available in the user's browser (like images)
  STATIC_S3_BUCKET: STATIC_S3_BUCKET
  # Base URI where files stored in STATIC_S3_BUCKET will be available
  BASE_STATIC_URI: BASE_STATIC_URI
  # Auth0 settings
  AUTH0_DOMAIN: AUTH0_DOMAIN
  AUTH0_CLIENT_ID: AUTH0_CLIENT_ID
  AUTH0_CLIENT_SECRET: AUTH0_CLIENT_SECRET
  # Secret key used for signing JWT tokens
  SECRET: SECRET
  # Secret key used for encrypting data at rest
  ENCRYPTION_SECRET: ENCRYPTION_SECRET
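SECRET and ENCRYPTION_SECRET should be long random values. One possible way to generate them (any cryptographically strong random string works; hex output here is just an example):
openssl rand -hex 32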
- Validate Chart
Before deploying, make sure the chart renders correctly:
helm template integration-app ./path-to-your-chart --namespace <your-namespace>
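For additional validation, you can lint the chart and client-side dry-run the rendered manifests (helm lint and kubectl --dry-run=client are standard tooling, not specific to this chart):
helm lint ./path-to-your-chart
helm template integration-app ./path-to-your-chart --namespace <your-namespace> | kubectl apply --dry-run=client --namespace <your-namespace> -f -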
- Select Cluster Context
Make sure you have switched to the desired cluster context:
kubectl config use-context <your-desired-context>
- Deploy
Install the chart into the cluster:
helm install integration-app ./path-to-your-chart --namespace <your-namespace> --create-namespace
To update an existing installation:
helm upgrade integration-app ./path-to-your-chart --namespace <your-namespace>
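If you keep your overrides in a separate file, pass it at install or upgrade time; --atomic rolls the release back automatically if the deployment fails (my-values.yaml is a hypothetical override file name):
helm upgrade --install integration-app ./path-to-your-chart \
  --namespace <your-namespace> --create-namespace \
  --values my-values.yaml --atomic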
Autoscaling Configuration
The following components support autoscaling:
- API
- Instant Tasks Worker
- Queued Tasks Worker
- Custom Code Runner
Each component that supports autoscaling accepts these parameters:
| Parameter | Type | Description |
|---|---|---|
| .autoscaling.enabled | Boolean | Enables/disables autoscaling for the component. If autoscaling is disabled, the number of replicas is taken from the .replicas property. If autoscaling is enabled, .replicas is ignored. |
| .autoscaling.minReplicaCount | Number | Minimum number of replicas |
| .autoscaling.maxReplicaCount | Number | Maximum number of replicas |
| .autoscaling.cooldownPeriod | Number | Cooldown period, in seconds, between scaling operations |
| .autoscaling.pollingInterval | Number | How often, in seconds, to check scaling metrics |
These properties are part of KEDA's core functionality. For more detailed information, please refer to the official KEDA documentation.
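For example, enabling autoscaling for the API in values.yaml might look like this (the bounds and intervals are illustrative placeholders, not recommended defaults):
api:
  autoscaling:
    enabled: true
    minReplicaCount: 2
    maxReplicaCount: 10
    cooldownPeriod: 300   # seconds between scale-in operations
    pollingInterval: 30   # seconds between metric checks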
Component-Specific Scaling
Each component has specific scaling parameters that control its autoscaling behavior:
| Parameter | Type | Default | Description |
|---|---|---|---|
| api.autoscaling.scalingTargets.cpuUtilizationPercent | Number | 50 | Defines the target CPU utilization percentage. Adjusting this value will influence how aggressively the API scales in response to CPU load |
| instantTasksWorker.autoscaling.scalingTargets.utilizationRate | Number | 0.75 | Defines the expected percentage of time (0.0-1.0) that workers should be actively processing tasks. Higher values minimize worker idle time but can cause processing delays during high load periods |
| customCodeRunner.autoscaling.scalingTargets.capacityRate | Number | 0.45 | Defines the target ratio of available to total execution slots. A higher value increases the likelihood of custom code execution waiting for a slot, potentially slowing down API requests. A lower value ensures that custom code requests are processed promptly, but it may result in a higher number of idle pods. |
| queuedTasksWorker.autoscaling.scalingTargets.utilizationRate | Number | 0.85 | Defines the expected percentage of time (0.0-1.0) that workers should be actively processing tasks. Higher values minimize worker idle time but can cause processing delays during high load periods |
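Combining the KEDA bounds with a component-specific target, a worker override could look like this (the numbers are illustrative):
queuedTasksWorker:
  autoscaling:
    enabled: true
    minReplicaCount: 1
    maxReplicaCount: 20
    scalingTargets:
      # Target fraction of time workers spend actively processing tasks
      utilizationRate: 0.85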
Versions and Changelog
Latest Version: 0.2.4
| Version | Release Date | Changes |
|---|---|---|
| 0.2.4 | 2025-12-04 | Simplified autoscaling for instant-tasks-worker and queued-tasks-worker to use utilizationRate-based metrics with Prometheus queries. Added Prometheus scraping annotations and service for instant-tasks-worker. |
| 0.2.3 | 2025-10-21 | Optimized graceful shutdown timing and improved health check reliability for better pod lifecycle management |
| 0.2.2 | 2025-10-09 | Added startup probe configuration with 5s period and 60s failure threshold across all services for improved container initialization handling. Updated deployment strategy with maxSurge: 50% and maxUnavailable: 25% for better control during rolling updates. |
| 0.2.1 | 2025-06-01 | Initial release of the Helm chart: support for all core services, KEDA autoscaling configuration, Prometheus metrics integration |
