# Worker
The OpenLambda worker is the core server-side component of a node. It listens for incoming HTTP requests, manages the container lifecycle, and returns responses to callers.
## Overview
Each worker is a standalone Go binary that exposes a single HTTP endpoint:
```
POST /runLambda/<lambda-name>
```
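For example, assuming a worker listening on port 8080 (as in the status-check example below) and a deployed lambda named `hello` (a hypothetical name used here for illustration), an invocation looks like:

```bash
# POST the request payload to the hypothetical "hello" lambda.
curl -X POST localhost:8080/runLambda/hello -d '{"name": "world"}'
```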
When a request arrives, the worker performs the following steps (sketched in code after this list):

1. Checks whether the lambda’s container image is already present on the node; if not, pulls it from the registry.
2. Starts a Linux container from the image.
3. Passes the request payload to the lambda function running inside the container.
4. Waits for the function to return a result, then forwards that result back to the caller.
5. Optionally keeps the container warm for a short period to reduce cold-start latency on subsequent calls.
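The Go sketch below illustrates this control flow. It is not the worker's actual implementation; the helper functions (`imagePresent`, `pullImage`, `startContainer`, `invoke`, `keepWarm`) are hypothetical stand-ins for the steps above.

```go
package main

import (
	"io"
	"net/http"
	"strings"
)

// Hypothetical helpers standing in for the worker's real subsystems.
func imagePresent(name string) bool                             { return false }
func pullImage(name string) error                               { return nil }
func startContainer(name string) (string, error)                { return "container-id", nil }
func invoke(containerID string, payload []byte) ([]byte, error) { return payload, nil }
func keepWarm(containerID string)                               {}

func runLambda(w http.ResponseWriter, r *http.Request) {
	name := strings.TrimPrefix(r.URL.Path, "/runLambda/")

	// 1. Pull the image if it is not already on this node.
	if !imagePresent(name) {
		if err := pullImage(name); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}

	// 2. Start a container from the image.
	id, err := startContainer(name)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// 3-4. Forward the payload to the function and relay its result.
	payload, _ := io.ReadAll(r.Body)
	result, err := invoke(id, payload)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Write(result)

	// 5. Keep the container warm for future requests to the same lambda.
	keepWarm(id)
}

func main() {
	http.HandleFunc("/runLambda/", runLambda)
	http.ListenAndServe(":8080", nil)
}
```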
## Configuration
The worker is configured via a JSON file (`config.json` by default) in its working directory.
Key fields:
| Field | Description | Default |
|---|---|---|
| `worker_port` | Port the HTTP server listens on | `8080` |
| `registry` | URL of the lambda registry | |
| `sandbox` | Container backend (`sock` or `docker`) | `sock` |
| `log_output` | Where to write logs | |
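A minimal configuration along these lines might look as follows. Treat this as a sketch: the field names and defaults above follow common OpenLambda conventions but can differ between versions, and the registry URL here is purely hypothetical. Compare against the default `config.json` in your working directory before relying on it.

```json
{
  "worker_port": "8080",
  "registry": "http://localhost:5000",
  "sandbox": "sock",
  "log_output": "stdout"
}
```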
## Starting the Worker

```bash
# From the repo root after building:
./bin/worker --config config.json
```
The worker prints its listening address on startup. You can verify it is running with:
```bash
curl -w "\n" localhost:8080/status
```
## Deploying Multiple Workers
Workers are stateless with respect to routing — each one operates independently. To scale horizontally, start one worker process per node and place a standard HTTP load balancer (Nginx, HAProxy, or similar) in front of them. No coordination between workers is required.
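As a sketch, an Nginx configuration that round-robins requests across two workers might look like the following; the addresses and ports are hypothetical.

```nginx
upstream openlambda_workers {
    # One entry per worker node; the addresses are hypothetical.
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location /runLambda/ {
        proxy_pass http://openlambda_workers;
    }
}
```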
> **Note:** A centralized boss component for cluster-wide management is currently under development. Until then, manual deployment behind a load balancer is the recommended approach for multi-node setups.
## Further Reading

- Quickstart guide — get a single worker running locally in minutes.
- *SOCK: Rapid Task Provisioning with Serverless-Optimized Containers* — the research paper describing the container backend.