Core Concepts
Tidedge Serverless introduces several key concepts that make it possible to run full VMs as serverless functions. Understanding these concepts will help you build and deploy applications effectively.
Stacks & Variants
A Stack is a complete application definition with all its services. Think of stacks like environments: you might have a prod stack and a dev stack. A Variant inherits from a stack and overrides specific services, which makes it perfect for PR previews.
Hierarchy
Stack: Complete Application
A stack defines all services for an environment. Each stack is independent and runs its own instances. A stack definition includes:
- Service definitions (image, tag, dependencies)
- The entrypoint service for URL routing
Typical stacks:
- prod: always-on, multiple instances
- dev: scale-to-zero, boots on request
Variant: Inherited Override
A variant inherits from a stack but overrides specific services. Non-overridden services are SHARED from the parent stack.
- Inherits from parent stack
- Only overridden services cost resources
- Shared services = zero extra cost
- Perfect for PR preview environments
SHARED vs UNIQUE Services
When you create a variant, only the services you override are UNIQUE and run their own instances. Everything else is SHARED from the parent stack, with no extra resources needed.
For example, a PR variant pr-42 of the dev stack that overrides only the frontend costs resources for that ONE service; the database, redis, and backend are shared from the dev stack's running instances.
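To make the inheritance concrete, here is a minimal sketch of resolving a variant's effective services. The `Stack`/`Variant` shapes and the `resolve_services` helper are illustrative assumptions, not Tidedge's actual data model:

```python
# Illustrative model: a stack maps service names to definitions,
# and a variant overrides a subset of them.
from dataclasses import dataclass, field

@dataclass
class Stack:
    name: str
    services: dict[str, str]  # service name -> image:tag

@dataclass
class Variant:
    name: str
    parent: Stack
    overrides: dict[str, str] = field(default_factory=dict)

def resolve_services(variant: Variant) -> dict[str, tuple[str, str]]:
    """Return each service with its source: UNIQUE (overridden) or SHARED."""
    resolved = {}
    for name, image in variant.parent.services.items():
        if name in variant.overrides:
            resolved[name] = (variant.overrides[name], "UNIQUE")  # runs its own instance
        else:
            resolved[name] = (image, "SHARED")                    # reuses the parent's instance
    return resolved

dev = Stack("dev", {
    "frontend": "frontend:main", "backend": "backend:main",
    "database": "postgres:16", "redis": "redis:7",
})
pr42 = Variant("pr-42", dev, overrides={"frontend": "frontend:pr-42"})

for svc, (image, source) in resolve_services(pr42).items():
    print(f"{svc:<10} {image:<16} {source}")
# Only frontend is UNIQUE; backend, database, and redis stay SHARED.
```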
URL Routing
Services are accessed via generated URLs following this pattern:
| Pattern | Example | Description |
|---|---|---|
| {service}.{stack}.domain | frontend.prod.localhost.openiap.io | Service in stack |
| {service}.{stack}--{variant}.domain | frontend.dev--pr-42.localhost.openiap.io | Service in variant |
| {stack}.domain | prod.localhost.openiap.io | Stack's entrypoint service |
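A small sketch of the routing patterns above; the `service_host` function itself is illustrative, the host shapes follow the table:

```python
def service_host(domain: str, stack: str, service: str | None = None,
                 variant: str | None = None) -> str:
    """Build a hostname following the patterns in the table above (illustrative)."""
    target = f"{stack}--{variant}" if variant else stack
    if service is None:
        return f"{target}.{domain}"           # stack's entrypoint service
    return f"{service}.{target}.{domain}"     # specific service

print(service_host("localhost.openiap.io", "prod", "frontend"))
# frontend.prod.localhost.openiap.io
print(service_host("localhost.openiap.io", "dev", "frontend", variant="pr-42"))
# frontend.dev--pr-42.localhost.openiap.io
print(service_host("localhost.openiap.io", "prod"))
# prod.localhost.openiap.io
```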
Environment Variables
Variables cascade down the hierarchy and can be overridden at each level. When the same variable is defined at multiple levels, the more specific level wins, with priority running lowest to highest down the hierarchy.
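A minimal sketch of cascading overrides, assuming a stack-then-variant precedence (the exact set of levels is illustrative; later, higher-priority layers win):

```python
def effective_env(*layers: dict[str, str]) -> dict[str, str]:
    """Merge variable layers; later (higher-priority) layers override earlier ones."""
    merged: dict[str, str] = {}
    for layer in layers:
        merged.update(layer)
    return merged

stack_env   = {"LOG_LEVEL": "info", "DB_HOST": "db.internal"}
variant_env = {"LOG_LEVEL": "debug"}              # variant overrides one variable
print(effective_env(stack_env, variant_env))
# {'LOG_LEVEL': 'debug', 'DB_HOST': 'db.internal'}
```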
Layers
Tidedge uses a layered image architecture similar to Docker, but optimized for VM boot speed. Each application is built from multiple layers that stack on top of each other.
Layer Types
- Distribution: Base OS (any Linux distro)
- Framework: Runtime (any language/framework)
- Application: Your code and dependencies
Benefits
- Layers are cached and reused across VMs
- Only changed layers need to be rebuilt
- Warm pools can share base layers
- Efficient storage via deduplication
At runtime, layers are mounted as an overlay filesystem. The base layers are read-only, and a writable scratch layer captures any changes. This means VMs are immutable and reproducible: restart a VM and you get the exact same state.
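On Linux this maps naturally onto overlayfs. The sketch below shows the general shape of such a mount; the paths are hypothetical, Tidedge's actual mount options may differ, and running it requires root:

```python
import subprocess

# Read-only image layers (overlayfs treats the leftmost lowerdir as topmost).
lower = ":".join(["/layers/app", "/layers/framework", "/layers/distro"])

# Writable scratch layer: discarded on restart, so the VM always boots
# from the same immutable base layers.
subprocess.run([
    "mount", "-t", "overlay", "overlay",
    "-o", f"lowerdir={lower},upperdir=/scratch/upper,workdir=/scratch/work",
    "/merged",
], check=True)
```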
Content Addressable Storage (CAS)
All image layers are stored in a Content Addressable Storage system. This means content is addressed by its SHA256 hash, enabling global deduplication: the same content is never stored twice.
How CAS Works
- Content is hashed with SHA256 to create a unique digest
- Blobs are stored by their content hash, not by name
- If the same content exists anywhere, the existing blob is reused
- Orphaned blobs (no longer referenced) can be cleaned up automatically
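A toy sketch of content addressing with SHA256; the in-memory store layout here is illustrative, not Tidedge's on-disk format:

```python
import hashlib

class BlobStore:
    """Toy content-addressable store: blobs keyed by their SHA256 digest."""
    def __init__(self):
        self.blobs: dict[str, bytes] = {}

    def put(self, content: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(content).hexdigest()
        if digest not in self.blobs:        # identical content is stored once
            self.blobs[digest] = content
        return digest

    def get(self, digest: str) -> bytes:
        content = self.blobs[digest]
        # Integrity: re-hash on read and verify against the address.
        assert "sha256:" + hashlib.sha256(content).hexdigest() == digest
        return content

store = BlobStore()
a = store.put(b"layer contents")
b = store.put(b"layer contents")   # deduplicated: same digest, no new blob
print(a == b, len(store.blobs))    # True 1
```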
Benefits
- Deduplication: Same layer shared across all images that use it
- Fast pulls: Only download layers you don't already have
- Efficient storage: No wasted space on duplicate content
- Integrity: Content verified by hash on every read
Volumes
Volumes provide persistent storage for your VMs. Unlike the overlay filesystem which resets on restart, volume data persists across VM restarts and can be shared between instances.
| Pattern | Access | Use Case |
|---|---|---|
| Exclusive | RWO (Read-Write-Once) | Single database instance |
| Per-Instance | RWO | StatefulSet (Kafka, ZooKeeper) |
| Shared | RWX (Read-Write-Many) | Cache, shared content |
| Read-Only | ROX (Read-Only-Many) | ML models, static assets |
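A sketch of how these access modes might be modeled when requesting a volume; the enum and request shape are illustrative assumptions, not Tidedge's API:

```python
from dataclasses import dataclass
from enum import Enum

class AccessMode(Enum):
    RWO = "ReadWriteOnce"   # one instance mounts read-write
    RWX = "ReadWriteMany"   # many instances mount read-write
    ROX = "ReadOnlyMany"    # many instances mount read-only

@dataclass
class VolumeRequest:
    name: str
    size_gb: int
    mode: AccessMode

# Exclusive volume for a single database instance.
db_vol = VolumeRequest("postgres-data", size_gb=50, mode=AccessMode.RWO)
# Read-only volume shared by every instance serving ML models.
models = VolumeRequest("ml-models", size_gb=200, mode=AccessMode.ROX)
```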
Volume Providers
Volumes can be snapshotted for backups and resized as needed. See the CLI Reference for volume commands.
Warm Pools
Warm pools are pre-started VMs waiting to handle requests. When a request arrives for a compatible service, the scheduler can instantly assign a warm pool VM instead of cold-booting a new one.
How Pool Matching Works
The scheduler checks if the service's layer stack is a superset of the pool's layers. If compatible, the warm VM is converted to a service instance.
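A minimal sketch of that compatibility check, assuming layers are compared by digest and "superset" is plain set containment (the real scheduler may also care about layer ordering; the digests shown are placeholders):

```python
def pool_compatible(service_layers: set[str], pool_layers: set[str]) -> bool:
    """A warm VM can serve the service if the service's layer stack
    includes every layer the pool was booted with."""
    return service_layers >= pool_layers   # superset check by layer digest

pool    = {"sha256:distro", "sha256:python"}
service = {"sha256:distro", "sha256:python", "sha256:app"}
print(pool_compatible(service, pool))   # True: warm VM just adds the app layer
```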
Pools are configured with min/max warm counts. The system automatically maintains the pool by starting new VMs when capacity drops. See the CLI Reference for pool commands.
Hypervisors
Tidedge supports multiple hypervisors and automatically selects the best one based on your workload requirements. You request features, and the system picks the right hypervisor.
| Hypervisor | Priority | GPU | Best For |
|---|---|---|---|
| Firecracker | 10 (preferred) | No | Fast serverless, microVMs |
| cloud-hypervisor | 20 | Yes | GPU workloads, modern VMM |
| QEMU | 30 (fallback) | Yes | Full compatibility, emulation |
Selection Rules
- No special requirements → Firecracker (fastest)
- GPU requested → cloud-hypervisor or QEMU
- Architecture emulation → QEMU only
- You can override the selection with the --hypervisor flag
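These rules are simple enough to express directly. A sketch, assuming requirements boil down to GPU and emulation flags; the tuples mirror the table above:

```python
def select_hypervisor(gpu: bool = False, emulation: bool = False) -> str:
    """Pick the highest-priority (lowest-numbered) hypervisor that
    satisfies the requested features, per the table above."""
    candidates = [
        # (priority, name, supports_gpu, supports_emulation)
        (10, "firecracker",      False, False),
        (20, "cloud-hypervisor", True,  False),
        (30, "qemu",             True,  True),
    ]
    for _, name, has_gpu, has_emu in sorted(candidates):
        if gpu and not has_gpu:
            continue
        if emulation and not has_emu:
            continue
        return name
    raise ValueError("no hypervisor satisfies the requested features")

print(select_hypervisor())                 # firecracker
print(select_hypervisor(gpu=True))         # cloud-hypervisor
print(select_hypervisor(emulation=True))   # qemu
```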
Aliases (Custom Domains)
Aliases map custom domains to specific stack/variant/service combinations. Instead of using generated URLs, you can use your own domain names.
How Aliases Work
- Point your DNS to Tidedge load balancers
- Create an alias mapping domain to stack/variant/service
- Requests are routed to the correct service automatically
Example Mapping
| Custom domain | Routes to |
|---|---|
| www.myapp.com | frontend.myapp--prod |
| api.myapp.com | api.myapp--prod |
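A sketch of the lookup an edge router might perform; the alias table and `route` function are illustrative:

```python
# Alias table: custom domain -> (stack, variant, service). Values are illustrative.
ALIASES = {
    "www.myapp.com": ("myapp", "prod", "frontend"),
    "api.myapp.com": ("myapp", "prod", "api"),
}

def route(host: str) -> tuple[str, str, str]:
    """Resolve a custom domain to its stack/variant/service target."""
    try:
        return ALIASES[host]
    except KeyError:
        raise LookupError(f"no alias configured for {host}")

print(route("www.myapp.com"))   # ('myapp', 'prod', 'frontend')
```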
See the CLI Reference for alias commands.