# Limits, Concurrency & Cold Start
Concurrency defaults per plan, cold start latency, queue handling, and scale-to-zero billing for Tenki Runners.
This page is the single reference for the operational limits of Tenki Runners: concurrency, cold start, queue handling, and how idle capacity is (not) billed.
## Cold start
Tenki Runners cold-start in approximately 15 seconds on average, regardless of runner size or concurrency. There is no pool warm-up step; every job boots into a fresh microVM.
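Since there is no warm pool, the cold start is paid on every job. A minimal sketch of what that means for wall-clock planning (the helper name and the queue-time parameter are illustrative, not part of any Tenki API; the ~15 s figure is the documented average, not a guarantee):

```python
# Rough end-to-end latency estimate for a single job.
COLD_START_SECONDS = 15  # documented average boot time for a fresh microVM

def estimated_wall_clock(run_seconds: float, queue_seconds: float = 0.0) -> float:
    """Queue wait + microVM cold start + job runtime, in seconds."""
    return queue_seconds + COLD_START_SECONDS + run_seconds

print(estimated_wall_clock(run_seconds=300))  # 5-minute job: ~315 s end to end
```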
## Concurrency defaults
Default concurrent-job limits, by plan:
| Plan | Concurrent jobs |
|---|---|
| Starter | Up to 5 |
| Team | Up to 50 |
| Enterprise | Custom (unlimited on request) |
Higher concurrency is available on request at additional cost. Bursting beyond your default limit causes additional jobs to queue (not fail) until capacity frees up.
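The burst behavior above can be sketched as a simple split between running and queued jobs. The function and job names here are illustrative, not part of the Tenki API:

```python
# Sketch: bursting past the concurrency cap queues (never rejects) jobs.
from collections import deque

def dispatch(jobs: list[str], cap: int) -> tuple[list[str], deque[str]]:
    """Split submitted jobs into running (up to `cap`) and queued."""
    running = jobs[:cap]
    queued = deque(jobs[cap:])  # waits for capacity; no job fails
    return running, queued

# Submitting 8 jobs on a Starter plan (cap of 5):
running, queued = dispatch([f"job-{i}" for i in range(8)], cap=5)
print(len(running), len(queued))  # 5 running, 3 queued
```

As capacity frees up, queued jobs are promoted in order; none are dropped.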
### Per-runner-family notes
| Runner family | Default concurrency on Team |
|---|---|
| Linux x64 | 50 |
| macOS M4 Pro | 4 (Team default), additional macOS concurrency available as an add-on |
The macOS concurrency add-on lifts the macOS-specific cap; the overall account concurrency cap still applies.
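The interaction between the two caps can be expressed as a `min`. This is a sketch of the rule as described above; the function name and the add-on sizes are illustrative:

```python
# The macOS add-on raises the macOS-specific cap, but the account-wide
# concurrency cap still applies and always wins.
def effective_macos_concurrency(macos_cap: int, addon_slots: int,
                                account_cap: int) -> int:
    return min(macos_cap + addon_slots, account_cap)

# Team plan: macOS default of 4, account cap of 50.
print(effective_macos_concurrency(4, 8, 50))   # add-on lifts macOS cap to 12
print(effective_macos_concurrency(4, 60, 50))  # account cap still limits to 50
```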
## Queue handling
- Jobs that exceed your concurrency limit queue, not fail.
- Time spent in the queue is not billed; billing starts when a VM is assigned and the job begins running.
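Concretely, the billed window starts at VM assignment, not submission. A minimal sketch (timestamps and the helper name are illustrative):

```python
# Queue wait before VM assignment costs nothing; only runtime is billed.
from datetime import datetime

def billable_seconds(vm_assigned: datetime, finished: datetime) -> float:
    """Seconds billed for one job, counted from VM assignment."""
    return (finished - vm_assigned).total_seconds()

submitted = datetime(2025, 1, 6, 9, 0, 0)   # job enters the queue (unbilled)
assigned  = datetime(2025, 1, 6, 9, 2, 0)   # VM assigned: billing starts here
finished  = datetime(2025, 1, 6, 9, 12, 0)
print(billable_seconds(assigned, finished))  # 600.0 seconds, not 720.0
```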
## Scale-to-zero billing
Tenki bills per consumed minute, with the final partial second of a job rounded up. There are no idle charges:
- You don't pay for weekends or overnight when nothing runs.
- You don't pay for a standby pool; there isn't one.
- VMs are destroyed immediately on job completion, so there's no post-job idle time.
The only recurring charge outside per-minute usage is your plan fee (Starter $0/mo, Team $200–$250/mo) and any active add-ons. See Pricing for the full breakdown.
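The metering rule can be sketched as follows, under one reading of the rounding described above: the per-minute rate is prorated to the second, and the final partial second rounds up. The rate used here is illustrative, not a published Tenki price:

```python
# Sketch of per-minute billing metered to the second (assumed reading).
import math

def job_cost(runtime_seconds: float, rate_per_minute: float) -> float:
    billed_seconds = math.ceil(runtime_seconds)  # final partial second rounds up
    return billed_seconds * rate_per_minute / 60

# A 90.2 s job at a hypothetical $0.60/min bills as 91 seconds.
print(round(job_cost(90.2, rate_per_minute=0.60), 4))
```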
## Cache size
GitHub Actions enforces a 10 GB cache size limit per repository. Tenki does not apply additional storage limits or eviction policies on its side; cache entries remain available until you overwrite or delete them.
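If you want to check how close a cache directory is to GitHub's limit before saving it, a minimal sketch (the helper names and the GiB interpretation of "10 GB" are assumptions, not part of Tenki or the GitHub Actions API):

```python
# Measure a cache directory against GitHub's 10 GB cache limit.
import os

GITHUB_CACHE_LIMIT_BYTES = 10 * 1024**3  # 10 GiB, assumed interpretation

def dir_size_bytes(root: str) -> int:
    """Total size of all regular files under `root`."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)
    return total

def fits_in_cache(root: str) -> bool:
    return dir_size_bytes(root) <= GITHUB_CACHE_LIMIT_BYTES
```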