This post explains how the hardware footprint numbers shown on the landing page were measured (RAM usage, download size, and on-disk footprint), what the metrics mean, and how to interpret them. It also includes the full benchmark results, including endpoint performance, for readers who want the detailed numbers.
Benchmark setup
- One dedicated EC2 instance per run. No other workloads on the machine.
- Reference hardware: AWS EC2 c6g.xlarge (Graviton2) as the baseline CPU for reported numbers.
- Two-phase run:
  - Warm-up: 30 seconds (not counted)
  - Measured: 600 seconds (this is what is reported)
The warm-up exists to avoid "first request" artifacts (cold caches, initial file reads, first route graph touches) and to make region-to-region comparisons fairer.
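As a sketch, the two-phase structure looks like this (a minimal illustration, assuming a synchronous `do_request` callable; the real harness and its parameters are not shown in this post):

```python
import time

def run_benchmark(do_request, warmup_s=30.0, measured_s=600.0):
    """Run do_request() in a loop; keep only samples that start after warm-up."""
    latencies_ms = []
    start = time.monotonic()
    while time.monotonic() - start < warmup_s + measured_s:
        t0 = time.monotonic()
        do_request()
        elapsed_ms = (time.monotonic() - t0) * 1000.0
        if t0 - start >= warmup_s:  # discard "first request" warm-up artifacts
            latencies_ms.append(elapsed_ms)
    return latencies_ms
```

Only the samples collected during the measured window feed into the published numbers; everything before that is thrown away.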
What the benchmark simulates
The benchmark simulates a single interactive user doing common map actions:
- loading vector tiles while panning/zooming
- occasional forward search (geocoding)
- occasional reverse geocoding (tap/click)
- occasional routing (point-to-point)
It is intentionally not a bulk throughput test. The intent is to answer: "Will it feel responsive for an interactive UI, and what is the resource footprint?"
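The action mix above can be sketched as a weighted random choice. The weights here are assumptions picked to roughly match the request totals in the results table further down; they are not the exact ratios used by the benchmark harness:

```python
import random

# Illustrative action mix for one interactive map user (weights are
# assumptions, not the benchmark's exact configuration).
ACTIONS = [
    ("tile", 0.95),     # tile loads while panning/zooming dominate
    ("geocode", 0.022), # occasional forward search
    ("reverse", 0.012), # occasional reverse geocode (tap/click)
    ("route", 0.016),   # occasional point-to-point routing
]

def next_action(rng: random.Random) -> str:
    """Pick the next simulated user action according to the weights above."""
    names, weights = zip(*ACTIONS)
    return rng.choices(names, weights=weights, k=1)[0]
```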
Note on p95: throughout this post and in the results below, p95 (95th percentile) means that 95% of requests completed at or below this value. It’s a useful way to describe typical worst-case behavior without being dominated by rare outliers.
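In code, that definition corresponds to the nearest-rank percentile (a minimal sketch; the post does not state which exact percentile method the benchmark uses):

```python
def p95(samples_ms):
    """Nearest-rank 95th percentile: the smallest sample such that at least
    95% of all samples are at or below it. Assumes a non-empty list."""
    ordered = sorted(samples_ms)
    rank = max(1, -(-len(ordered) * 95 // 100))  # ceil(0.95 * n)
    return ordered[rank - 1]
```

For example, over the samples 1..100 ms this returns 95 ms: exactly 95% of requests are at or below it.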
Raw benchmark results (by region)
| region | disk_tiles_gb | disk_router_gb | disk_geocoder_gb | disk_data_package_total_gb | disk_docker_gb | net_data_package_gb | net_docker_gb | ram_total_p95_gb | tile_p95_ms | geocode_p95_ms | reverse_p95_ms | route_p95_ms | tile_total | geocode_total | reverse_total | route_total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| berlin | 0.08 | 0.15 | 0.06 | 0.29 | 1.34 | 0.16 | 0.33 | 0.18 | 8.66 | 40.64 | 5.31 | 107.59 | 35360 | 807 | 452 | 527 |
| vienna | 0.04 | 0.05 | 0.03 | 0.12 | 1.34 | 0.07 | 0.33 | 0.14 | 9.67 | 10.42 | 6.36 | 59.93 | 36000 | 768 | 455 | 545 |
| dc | 0.01 | 0.03 | 0.01 | 0.05 | 1.34 | 0.03 | 0.33 | 0.14 | 10.43 | 13.50 | 5.42 | 74.97 | 35700 | 810 | 441 | 519 |
| bavaria | 1.05 | 0.97 | 0.31 | 2.34 | 1.34 | 1.54 | 0.33 | 0.18 | 7.05 | 40.29 | 5.62 | 86.19 | 36220 | 771 | 449 | 543 |
| austria | 1.09 | 0.79 | 0.28 | 2.17 | 1.34 | 1.52 | 0.33 | 0.21 | 6.61 | 14.77 | 5.16 | 93.45 | 36480 | 801 | 456 | 544 |
| belgium | 0.79 | 0.44 | 0.22 | 1.45 | 1.34 | 1.05 | 0.33 | 0.17 | 6.97 | 21.69 | 5.76 | 58.85 | 37100 | 837 | 456 | 594 |
| colorado | 0.56 | 0.46 | 0.10 | 1.14 | 1.34 | 0.76 | 0.33 | 0.24 | 4.33 | 19.77 | 4.26 | 105.00 | 36900 | 789 | 466 | 535 |
| poland | 3.07 | 2.17 | 0.48 | 5.72 | 1.34 | 4.03 | 0.33 | 1.10 | 7.40 | 117.35 | 53.94 | 144.55 | 34900 | 813 | 426 | 529 |
| spain | 2.26 | 2.04 | 1.14 | 5.45 | 1.34 | 3.49 | 0.33 | 0.47 | 5.26 | 306.74 | 70.93 | 148.71 | 33600 | 810 | 410 | 477 |
| texas | 1.06 | 1.53 | 0.32 | 2.92 | 1.34 | 1.69 | 0.33 | 0.22 | 3.95 | 40.87 | 4.56 | 124.73 | 36340 | 834 | 475 | 557 |
| germany | 5.77 | 4.97 | 1.81 | 12.56 | 1.34 | 8.34 | 0.33 | 2.22 | 8.98 | 195.15 | 62.19 | 222.79 | 33180 | 747 | 381 | 491 |
| uk | 2.82 | 2.56 | 1.26 | 6.64 | 1.34 | 4.26 | 0.33 | 1.26 | 4.12 | 541.64 | 205.05 | 265.85 | 31880 | 792 | 382 | 468 |
| california | 1.43 | 1.60 | 0.38 | 3.43 | 1.34 | 2.17 | 0.33 | 0.37 | 4.26 | 164.35 | 19.53 | 215.47 | 35340 | 759 | 415 | 527 |
| us_west | 5.14 | 4.53 | 1.03 | 10.71 | 1.34 | 7.05 | 0.33 | 2.52 | 4.27 | 452.50 | 26.32 | 338.66 | 32480 | 681 | 394 | 510 |
How to read these results
Download size and on-disk footprint
- net_data_package_gb: how much region data you download (compressed bundle).
- disk_data_package_total_gb: how much that region data occupies on disk once installed, broken down into:
  - disk_tiles_gb (vector tiles)
  - disk_router_gb (routing data)
  - disk_geocoder_gb (geocoding DB)
- net_docker_gb: network download size for the Docker images used by the stack.
- disk_docker_gb: on-disk size of the Docker images (after pulling).
Note on Docker: image sizes are identical for every region and are a one-off cost per device (you download/pull them once; afterwards only the region data changes).
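Putting the disk columns together, here is a quick sanity check using the berlin row (values copied from the results table, not re-measured):

```python
# berlin row: per-component on-disk sizes in GB (from the results table)
berlin = {"disk_tiles_gb": 0.08, "disk_router_gb": 0.15, "disk_geocoder_gb": 0.06}
disk_docker_gb = 1.34  # same for every region

# The per-component sizes add up to disk_data_package_total_gb ...
data_package_total_gb = round(sum(berlin.values()), 2)  # 0.29

# ... and the total first-install footprint adds the Docker images on top.
first_install_gb = round(data_package_total_gb + disk_docker_gb, 2)  # 1.63
```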
RAM usage
- ram_total_p95_gb: p95 RAM usage of the stack during the measured run.
This is meant to represent RAM used by Corviont services, not a full device sizing recommendation. Your device's OS, filesystem cache, logging, and any other workloads will add overhead on top.
Endpoint performance (p95)
- tile_p95_ms
- geocode_p95_ms
- reverse_p95_ms
- route_p95_ms
These are p95 request durations during the measured run on the reference EC2 hardware. The raw results table also includes total request counts for each endpoint type (tile_total, geocode_total, reverse_total, route_total) so you can see how much traffic these p95 values were derived from.
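For context, the totals can be turned into average request rates over the 600-second measured window (berlin row as the example; these are derived averages, not separately measured numbers):

```python
MEASURED_S = 600  # length of the measured run in seconds
# berlin row request totals from the results table
totals = {"tile": 35360, "geocode": 807, "reverse": 452, "route": 527}

rates_per_s = {name: count / MEASURED_S for name, count in totals.items()}
# tiles arrive at roughly 59 req/s; the other endpoints at around one
# request per second or less
```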
How to use these numbers for your hardware
If you are primarily asking "Will this fit?", focus on:
- ram_total_p95_gb
- disk_data_package_total_gb
- disk_docker_gb
plus headroom for your OS and anything else running on the device.
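A rough fit check can be built directly from those columns. The headroom defaults below are assumptions, not measured values; tune them for your own OS and workloads:

```python
def fits(region, device_ram_gb, device_disk_gb,
         ram_headroom_gb=1.0, disk_headroom_gb=2.0):
    """Rough fit check from the table's footprint columns. The headroom
    defaults are assumptions, not measured values."""
    ram_needed = region["ram_total_p95_gb"] + ram_headroom_gb
    disk_needed = (region["disk_data_package_total_gb"]
                   + region["disk_docker_gb"]
                   + disk_headroom_gb)
    return device_ram_gb >= ram_needed and device_disk_gb >= disk_needed

# Example: the germany row on a hypothetical 4 GB RAM / 32 GB disk device
germany = {"ram_total_p95_gb": 2.22,
           "disk_data_package_total_gb": 12.56,
           "disk_docker_gb": 1.34}
```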
If you are primarily asking "Will this feel fast enough?", focus on:
- tile_p95_ms (UI smoothness while panning)
- route_p95_ms (routing responsiveness)
- geocode_p95_ms / reverse_p95_ms (search feel)
If your CPU is slower or faster per core than Graviton2, treat these timings as a baseline. Slower per-core CPUs will generally increase latencies on compute-heavy endpoints (routing/geocoding) more than on tile serving.
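If you want a starting point for your own hardware, a first-order heuristic might look like the sketch below. This is purely an assumption for back-of-envelope estimates, not a model derived from the benchmark:

```python
def scale_p95(baseline_ms, per_core_speed_ratio, compute_bound=True):
    """First-order scaling guess (an assumption, not a measured model):
    compute-bound endpoints (routing/geocoding) are assumed to scale
    inversely with per-core speed; tile serving gets half the sensitivity
    because it is less CPU-bound.
    per_core_speed_ratio = your_cpu_per_core_speed / graviton2_per_core_speed
    """
    factor = 1.0 / per_core_speed_ratio
    if not compute_bound:
        factor = (1.0 + factor) / 2.0  # dampen: tiles are less CPU-sensitive
    return baseline_ms * factor
```

Measure on your target device before committing to any sizing based on this.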
Notes and limits
- These results reflect a single-user interactive workload on a dedicated machine.
- For multiple concurrent users or background workloads, plan additional headroom (cores and RAM) depending on your concurrency and latency targets.
- Warm-up is discarded; only the 600-second measured run is published to keep comparisons consistent.