This post explains how the hardware footprint numbers shown on the landing page were measured (RAM usage, download size, and on-disk footprint), what each metric means, and how to interpret it. It also contains the full benchmark results, including endpoint performance, for readers who want the detailed numbers.
Benchmark setup
- One dedicated EC2 instance per run. No other workloads on the machine.
- Reference hardware: AWS EC2 c6g.xlarge (Graviton2) as the baseline CPU for reported numbers.
- Two-phase run:
  - Warm-up: 30 seconds (not counted)
  - Measured: 600 seconds (this is what is reported)
The warm-up exists to avoid "first request" artifacts (cold caches, initial file reads, first route graph touches) and to make region-to-region comparisons fairer.
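In pseudocode terms, the two-phase structure looks roughly like the sketch below. `do_action` is a hypothetical stand-in for whichever request the simulated user issues next; this is illustrative, not the actual harness.

```python
import time

WARMUP_S = 30     # exercised but not counted
MEASURED_S = 600  # this window is what gets reported

def run_benchmark(do_action):
    # Warm-up: hit the stack but discard everything, so cold caches,
    # initial file reads, and first route graph touches don't skew results.
    deadline = time.monotonic() + WARMUP_S
    while time.monotonic() < deadline:
        do_action()

    # Measured run: record the duration of every request in milliseconds.
    latencies_ms = []
    deadline = time.monotonic() + MEASURED_S
    while time.monotonic() < deadline:
        start = time.monotonic()
        do_action()
        latencies_ms.append((time.monotonic() - start) * 1000)
    return latencies_ms
```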
What the benchmark simulates
The benchmark simulates a single interactive user doing common map actions:
- loading vector tiles while panning/zooming
- occasional forward search (geocoding)
- occasional reverse geocoding (tap/click)
- occasional routing (point-to-point)
It is intentionally not a bulk throughput test. The intent is to answer: "Will it feel responsive for an interactive UI, and what is the resource footprint?"
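A minimal sketch of such a user loop, with hypothetical weights chosen to roughly match the request totals in the table below (tiles dominate; search and routing are occasional):

```python
import random

# Hypothetical action mix; the ratios roughly match the *_total
# columns in the results table (tiles ~95%, the rest occasional).
ACTIONS = ["tile", "geocode", "reverse", "route"]
WEIGHTS = [0.95, 0.02, 0.01, 0.02]

def next_action():
    # Pick the simulated user's next map action.
    return random.choices(ACTIONS, weights=WEIGHTS, k=1)[0]
```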
Note on p95: throughout this post and in the results below, p95 (95th percentile) means "95% of requests were faster than this value." It’s a useful way to describe typical worst-case behavior without being dominated by rare outliers.
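For readers who want to compute it themselves, a nearest-rank p95 is just a sorted lookup (library percentile functions often interpolate instead, which gives slightly different values):

```python
import math

def p95(samples):
    # Nearest-rank 95th percentile: the smallest sample that is
    # greater than or equal to 95% of all samples.
    xs = sorted(samples)
    return xs[math.ceil(0.95 * len(xs)) - 1]

# e.g. for 548 route requests, p95 is the 521st fastest:
# math.ceil(0.95 * 548) == 521
```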
Raw benchmark results (by region)
| region | disk_tiles_gb | disk_router_gb | disk_geocoder_gb | disk_data_package_total_gb | disk_docker_gb | net_data_package_gb | net_docker_gb | ram_total_p95_gb | tile_p95_ms | geocode_p95_ms | reverse_p95_ms | route_p95_ms | tile_total | geocode_total | reverse_total | route_total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| berlin | 0.08 | 0.15 | 0.03 | 0.27 | 1.34 | 0.15 | 0.34 | 0.16 | 8.69 | 68.11 | 6.59 | 107.53 | 35200 | 768 | 424 | 548 |
| vienna | 0.04 | 0.05 | 0.01 | 0.11 | 1.34 | 0.07 | 0.34 | 0.14 | 9.66 | 12.57 | 6.84 | 57.27 | 36700 | 846 | 447 | 531 |
| dc | 0.01 | 0.03 | 0.01 | 0.05 | 1.34 | 0.02 | 0.34 | 0.14 | 10.47 | 19.07 | 6.31 | 78.08 | 35620 | 795 | 427 | 541 |
| bavaria | 1.05 | 0.97 | 0.17 | 2.20 | 1.34 | 1.49 | 0.34 | 0.21 | 6.73 | 60.52 | 5.78 | 101.52 | 36200 | 831 | 413 | 524 |
| austria | 1.09 | 0.78 | 0.16 | 2.04 | 1.34 | 1.48 | 0.34 | 0.22 | 6.48 | 36.35 | 5.88 | 87.63 | 36320 | 876 | 448 | 540 |
| belgium | 0.79 | 0.44 | 0.12 | 1.35 | 1.34 | 1.01 | 0.34 | 0.16 | 6.90 | 37.83 | 5.90 | 59.84 | 36860 | 867 | 460 | 591 |
| colorado | 0.56 | 0.46 | 0.06 | 1.09 | 1.34 | 0.74 | 0.34 | 0.21 | 4.23 | 35.48 | 5.32 | 82.97 | 36880 | 882 | 439 | 521 |
| poland | 3.06 | 2.17 | 0.27 | 5.51 | 1.34 | 3.95 | 0.34 | 1.00 | 7.35 | 169.32 | 36.86 | 141.47 | 35080 | 768 | 423 | 537 |
| spain | 2.25 | 2.03 | 0.67 | 4.97 | 1.34 | 3.33 | 0.34 | 0.54 | 5.10 | 311.12 | 20.49 | 132.19 | 33140 | 816 | 452 | 506 |
| texas | 1.06 | 1.53 | 0.18 | 2.78 | 1.34 | 1.64 | 0.34 | 0.21 | 3.72 | 115.24 | 4.94 | 100.96 | 36480 | 843 | 460 | 536 |
| germany | 5.76 | 4.97 | 1.02 | 11.76 | 1.34 | 8.04 | 0.34 | 2.92 | 8.94 | 329.51 | 103.18 | 277.47 | 30960 | 726 | 380 | 469 |
| uk | 2.81 | 2.56 | 0.74 | 6.11 | 1.34 | 4.06 | 0.34 | 1.57 | 4.04 | 191.15 | 70.71 | 197.53 | 33980 | 666 | 417 | 499 |
| california | 1.43 | 1.59 | 0.22 | 3.26 | 1.34 | 2.12 | 0.34 | 0.36 | 4.60 | 119.52 | 14.00 | 225.45 | 34580 | 762 | 432 | 561 |
| us_west | 5.14 | 4.51 | 0.60 | 10.26 | 1.34 | 6.90 | 0.34 | 2.67 | 4.26 | 183.91 | 19.33 | 304.10 | 32760 | 822 | 371 | 502 |
How to read these results
Download size and on-disk footprint
- net_data_package_gb: how much region data you download (compressed bundle).
- disk_data_package_total_gb: how much that region data occupies on disk once installed, broken down into:
  - disk_tiles_gb (vector tiles)
  - disk_router_gb (routing data)
  - disk_geocoder_gb (geocoding DB)
- net_docker_gb: network download size for the Docker images used by the stack.
- disk_docker_gb: on-disk size of the Docker images (after pulling).
Note on Docker: image sizes are identical across regions and are a one-off cost per device (you download/pull the images once; after that, only the region data changes).
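If you want to verify the disk_* numbers against your own install, a simple directory walk does it. The layout below is hypothetical; point it at wherever your data package actually lives:

```python
import os

def dir_size_gb(path):
    # Sum the sizes of all regular files under `path`.
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1024**3

# Hypothetical install layout; adjust the base path to your setup.
for part in ("tiles", "router", "geocoder"):
    print(part, round(dir_size_gb(f"/data/{part}"), 2), "GB")
```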
RAM usage
- ram_total_p95_gb: p95 RAM usage of the stack during the measured run.
This represents RAM used by the Corviont services themselves, not a full device-sizing recommendation. Your device's OS, filesystem cache, logging, and any other workloads will add overhead on top of that.
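As an illustration of how such a figure can be sampled: poll the stack's resident memory once per second over the measured window and take the p95 of the samples. The sketch below assumes the services run as local processes with hypothetical names and uses the third-party psutil package; in a Docker deployment you would read the containers' cgroup memory instead.

```python
import time
import psutil  # third-party: pip install psutil

# Hypothetical process names for the stack's services.
SERVICE_NAMES = {"tileserver", "router", "geocoder"}

def stack_rss_gb():
    # Sum resident set size (RSS) across the stack's processes.
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] in SERVICE_NAMES:
            total += proc.info["memory_info"].rss
    return total / 1024**3

samples = []
deadline = time.monotonic() + 600  # the measured window
while time.monotonic() < deadline:
    samples.append(stack_rss_gb())
    time.sleep(1)

samples.sort()
# Nearest-rank p95, as defined earlier in this post.
print("ram_total_p95_gb ~", round(samples[int(0.95 * len(samples)) - 1], 2))
```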
Endpoint performance (p95)
- tile_p95_ms
- geocode_p95_ms
- reverse_p95_ms
- route_p95_ms
These are p95 request durations during the measured run on the reference EC2 hardware. The raw results table also includes total request counts for each endpoint type (tile_total, geocode_total, reverse_total, route_total) so you can see how much traffic these p95 values were derived from.
How to use these numbers for your hardware
If you are primarily asking "Will this fit?", focus on:
- ram_total_p95_gb
- disk_data_package_total_gb
- disk_docker_gb
plus your own OS/headroom.
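For a concrete worked example, take the poland row: 5.51 GB of region data plus 1.34 GB of Docker images is about 6.9 GB on disk, and p95 RAM was 1.00 GB. A device with 16 GB of storage and 2 GB of RAM therefore fits, with the OS, filesystem cache, and your own headroom coming out of the remainder.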
If you are primarily asking "Will this feel fast enough?", focus on:
- tile_p95_ms (UI smoothness while panning)
- route_p95_ms (routing responsiveness)
- geocode_p95_ms / reverse_p95_ms (search feel)
If your CPU is slower or faster per core than Graviton2, treat these timings as a baseline. Slower per-core CPUs will generally inflate latencies on the compute-heavy endpoints (routing, geocoding) more than on tile serving.
Notes and limits
- These results reflect a single-user interactive workload on a dedicated machine.
- For multiple concurrent users or background workloads, plan additional headroom (cores and RAM) depending on your concurrency and latency targets.
- Warm-up is discarded; only the 600-second measured run is published to keep comparisons consistent.