We make fast, reliable games on real hardware. This page explains the targets we build to, the systems that enforce them, and why it matters — so you can tell in under a minute if our stack fits your project.
Nothing here is a marketing slogan. These are the budgets, gates and runtime tools we rely on every day to keep games smooth on Steam Deck, PinePhone and the browsers and laptops around them.
Every target device sits in a simple three-colour state. Producers get a 30-second answer. Engineers see exactly which gate flipped and why.
Green — Performance, memory, crash-rate and save/load targets pass at reference quality. No automatic quality reductions are active beyond what design explicitly signed off on.
Amber — Targets pass, but the engine is applying automatic quality steps: resolution, VFX density, shadow quality, crowd counts and similar. The game still feels right, but we’re spending headroom.
Red — One or more hard targets fail. We don’t “hope” it’s fine: we cut scope, adjust content or fix the regression before shipping.
Why this matters: producers get a clear state; engineering knows exactly which gate flipped, instead of arguing about “feels fine on my machine”.
These are targets measured by runtime traces and CI — an engineering standard, not a legal SLA. We publish the approach and instrumentation so partners can inspect or extend the same metrics on their own builds.
Why this matters: art and design know which features degrade first, and engineering avoids “works on my PC” traps.
Fixed-step control loop: Input → Simulation → Render → Present. Simulation runs at a fixed rate; rendering aligns to display timing. No hidden “variable delta drama” inside game logic.
Within a given platform build, we aim for deterministic behaviour: scene-seeded PRNG, no wall-clock timing in core logic, IO applied at frame cut-lines. Same seed + same inputs ⇒ same outcome for QA replay.
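The loop above can be sketched as an accumulator-driven fixed-step driver. This is a minimal illustration, not our engine API: names like `kSimHz` and `pump` are made up, and the real loop also handles render interpolation and input sampling.

```cpp
#include <cstdint>

// Illustrative fixed-step driver: simulation only ever advances in
// fixed slices of kDt, regardless of how ragged real frame times are.
constexpr double kSimHz = 120.0;        // e.g. the Steam Deck preset
constexpr double kDt    = 1.0 / kSimHz; // fixed simulation step

struct LoopState {
    double   accumulator = 0.0; // unconsumed wall time, in seconds
    uint64_t simTicks    = 0;   // deterministic tick counter for replay
};

// Consume `frameTime` seconds of real time; returns how many fixed
// simulation steps ran this frame. Render interpolation alpha is
// accumulator / kDt after the call.
int pump(LoopState& s, double frameTime) {
    // Clamp pathological frames so a long stall never spirals into
    // an unbounded catch-up loop.
    if (frameTime > 0.25) frameTime = 0.25;
    s.accumulator += frameTime;
    int steps = 0;
    while (s.accumulator >= kDt) {
        // simulate(kDt) would run here; game logic only ever sees kDt.
        s.accumulator -= kDt;
        ++s.simTicks;
        ++steps;
    }
    return steps;
}
```

Because logic only ever sees `kDt` and the tick counter, the same seed and input stream replays to the same tick-by-tick outcome.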
Camera and UI read the latest input samples during render prep without resimulating the world. This keeps controls feeling crisp without blowing up simulation cost.
Why this matters: QA can replay issues reliably, and designers get the same feel across devices instead of chasing phantom bugs.
We budget each frame into observable lanes. When a feature adds 2–3 ms, we know who pays for it.
Every system declares a budget and a fallback. When something tips over, we don’t just see “frame slow” — we see which lane breached, which build introduced it and what the runtime did in response.
That makes performance an ongoing conversation between design and engineering, not a panic the week before launch.
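A lane ledger of this kind can be sketched in a few lines. The lane names and budgets below are illustrative (they echo the summary at the bottom of this page), not the engine's actual telemetry schema:

```cpp
#include <string>
#include <vector>

// Hypothetical per-frame lane ledger: each lane declares a budget in
// milliseconds and accumulates what it actually spent, so a breach
// names the lane rather than just "frame slow".
struct Lane {
    std::string name;
    double budgetMs;
    double spentMs = 0.0;
};

struct FrameLedger {
    std::vector<Lane> lanes;

    void charge(const std::string& name, double ms) {
        for (Lane& l : lanes)
            if (l.name == name) { l.spentMs += ms; return; }
    }

    // Names of every lane over its declared budget this frame.
    std::vector<std::string> breaches() const {
        std::vector<std::string> out;
        for (const Lane& l : lanes)
            if (l.spentMs > l.budgetMs) out.push_back(l.name);
        return out;
    }
};
```

When a feature costs 2–3 ms, the ledger shows which lane paid, which is what turns "the frame is slow" into an actionable ticket.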
This keeps world queries predictable even when the content gets dense, and avoids surprise N² explosions in crowded scenes.
Streaming is back-pressured with per-frame budgets. Decoding, decompression and GPU uploads are sliced across frames to avoid single “big frame” stalls.
Why this matters: worlds stream in predictably, and the main loop never blocks on a hidden IO spike.
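The back-pressure idea reduces to a budgeted work queue. A minimal sketch, with invented names and a byte budget standing in for the real per-lane time budget (a real implementation would also split oversized uploads rather than hold them whole):

```cpp
#include <cstddef>
#include <deque>

struct Upload { size_t bytes; };

// Sketch of per-frame streaming back-pressure: pending uploads are
// sliced so no single frame pays for more than `budgetBytes`.
class StreamQueue {
public:
    void enqueue(Upload u) { pending_.push_back(u); }

    // Drain at most `budgetBytes` of work this frame; the remainder
    // waits, so a burst of IO becomes several calm frames instead of
    // one big stall. Returns bytes actually uploaded.
    size_t pump(size_t budgetBytes) {
        size_t spent = 0;
        while (!pending_.empty() &&
               spent + pending_.front().bytes <= budgetBytes) {
            spent += pending_.front().bytes;
            pending_.pop_front(); // decode/decompress/upload happens here
        }
        return spent;
    }

    size_t pendingCount() const { return pending_.size(); }

private:
    std::deque<Upload> pending_;
};
```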
Rendering uses a frame graph with explicit resource lifetimes. No hidden main-thread sync; transient buffers exist only as long as needed.
State buckets, sort and instancing minimise API churn. The graph knows where we can batch and where we must split for readability or effects.
Hitch-Guard demotes gracefully — resolution, VFX intensity, shadows, crowd density — before any stall. We’d rather give players one step down in fidelity than a visible hitch.
Smooth motion beats raw FPS. Hitches are surfaced in CI and internal rings long before players ever see them.
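The demotion ladder behind Hitch-Guard can be sketched like this. The rung names, target and promotion window are illustrative, not the shipped tuning:

```cpp
#include <cstdint>

// Illustrative demotion ladder: when a frame runs over target the guard
// steps one rung down (cheaper) per decision, and only steps back up
// after a sustained run of healthy frames, so quality never flaps.
enum class Rung : uint8_t { Full, ReducedVfx, ReducedShadows, ReducedRes };

class HitchGuard {
public:
    explicit HitchGuard(double targetMs) : targetMs_(targetMs) {}

    Rung observe(double frameMs) {
        if (frameMs > targetMs_) {
            healthyStreak_ = 0;
            if (rung_ < 3) ++rung_;      // demote one step, never jump
        } else if (++healthyStreak_ >= kPromoteAfter && rung_ > 0) {
            healthyStreak_ = 0;
            --rung_;                     // promote cautiously
        }
        return static_cast<Rung>(rung_);
    }

private:
    static constexpr int kPromoteAfter = 120; // ~1 s of good frames @120 Hz
    double targetMs_;
    int rung_ = 0;
    int healthyStreak_ = 0;
};
```

The asymmetry is the point: one bad frame is enough to demote, but promotion waits for sustained headroom, trading a step of fidelity for motion that never stutters.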
At runtime we favour utility selectors with hysteresis, bounded planning and hierarchical navigation (sector → local mesh → steering). That keeps behaviour responsive without exploding CPU cost.
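The hysteresis part of a utility selector is small enough to show whole. Everything here is illustrative (action names, the `stickiness` bonus): the running action gets a small score bonus so agents don't flicker between near-equal options.

```cpp
#include <cstddef>
#include <vector>

struct ScoredAction { const char* name; double utility; };

// `current` is the index of the running action, or -1 for none.
// `stickiness` is the hysteresis bonus applied to the current action.
int select(const std::vector<ScoredAction>& actions, int current,
           double stickiness = 0.1) {
    int best = -1;
    double bestScore = -1.0;
    for (size_t i = 0; i < actions.size(); ++i) {
        double score = actions[i].utility;
        if (static_cast<int>(i) == current) score += stickiness;
        if (score > bestScore) {
            bestScore = score;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```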
Test bots run scripted routes and input spam on PRs and nightly builds. Coverage and pass rates go onto an internal status board so designers can request “run this route” and get reproducible results while they iterate.
Builds use content-addressed assets: each file has a hash and size recorded in a manifest. One source, many targets — platform presets set compression, formats and budgets. Rebuilds are minimal and reproducible.
Why this matters: anyone can diff manifests between builds and see exactly what changed, instead of guessing which asset snuck in.
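A manifest diff of the kind described is a few lines once assets are content-addressed. In this sketch `std::hash` stands in for a real content hash (a cryptographic digest in practice), purely for illustration:

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Manifest sketch: asset path -> (content hash, size).
struct Entry { size_t hash; size_t bytes; };
using Manifest = std::map<std::string, Entry>;

Entry describe(const std::string& contents) {
    return Entry{std::hash<std::string>{}(contents), contents.size()};
}

// Diffing two manifests shows exactly which assets changed between
// builds: added, removed, or rehashed.
std::vector<std::string> changed(const Manifest& a, const Manifest& b) {
    std::vector<std::string> out;
    for (const auto& [path, e] : b) {
        auto it = a.find(path);
        if (it == a.end() || it->second.hash != e.hash) out.push_back(path);
    }
    for (const auto& kv : a)
        if (!b.count(kv.first)) out.push_back(kv.first);
    return out;
}
```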
A debug HUD exposes lane budgets, hitch detector state, VRAM/heap usage and draw-call heatmaps. Designers see what the engine is doing when they push a scene, not just the final frame.
Sessions can export JSON traces. CI bots parse spans, compare against targets and open tickets automatically when thresholds are breached.
When a gate flips, the ticket already points to the guilty spans and lane, so we fix the real cause instead of treating symptoms.
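The CI-side check is conceptually simple once the trace is parsed. A sketch with the JSON parsing omitted and hypothetical span/target names (the real targets live in the summary at the bottom of this page):

```cpp
#include <string>
#include <vector>

struct Span   { std::string name; double ms; };
struct Target { std::string name; double maxMs; };

// Compare measured spans against declared targets; each breach becomes
// a ticket candidate naming the guilty span, not just the slow frame.
std::vector<std::string> breaches(const std::vector<Span>& spans,
                                  const std::vector<Target>& targets) {
    std::vector<std::string> out;
    for (const Target& t : targets)
        for (const Span& s : spans)
            if (s.name == t.name && s.ms > t.maxMs)
                out.push_back(s.name);
    return out;
}
```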
Forward-compatible tagged binary: little-endian, length-prefixed chunks with per-chunk checksums and a whole-file hash.
Migrations are pure functions exercised against golden saves. We can fix forward, change fields and adjust systems without breaking player progress.
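The chunk layout can be sketched as `[tag:4][length:4][payload][crc:4]`, little-endian throughout. The checksum below is a trivial multiplicative sum purely for illustration; the real format uses a proper CRC, and the whole-file hash sits on top of this:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Bytes = std::vector<uint8_t>;

static uint32_t checksum(const uint8_t* p, size_t n) { // stand-in for CRC
    uint32_t c = 0;
    for (size_t i = 0; i < n; ++i) c = c * 31 + p[i];
    return c;
}

static void putU32(Bytes& out, uint32_t v) {           // little-endian
    for (int i = 0; i < 4; ++i) out.push_back(uint8_t(v >> (8 * i)));
}

void writeChunk(Bytes& out, const char tag[4], const Bytes& payload) {
    out.insert(out.end(), tag, tag + 4);
    putU32(out, uint32_t(payload.size()));
    out.insert(out.end(), payload.begin(), payload.end());
    putU32(out, checksum(payload.data(), payload.size()));
}

// Readers skip unknown tags by length, which is what makes the format
// forward-compatible: old builds carry new chunks along untouched.
bool verifyChunk(const Bytes& buf, size_t off) {
    if (off + 12 > buf.size()) return false;
    uint32_t len = 0;
    for (int i = 0; i < 4; ++i) len |= uint32_t(buf[off + 4 + i]) << (8 * i);
    if (off + 12 + len > buf.size()) return false;
    uint32_t stored = 0;
    for (int i = 0; i < 4; ++i)
        stored |= uint32_t(buf[off + 8 + len + i]) << (8 * i);
    return stored == checksum(buf.data() + off + 8, len);
}
```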
Retail builds are signed, with separate signed symbol bundles. No dynamic code loading in shipping configurations.
Browser/WASM builds run under a capability whitelist: no surprise network calls, no hidden file access beyond what the host page grants.
Publisher checklists stay green and attack surface stays predictable. That pays off when games go from prototype to storefront.
Latency-aware remapping preserves early-sample guarantees while keeping touch/gamepad parity. Any input path that matters for play is measured, not guessed.
We maintain a blocking test row: 200% UI scale, protanopia palette, “Reduce Motion” ON. All core paths must be reachable via keyboard-only or single-switch setups.
Accessibility is a gate in the same way performance and crashes are. If accessibility fails, the build doesn’t move forward.
New systems land with a price tag and a fallback, not surprises.
Each engine module follows a simple contract:
init(config) → tick(dt_fixed)* → render(view) → teardown()
CPU, GPU and memory targets are declared up front. Over-target behaviour demotes predictably and surfaces in the HUD.
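The contract above maps naturally onto an interface. This is a minimal rendering of it, with illustrative type names, plus a trivial module to show the lifecycle:

```cpp
struct ModuleConfig { /* per-module settings would live here */ };
struct View         { /* camera + viewport for this frame */ };

class IEngineModule {
public:
    virtual ~IEngineModule() = default;
    virtual bool init(const ModuleConfig& cfg) = 0; // once, before first tick
    virtual void tick(double dtFixed) = 0;          // fixed-step simulation
    virtual void render(const View& view) = 0;      // must not mutate sim state
    virtual void teardown() = 0;                    // release resources
};

// Trivial concrete module: counts fixed-step ticks.
class TickCounter : public IEngineModule {
public:
    bool init(const ModuleConfig&) override { ticks_ = 0; return true; }
    void tick(double) override { ++ticks_; }
    void render(const View&) override {}
    void teardown() override {}
    int ticks() const { return ticks_; }

private:
    int ticks_ = 0;
};
```

Keeping `render` read-only with respect to simulation state is what lets the fixed-step guarantees from the loop section survive per-module.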
Experimental systems go behind feature flags with their own budgets, so we can test them in rings without risking the stability of the main game.
GC-aware allocator, tiled rendering path and tighter memory ceilings for browser builds — so the web versions feel like native games, not afterthoughts.
Text-diffable behaviour logs, faster route creation and better visualisation of AI decisions, so designers can tune behaviour without spelunking through code.
Per-scene power caps so handhelds stay cool without losing feel: target wattage ranges, thermal-aware quality steps and better battery-life telemetry.
// Engine targets
// - Fixed sim (120 Hz Deck / 60 Hz PinePhone), decoupled render
// - Main-thread budget: 2 ms IO, 2.5 ms gameplay, 2 ms render submit
// - GPU lane: auto-demote at configured hitch threshold
// - Deterministic replay (within platform build): same seed + inputs ⇒ same outcome
// - Saves: forward-compatible; per-chunk CRC + whole-file hash
// - Gates (stable ring): ≤1 hitch >8 ms /10 min; 99.9% crash-free; no unbounded memory drift