Technology

We make fast, reliable games on real hardware. This page explains the targets we build to, the systems that enforce them, and why it matters — so you can tell in under a minute if our stack fits your project.

Nothing here is a marketing slogan. These are the budgets, gates and runtime tools we rely on every day to keep games smooth on Steam Deck, PinePhone and the browsers and laptops around them.

Ship Gates: Green / Yellow / Red

Every target device sits in a simple three-colour state. Producers get a 30-second answer. Engineers see exactly which gate flipped and why.

Green

Performance, memory, crash rate and save/load targets pass at reference quality. No automatic quality reductions are active beyond what design has explicitly signed off.

Yellow

Targets pass, but the engine is applying automatic quality steps: resolution, VFX density, shadow quality, crowd counts and similar. The game still feels right, but we’re spending budget.

Red

One or more hard targets fail. We don’t “hope” it’s fine: we either cut scope, adjust content or fix the regression before shipping.

Why this matters: producers get a clear state; engineering knows exactly which gate flipped, instead of arguing about “feels fine on my machine”.

Cheat Sheet: Engine Targets

Core Targets

  • Simulation tick: 120 Hz (Steam Deck) · 60 Hz (PinePhone).
  • Main-thread slice (max target): 2 ms.
  • Input → photon (median target): < 50 ms (Deck) · < 60 ms (PinePhone).
  • Crash-free sessions (stable ring): ≥ 99.9%.

These are targets measured by runtime traces and CI — an engineering standard, not a legal SLA. We publish the approach and instrumentation so partners can inspect or extend the same metrics on their own builds.

Platforms & Reference Devices

  • Steam Deck (LCD/OLED) — timing, power envelope, handheld UX.
  • PinePhone Pro — touch-first, constrained CPU/GPU, aggressive streaming.
  • Desktop Linux (ARM-first) — dev workstation, capture machine, reproducible builds.
  • Browser (WASM subset) — fixed-step sim, strict memory ceilings, predictable GC windows.

Why this matters: art and design know which features degrade first, and engineering avoids “works on my PC” traps.

Engine Fundamentals

Control Loop

Fixed-step control loop: Input → Simulation → Render → Present. Simulation runs at a fixed rate; rendering aligns to display timing. No hidden “variable delta drama” inside game logic.
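As a minimal sketch of that loop's timing side — the accumulator pattern behind a fixed-step simulation with decoupled rendering — here is an illustrative version (constant names and the function itself are examples, not the engine's real API):

```python
SIM_DT = 1.0 / 120.0  # fixed simulation step (120 Hz Steam Deck target)

def pump_frame(accumulator, frame_time, max_steps=8):
    """Advance the accumulator by one rendered frame's wall time.

    Returns (steps_to_simulate, leftover_accumulator, alpha), where
    alpha in [0, 1) is the interpolation factor the renderer uses to
    blend between the last two simulation states."""
    accumulator += frame_time
    steps = 0
    # Clamp the step count so a long frame can't trigger a "spiral of death".
    while accumulator >= SIM_DT and steps < max_steps:
        accumulator -= SIM_DT
        steps += 1
    alpha = accumulator / SIM_DT
    return steps, accumulator, alpha
```

A 60 Hz display frame simply drains two 120 Hz simulation steps; game logic only ever sees the fixed `SIM_DT`.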

Determinism (Per-Platform)

Within a given platform build, we aim for deterministic behaviour: scene-seeded PRNG, no wall-clock timing in core logic, IO applied at frame cut-lines. Same seed + same inputs ⇒ same outcome for QA replay.
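A toy illustration of that contract — a hypothetical miniature sim with a scene-seeded PRNG and inputs applied only at frame cut-lines:

```python
import random

def run_sim(seed, inputs, frames):
    """Same seed + same inputs => same outcome (within one platform build)."""
    rng = random.Random(seed)  # scene-seeded; no wall-clock timing in core logic
    state = 0
    for frame in range(frames):
        for cmd in inputs.get(frame, []):  # IO applied at the frame cut-line
            state += cmd
        state = (state * 31 + rng.randrange(100)) % 100_003
    return state
```

Because nothing inside the loop touches the wall clock or unseeded randomness, QA can replay a recorded `(seed, inputs)` pair and land on the same state every time.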

Late-Latching

Camera and UI read the latest input samples during render prep without resimulating the world. This keeps controls feeling crisp without blowing up simulation cost.

Why this matters: QA can replay issues reliably, and designers get the same feel across devices instead of chasing phantom bugs.

Performance & Latency Budgets

We budget each frame into observable lanes. When a feature adds 2–3 ms, we know who pays for it.

Runtime Lanes

  • Input / EarlySample — early poll; alerts on sustained p99 spikes.
  • Gameplay / Control — tickets if p99 exceeds budget over a rolling window.
  • Physics / Narrow — demotes collider detail when p99 drifts high.
  • Render / Submit — tickets + triage on budget breaches.
  • GPU / Render — “Hitch-Guard” demotes resolution/VFX at thresholds instead of stalling.
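The per-lane checks above can be approximated with a rolling-window p99 monitor; budgets, window size and the API in this sketch are made up for illustration:

```python
from collections import deque

class LaneMonitor:
    def __init__(self, budget_ms, window=1000):
        self.budget_ms = budget_ms
        self.samples = deque(maxlen=window)  # rolling window of frame timings

    def record(self, ms):
        self.samples.append(ms)

    def p99(self):
        ordered = sorted(self.samples)
        idx = min(len(ordered) - 1, int(0.99 * len(ordered)))
        return ordered[idx]

    def breached(self):
        # A sustained breach here is what opens a ticket or triggers a demotion.
        return len(self.samples) > 0 and self.p99() > self.budget_ms
```

One spike in a thousand quiet frames moves the p99; a single outlier in a healthy window does not page anyone until it persists.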

Why It Matters

Every system declares a budget and a fallback. When something tips over, we don’t just see “frame slow” — we see which lane breached, which build introduced it and what the runtime did in response.

That makes performance an ongoing conversation between design and engineering, not a panic the week before launch.

Streaming & World System

World Layout

  • Spatial tiles with cache-friendly indexing and O(1) neighbourhood queries.
  • Constant-time broadphase per tile; batched narrowphase; continuous tests for fast bodies.

This keeps world queries predictable even when the content gets dense, and avoids surprise O(N²) explosions in crowded scenes.
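The tile idea in miniature — entities hash into fixed-size tiles and a query only ever touches the 3×3 tile block around a position (tile size and API are illustrative):

```python
TILE = 8.0  # world units per tile (example value)

class TileGrid:
    def __init__(self):
        self.tiles = {}  # (tx, ty) -> list of entity ids

    def _key(self, x, y):
        return (int(x // TILE), int(y // TILE))

    def insert(self, eid, x, y):
        self.tiles.setdefault(self._key(x, y), []).append(eid)

    def neighbours(self, x, y):
        """Entities in the 3x3 tile block around (x, y): constant tile count."""
        tx, ty = self._key(x, y)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.extend(self.tiles.get((tx + dx, ty + dy), []))
        return out
```

The cost of a neighbourhood query depends on local density, never on total world size.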

Streaming Behaviour

Streaming is back-pressured with per-frame budgets. Decoding, decompression and GPU uploads are sliced across frames to avoid single “big frame” stalls.
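A sketch of that back-pressure: pending work is drained against a per-frame millisecond budget instead of running to completion in one frame (job names and costs here are pretend numbers):

```python
from collections import deque

def pump_streaming(queue, frame_budget_ms):
    """Pop jobs until the frame budget is spent; leftovers wait a frame."""
    spent = 0.0
    done = []
    while queue and spent + queue[0][1] <= frame_budget_ms:
        name, cost_ms = queue.popleft()
        spent += cost_ms
        done.append(name)
    return done, spent
```

A heavy decompress job that doesn't fit this frame simply stays queued, which is exactly the "no single big frame" behaviour described above.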

Why this matters: worlds stream in predictably, and the main loop never blocks on a hidden IO spike.

Rendering & Frame Pacing

Frame Graph

Rendering uses a frame graph with explicit resource lifetimes. No hidden main-thread sync; transient buffers exist only as long as needed.

State Management

State buckets, sort and instancing minimise API churn. The graph knows where we can batch and where we must split for readability or effects.

Hitch-Guard

Hitch-Guard demotes gracefully — resolution, VFX intensity, shadows, crowd density — before any stall. We’d rather give players one step down in fidelity than a visible hitch.
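An illustrative ladder for that behaviour — ordered quality steps, demoted one at a time past a threshold and promoted back under sustained headroom (step names and thresholds are made up, not the shipped configuration):

```python
LADDER = ["full", "reduced_vfx", "reduced_shadows", "half_res"]

class HitchGuard:
    def __init__(self, threshold_ms=8.0, headroom_ms=5.0):
        self.threshold_ms = threshold_ms
        self.headroom_ms = headroom_ms
        self.step = 0  # index into LADDER

    def observe(self, gpu_ms):
        if gpu_ms > self.threshold_ms and self.step < len(LADDER) - 1:
            self.step += 1  # demote before a visible stall
        elif gpu_ms < self.headroom_ms and self.step > 0:
            self.step -= 1  # promote back when there is headroom
        return LADDER[self.step]
```

The gap between `threshold_ms` and `headroom_ms` is the hysteresis band that stops the quality level from flickering frame to frame.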

Smooth motion beats raw FPS. Hitches are surfaced in CI and internal rings long before players ever see them.

AI Systems

Runtime Behaviour

At runtime we favour utility selectors with hysteresis, bounded planning and hierarchical navigation (sector → local mesh → steering). That keeps behaviour responsive without exploding CPU cost.
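The hysteresis part in a toy utility selector — the current behaviour keeps a small score bonus, so agents don't flicker between two options with near-equal utility (behaviours, scores and the bonus are illustrative):

```python
HYSTERESIS = 0.1  # stickiness bonus for the currently active behaviour

def select(scores, current=None):
    """Pick the highest-utility behaviour, biased toward the current one."""
    best, best_score = None, float("-inf")
    for name, score in scores.items():
        if name == current:
            score += HYSTERESIS
        if score > best_score:
            best, best_score = name, score
    return best
```

A marginal lead is not enough to interrupt what the agent is already doing; a clear lead is.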

Test Bots & Coverage

Test bots run scripted routes and input spam on PRs and nightly builds. Coverage and pass rates go onto an internal status board so designers can request “run this route” and get reproducible results while they iterate.

Toolchain & Builds

Deterministic Outputs

Builds use content-addressed assets: each file has a hash and size recorded in a manifest. One source, many targets — platform presets set compression, formats and budgets. Rebuilds are minimal and reproducible.
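Content addressing in miniature — each asset is recorded by hash and size, so diffing two manifests shows exactly which assets changed (file names and the manifest layout here are illustrative, not our real manifest schema):

```python
import hashlib

def manifest_entry(name, data: bytes):
    return {"name": name, "size": len(data),
            "sha256": hashlib.sha256(data).hexdigest()}

def diff_manifests(old, new):
    """Return names whose content hash changed, or that were added."""
    old_by_name = {e["name"]: e["sha256"] for e in old}
    return [e["name"] for e in new if old_by_name.get(e["name"]) != e["sha256"]]
```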

Pre-Flight Checks

  • Textures — dimensions, color space, alpha usage.
  • Meshes — bounds, LOD sanity, degenerate triangles.
  • Audio — loudness, loops, channel layout.

Why this matters: anyone can diff manifests between builds and see exactly what changed, instead of guessing which asset snuck in.

Observability & QA Gates

In-Game HUD

A debug HUD exposes lane budgets, hitch detector state, VRAM/heap usage and draw-call heatmaps. Designers see what the engine is doing when they push a scene, not just the final frame.

Traces & Bots

Sessions can export JSON traces. CI bots parse spans, compare against targets and open tickets automatically when thresholds are breached.
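The CI side of that loop, sketched: parse exported spans, compare each lane against its target and collect breaches. The trace shape and target table below are assumptions for illustration, not the real export format:

```python
import json

TARGETS_MS = {"Gameplay/Control": 2.5, "Render/Submit": 2.0}  # example budgets

def find_breaches(trace_json):
    spans = json.loads(trace_json)["spans"]
    breaches = []
    for span in spans:
        budget = TARGETS_MS.get(span["lane"])
        if budget is not None and span["ms"] > budget:
            breaches.append((span["lane"], span["ms"]))
    return breaches
```

A breach list like this is what seeds the auto-opened ticket: lane, timing and (in the real pipeline) the build that introduced it.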

Stable Ring Targets

  • ≤ 1 hitch > 8 ms per 10 minutes.
  • < 1 MB / hour unbounded memory growth.
  • ≥ 99.9% crash-free sessions.
  • Save/load stress passes without schema loss.

When a gate flips, the ticket already points to the guilty spans and lane, so we fix the real cause instead of treating symptoms.

Saves & Migrations

Save Format

Forward-compatible tagged binary: little-endian, length-prefixed chunks with per-chunk checksums and a whole-file hash.
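A sketch of that layout — little-endian, length-prefixed chunks, each carrying its own checksum. CRC32 stands in here for the real per-chunk checksum, and the 4-byte tag scheme is illustrative:

```python
import struct
import zlib

def write_chunk(tag: bytes, payload: bytes) -> bytes:
    # tag (4 bytes) | length (u32 LE) | payload | crc32 of payload (u32 LE)
    assert len(tag) == 4
    return (tag + struct.pack("<I", len(payload)) + payload
                + struct.pack("<I", zlib.crc32(payload)))

def read_chunk(buf: bytes, offset=0):
    tag = buf[offset:offset + 4]
    (length,) = struct.unpack_from("<I", buf, offset + 4)
    payload = buf[offset + 8:offset + 8 + length]
    (crc,) = struct.unpack_from("<I", buf, offset + 8 + length)
    if zlib.crc32(payload) != crc:
        raise ValueError("chunk checksum mismatch")
    return tag, payload, offset + 12 + length
```

Because every chunk is length-prefixed, an old build can skip over tags it doesn't recognise — which is what makes the format forward-compatible.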

Migration Path

Migrations are pure functions exercised against golden saves. We can fix forward, change fields and adjust systems without breaking player progress.
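In sketch form, with made-up versions and field names: each migration is a pure function from one schema version to the next, chained until the save reaches the current version, and the golden inputs are never mutated:

```python
def migrate_v1_to_v2(save):
    out = dict(save)
    out["stamina"] = out.pop("energy")      # example: field renamed in v2
    out["version"] = 2
    return out

def migrate_v2_to_v3(save):
    out = dict(save)
    out.setdefault("difficulty", "normal")  # example: new field with a default
    out["version"] = 3
    return out

MIGRATIONS = {1: migrate_v1_to_v2, 2: migrate_v2_to_v3}

def migrate(save, target=3):
    while save["version"] < target:
        save = MIGRATIONS[save["version"]](save)
    return save
```

Running this chain over a shelf of golden saves in CI is what lets us change fields without breaking player progress.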

Security & Compliance

Build Integrity

Retail builds are signed, with separate signed symbol bundles. No dynamic code loading in shipping configurations.

WASM Capabilities

Browser/WASM builds run under a capability whitelist: no surprise network calls, no hidden file access beyond what the host page grants.

Why It Matters

Publisher checklists stay green and attack surface stays predictable. That pays off when games go from prototype to storefront.

Accessibility & Input

Input & Latency

Latency-aware remapping preserves early-sample guarantees while keeping touch/gamepad parity. Any input path that matters for play is measured, not guessed.

Ship-Blocking Tests

We maintain a blocking test row: 200% UI scale, protanopia palette, “Reduce Motion” ON. All core paths must be reachable via keyboard-only or single-switch setups.

Gate, Not Footnote

Accessibility is a gate in the same way performance and crashes are. If accessibility fails, the build doesn’t move forward.

Extensibility & Module Contracts

New systems land with a price tag and a fallback, not surprises.

Module Lifecycle

Each engine module follows a simple contract:

init(config) → tick(dt_fixed)* → render(view) → teardown()

CPU, GPU and memory targets are declared up front. Over-target behaviour demotes predictably and surfaces in the HUD.
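A minimal sketch of that contract — the base class, the hypothetical `Crowds` module, its cost model and the budget hook are all illustrative, not the engine's real interfaces:

```python
class Module:
    cpu_budget_ms = 1.0  # declared up front, per the contract

    def init(self, config): ...
    def tick(self, dt_fixed): ...
    def render(self, view): ...
    def teardown(self): ...

class Crowds(Module):
    cpu_budget_ms = 0.5

    def init(self, config):
        self.count = config["count"]
        self.demoted = False

    def tick(self, dt_fixed):
        spent_ms = 0.001 * self.count      # pretend per-agent cost model
        if spent_ms > self.cpu_budget_ms:  # over target: demote predictably
            self.count //= 2
            self.demoted = True
        return spent_ms
```

The demotion is the "price tag and fallback" in code: the module sheds load itself, and the HUD surfaces that it did.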

Sandboxing Experiments

Experimental systems go behind feature flags with their own budgets, so we can test them in rings without risking the stability of the main game.

Roadmap

WASM Subset Engine

GC-aware allocator, tiled rendering path and tighter memory ceilings for browser builds — so the web versions feel like native games, not afterthoughts.

AI Authoring Tools

Text-diffable behaviour logs, faster route creation and better visualisation of AI decisions, so designers can tune behaviour without spelunking through code.

Power Envelopes

Per-scene power caps so handhelds stay cool without losing feel: target wattage ranges, thermal-aware quality steps and better battery-life telemetry.

Spec Excerpt (Copy-Paste)

// Engine targets
// - Fixed sim (120 Hz Deck / 60 Hz PinePhone), decoupled render
// - Main-thread budget: 2 ms IO, 2.5 ms gameplay, 2 ms render submit
// - GPU lane: auto-demote at configured hitch threshold
// - Deterministic replay (within platform build): same seed + inputs ⇒ same outcome
// - Saves: forward-compatible; per-chunk CRC + whole-file hash
// - Gates (stable ring): ≤1 hitch >8 ms /10 min; 99.9% crash-free; no unbounded memory drift