Server-authoritative from day one
How the architecture is set up so anti-cheat is possible later without months of pain. Photon Fusion 2, Edgegap, and why client prediction lives next to server reconciliation.
There's a category of architectural decision in multiplayer games that you have roughly twelve weeks to make. After that the cost of changing your mind is months of rewrites, sometimes years of pain, and sometimes the project never recovers. Server-authoritative-everything is one of those decisions. Open Hours is built on it from the first commit. Here's why, and what it actually means in practice.
The principle, in one sentence
The client predicts what should happen for responsiveness. The server is the only authority on what actually happens. When they disagree, the server wins and the client snaps to the corrected state. That's it. Everything else is a consequence.
In a server-authoritative shooter, the client says "I think I shot them" and the server independently checks: did the client have line of sight? Was the target in range? Was the weapon off cooldown? Did the shot happen at a timestamp the server can validate against its world state? If any of those are false, the shot didn't happen — regardless of what the client rendered.
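Those four checks can be sketched as a single server-side validation function. This is an illustrative TypeScript sketch, not Open Hours' actual code — every name (`ShotRequest`, `WorldState`, the constants) is invented for the example:

```typescript
// Hypothetical server-side shot validation mirroring the four checks above.
interface ShotRequest {
  shooterId: string;
  targetId: string;
  clientTimestamp: number; // client clock, mapped into server time
}

interface PlayerState {
  position: { x: number; y: number };
  weaponReadyAt: number; // server time at which the cooldown expires
}

interface WorldState {
  serverTime: number;
  players: Map<string, PlayerState>;
  hasLineOfSight(a: string, b: string): boolean;
}

const WEAPON_RANGE = 30;       // illustrative values
const MAX_CLOCK_SKEW_MS = 250;

function validateShot(req: ShotRequest, world: WorldState): boolean {
  const shooter = world.players.get(req.shooterId);
  const target = world.players.get(req.targetId);
  if (!shooter || !target) return false;

  // 1. Timestamp the server can validate against its own clock
  if (Math.abs(world.serverTime - req.clientTimestamp) > MAX_CLOCK_SKEW_MS) return false;

  // 2. Weapon off cooldown, by the server's clock, not the client's
  if (world.serverTime < shooter.weaponReadyAt) return false;

  // 3. Target in range, measured against server-side positions
  const dx = shooter.position.x - target.position.x;
  const dy = shooter.position.y - target.position.y;
  if (Math.hypot(dx, dy) > WEAPON_RANGE) return false;

  // 4. Line of sight, checked against the server's world state
  return world.hasLineOfSight(req.shooterId, req.targetId);
}
```

If any check fails the function returns false and the shot simply never happened, regardless of what the client rendered.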
The opposite — client-authoritative — is what most casual or jam games ship with by default. The client says "I shot them, take 25 damage" and the server believes it. That feels great in development. Latency feels instant. Hit registration feels perfect. And then someone modifies their client and your game becomes unplayable.
Client predicts for responsiveness. Server holds the truth. Mismatch → server wins.
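The "mismatch → server wins" rule comes down to a small reconciliation step on the client. A minimal TypeScript sketch, with invented thresholds — not Open Hours' actual tuning:

```typescript
// Client-side reconciliation: small divergences from the server's
// authoritative state are blended away invisibly; large ones snap.
interface Vec2 { x: number; y: number }

const SNAP_THRESHOLD = 0.5; // above this, snap visibly (illustrative value)
const BLEND_FACTOR = 0.2;   // below it, drift toward the server over frames

function reconcile(predicted: Vec2, authoritative: Vec2): Vec2 {
  const dx = authoritative.x - predicted.x;
  const dy = authoritative.y - predicted.y;
  const error = Math.hypot(dx, dy);

  if (error > SNAP_THRESHOLD) {
    // Large divergence: the client snaps to the server's truth
    return { ...authoritative };
  }
  // Small divergence: correct invisibly over a few frames
  return {
    x: predicted.x + dx * BLEND_FACTOR,
    y: predicted.y + dy * BLEND_FACTOR,
  };
}
```

Note the asymmetry: the server never moves toward the client. Correction only flows one way.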
Why it has to be there from day one
Retrofitting authority is one of the hardest refactors in game development. It's not a code change — it's an inversion of where truth lives. Every system that touches gameplay state has to be rewritten:
- Combat: who decides damage, who decides hits, who decides death
- Movement: who validates positions, who handles teleport-detection, who arbitrates collisions
- Loot: who awards items, who decides drops, who validates pickups
- Currency: who mints, who deducts, who reconciles ledgers across sessions
- Cooldowns, mana, charges: who decrements, who refunds, who detects abuse
- Match results: who computes the score, who writes it back to the database
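The currency bullet is the clearest illustration of the inversion. A hypothetical server-side ledger, with made-up names and prices — the point is the shape, not the schema: the client request carries no amount, because the price and the balance both live on the server.

```typescript
// Hypothetical server-side ledger: the client names an item,
// the server decides everything else.
const PRICES: Record<string, number> = { health_potion: 25 };

class Ledger {
  private balances = new Map<string, number>();

  // Only server systems (match rewards, refunds) ever mint currency
  credit(playerId: string, amount: number): void {
    this.balances.set(playerId, (this.balances.get(playerId) ?? 0) + amount);
  }

  // The client's request is just an item id; no client-supplied price
  // or balance is ever trusted
  purchase(playerId: string, itemId: string): boolean {
    const price = PRICES[itemId];
    const balance = this.balances.get(playerId) ?? 0;
    if (price === undefined || balance < price) return false;
    this.balances.set(playerId, balance - price);
    return true;
  }

  balanceOf(playerId: string): number {
    return this.balances.get(playerId) ?? 0;
  }
}
```

A client-authoritative version of this is one modified packet away from infinite money. The server-authoritative version has nothing for the cheater to say.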
If you ship without this, every cheater that connects to your game has unilateral access to all of those systems. Your "anti-cheat" is then a posture: detecting modified clients after the fact, banning, hoping. The architecture you'd actually need to prevent the cheat doesn't exist — and bolting it on later means rewriting all of the above. That's the months-of-pain answer.
By building authoritative from day one, anti-cheat integration later becomes a config-level decision (Easy Anti-Cheat, BattlEye, etc.) rather than an architectural one. The pipes are already laid. We just haven't connected the alarm system yet.
How Photon Fusion 2 fits in
Open Hours uses Photon Fusion 2 for netcode. The honest reason: it ships the loop fast.
Fusion has a clean authority model — every NetworkObject has an explicit StateAuthority — and the [Networked]-property + RPC_ pattern makes it obvious whether you're predicting or asserting. There are tradeoffs (Fusion isn't tuned for sub-16ms competitive FPS netcode the way custom solutions are), and we plan to re-evaluate at Month 6 if the game proves out. But for Week 1, it ships the loop and enforces the discipline.
The pattern looks roughly like this. Client predicts a movement input. The server runs the same simulation with the authoritative state. If they diverge, the client gets reconciled — usually invisibly, sometimes with a visible snap if the divergence is large enough. The server's view is what writes back to the persistent layer.
```csharp
// Client: predicts immediately
public override void FixedUpdateNetwork() {
    if (GetInput(out NetworkInputData input)) {
        // Local prediction renders instantly
        controller.Move(input.movement * speed * Runner.DeltaTime);
    }
}

// Server: validates the shot before any damage is applied
[Rpc(RpcSources.InputAuthority, RpcTargets.StateAuthority)]
public void RPC_RequestShot(Vector3 origin, Vector3 direction) {
    // Server-side checks: LoS, range, cooldown, anti-teleport, rate limiting
    if (!ValidateShot(origin, direction)) return;
    ApplyDamage(...);
}
```

Edgegap, and the vendor-abstraction lesson
The match server itself runs on Edgegap — Linux dedicated servers in Docker containers, deployed per-match on demand. This wasn't the original plan. The first version of the design doc had Hathora as the match-hosting layer.
Hathora got acquired by Fireworks AI in early 2026, and the gaming platform shut down on May 5. We were a week from starting Week 1. The migration to Edgegap took a day, and the reason it took a day instead of two weeks is that the deployment layer was abstracted from the start. The match server is just a Docker container with a defined start command. The thing that spawns that container can swap — Hathora, Edgegap, GameFabric, self-hosted Coolify — without touching the game code.
There's exactly one Supabase Edge Function that calls Edgegap directly: match-spawn. That's the seam. Everything else thinks it's calling a generic "spin up a match server" API. The cost of that abstraction up front was about 100 extra lines of code. The cost of skipping it would have been a week of rewriting in a panic.
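The seam can be sketched as an interface with one vendor-specific adapter. This is an illustrative TypeScript sketch: the interface name, the endpoint URL, and the payload shape are all assumptions, not Edgegap's actual API or Open Hours' real code.

```typescript
// The seam: game code depends on this interface, never on a vendor.
interface DeployResult { ip: string; port: number }

interface MatchDeployer {
  deploy(region: string): Promise<DeployResult>;
}

// The one adapter that knows about Edgegap (endpoint and payload
// are hypothetical; the real surface lives inside match-spawn).
class EdgegapDeployer implements MatchDeployer {
  constructor(private apiKey: string, private appName: string) {}

  async deploy(region: string): Promise<DeployResult> {
    const res = await fetch("https://deploy-api.example.test/v1/deploy", {
      method: "POST",
      headers: { authorization: this.apiKey },
      body: JSON.stringify({ app: this.appName, region }),
    });
    const body = await res.json();
    return { ip: body.ip, port: body.port };
  }
}

// Swapping vendors means writing another adapter, not touching game code.
class StaticDeployer implements MatchDeployer {
  async deploy(_region: string): Promise<DeployResult> {
    return { ip: "127.0.0.1", port: 7777 }; // e.g. a self-hosted box
  }
}
```

The Hathora-to-Edgegap migration was, in this framing, one new adapter class plus a config change.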
Match flow: queue → Edge Function → Edgegap deploys container → 8 clients connect → match server is the truth.
What this looks like in practice
The full request path for a match looks like this:
- Client clicks "Queue Arena" in the town
- Client tells the matchmaking layer "I want to play Arena"
- Matchmaking finds 7 other humans (or fills with bots after 60s)
- Matchmaking calls the match-spawn Edge Function on Supabase
- Edge Function calls Edgegap's API with a region preference
- Edgegap deploys a new container in <5s, returns an IP + port
- Edge Function returns connection details to the matchmaking layer
- All 8 clients connect via Photon Fusion 2 to that match server
- Match plays out under server authority
- On match end, server writes results to Supabase, fires telemetry, kills the container
- Players return to town with results in hand
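Step 3's fill rule — 8 players, topped up with bots after 60 seconds — is small enough to sketch exactly. Names and structure here are illustrative, not the real matchmaker:

```typescript
// Fill rule: a full human lobby launches immediately; a partial lobby
// waits out the window, then tops up with bots.
const LOBBY_SIZE = 8;
const FILL_TIMEOUT_MS = 60_000;

type Slot = { kind: "human"; id: string } | { kind: "bot" };

// Returns null while we should keep waiting, otherwise a full lobby.
function resolveLobby(humans: string[], waitedMs: number): Slot[] | null {
  const slots: Slot[] = humans
    .slice(0, LOBBY_SIZE)
    .map((id): Slot => ({ kind: "human", id }));
  if (slots.length === LOBBY_SIZE) return slots; // full human lobby
  if (waitedMs < FILL_TIMEOUT_MS) return null;   // keep waiting
  while (slots.length < LOBBY_SIZE) slots.push({ kind: "bot" });
  return slots;                                  // bot-filled lobby
}
```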
The bartender, when you walk back into town, knows what just happened — because the match server wrote it to Supabase before the container died. That's the loop closing.
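The write-back in step 10 is the part worth being careful about: the result row is built purely from the server's authoritative state, and the container only dies after the insert succeeds. A sketch, assuming a supabase-js client and an invented `match_results` table — not Open Hours' real schema:

```typescript
// The match server serializes its authoritative result and persists it
// before exiting. Table and column names are assumptions.
interface MatchResult {
  matchId: string;
  scores: Record<string, number>; // playerId -> final score
  endedAt: string;                // ISO timestamp
}

// Pure step: built only from server-side state, never client claims.
function buildResultRow(matchId: string, scores: Record<string, number>): MatchResult {
  return { matchId, scores, endedAt: new Date().toISOString() };
}

// Persistence step, roughly (supabase-js; table name illustrative):
//
//   const supabase = createClient(SUPABASE_URL, SERVICE_ROLE_KEY);
//   await supabase.from("match_results").insert(buildResultRow(id, scores));
//
// Only after the insert resolves does the process exit and the
// container get reclaimed -- which is why the bartender can see it.
```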
The cost of doing this up front
Server-authoritative architecture is more code, more careful design, more places where you have to ask "who owns this state, and who's allowed to modify it?" It's slower to ship the first prototype than a client-authoritative version of the same thing.
The payoff is everything that comes after. Anti-cheat integration becomes config. Disconnect/reconnect becomes a recoverable session-token problem instead of a state-loss problem. Match-result integrity is free. Anti-fraud on the eventual cosmetics economy is free. And the project never has to go through the rewrite that haunts every shooter that didn't make this call early.
Three of the four Week 1 non-negotiables flow from this decision. The fourth — tiered NPC inference — is a different shape of problem, and the next devlog is about how we make AI dialogue work without bankrupting the project.
No drip campaigns, no marketing fluff. Just the next real thing.