Role | Gameplay Systems, Gameplay Networking Systems
Team Size | 20
SNIPERPUNK is an upcoming 1v1 bullet-hell roguelike game developed by Enter CTRL. In this game, players select from a roster of unique "punks," each hailing from distinct universes—such as Cyberpunk, Steampunk, Decopunk, and Raypunk—and equipped with their own abilities and weapons. The objective is to engage in intense duels within the Stellar Nexus to determine the ultimate champion.
SNIPERPUNK is still in development and is coming soon to Steam.
For more information and updates, visit the game's official website.
Working on Sniperpunk with a remote team of ~20 posed numerous technical and collaboration challenges. The team had to ensure smooth character behavior, robust systems architecture, synchronized multiplayer gameplay, flexible yet efficient item design, and effective team coordination. Below, we discuss each challenge and the best practices used to overcome them.
Challenges: Sniperpunk’s characters required fluid movement and actions, which meant designing a finite state machine for each character’s behavior (idle, run, shoot, reload, etc.). Managing transitions between these states was tricky – e.g. handling a jump while shooting or canceling a reload midway. Edge cases (like overlapping inputs or getting stuck in a state) could easily occur if transitions weren’t well-defined. The team also needed to keep controls responsive, so that player input wasn’t ignored due to rigid state rules.
Solutions & Best Practices:
Clear Transition Definitions: Every state was given strict entry/exit conditions and documented triggers for switching states. This prevented ambiguous situations where the character might get “stuck” between states. Well-defined transitions and guard conditions ensured no unexpected or undefined state could occur. For example, a “jump” input is only accepted in allowed states (like Idle or Running), and each transition is checked so that illegal state combinations are impossible (see the sketch after this list).
Responsive Input Handling: To maintain snappy gameplay, the state machine allowed certain transitions to occur immediately on player input (sometimes by using higher-priority transitions or interrupts). For example, if a player pressed a dodge or block, it could interrupt a non-critical animation state so the character responds without delay. The team also implemented input buffering for short windows – if the player pressed an action slightly before a state ended, the input would still register on the earliest possible frame. These techniques ensured the controls felt responsive and players weren’t frustrated by unresponsive states.
Thorough Testing of Edge Cases: Developers wrote tests and did extensive playtesting to catch state edge cases. They simulated rapid input sequences (jumping at the exact moment of landing, etc.) to ensure the state machine handled them gracefully. By proactively testing all state transitions, the team prevented unpredictable behavior during gameplay. Any discovered edge case (like an animation not finishing or an impossible double-transition) was fixed by adjusting the state logic or adding a fallback transition (for instance, a failsafe that returns the character to a default state if an invalid state persisted).
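To make the idea concrete, here is a minimal, engine-agnostic sketch of a transition table combined with a short input buffer. It is illustrative only, not Sniperpunk's actual code: the state names, the TRANSITIONS table, and the 0.15-second BUFFER_WINDOW are assumptions chosen for the example.

```python
from enum import Enum, auto

class PlayerState(Enum):
    IDLE = auto()
    RUNNING = auto()
    JUMPING = auto()
    RELOADING = auto()

# Transition table: each state lists the inputs it accepts and the state each
# input leads to. Anything not listed is rejected, so illegal combinations
# simply cannot happen.
TRANSITIONS = {
    PlayerState.IDLE:      {"run": PlayerState.RUNNING, "jump": PlayerState.JUMPING, "reload": PlayerState.RELOADING},
    PlayerState.RUNNING:   {"stop": PlayerState.IDLE, "jump": PlayerState.JUMPING, "reload": PlayerState.RELOADING},
    PlayerState.JUMPING:   {"land": PlayerState.IDLE},
    PlayerState.RELOADING: {"cancel": PlayerState.IDLE, "finished": PlayerState.IDLE},
}

BUFFER_WINDOW = 0.15  # seconds an early input stays valid (assumed tuning value)

class PlayerStateMachine:
    def __init__(self):
        self.state = PlayerState.IDLE
        self.buffered = None  # (input_name, timestamp) of a rejected-but-recent input

    def handle_input(self, name: str, now: float) -> bool:
        """Apply the input if the current state allows it; otherwise buffer it briefly."""
        target = TRANSITIONS[self.state].get(name)
        if target is not None:
            self.state = target
            return True
        self.buffered = (name, now)
        return False

    def update(self, now: float) -> None:
        """Per-frame tick: fire a buffered input on the earliest frame it becomes legal."""
        if self.buffered is None:
            return
        name, stamp = self.buffered
        if now - stamp > BUFFER_WINDOW:
            self.buffered = None                         # too old, drop it
        elif TRANSITIONS[self.state].get(name) is not None:
            self.buffered = None
            self.handle_input(name, now)

# A jump pressed during a reload is buffered, then fires as soon as the reload ends.
fsm = PlayerStateMachine()
fsm.handle_input("reload", 0.00)
fsm.handle_input("jump", 0.05)      # not legal while reloading -> buffered
fsm.handle_input("finished", 0.10)  # reload ends -> back to IDLE
fsm.update(0.10)                    # buffered jump applies on this frame
assert fsm.state == PlayerState.JUMPING
```

The key property is that anything not listed in the table is either rejected or briefly buffered, so the machine can never wander into an undefined state combination, while buffered inputs keep the controls feeling responsive.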
Challenges: The game’s architecture relied on an event system to decouple gameplay systems. With many team members adding features, events proliferated (for things like “player took damage”, “shot fired”, “ability activated”). The challenges were maintaining scalability (so the event system could handle dozens of event types and many listeners), preventing conflicts (e.g. two systems responding in unintended ways or events overwriting each other), and ensuring efficient event dispatch (to avoid slowdowns or memory leaks).
Solutions & Best Practices:
Consistent Event Naming and Usage: The team established naming conventions and documentation for events to avoid duplication or confusion. Each event had a clearly defined purpose and payload structure. This prevented conflicts such as two events with similar meanings or misusing an event for multiple purposes. It also helped new team members quickly understand what each event did. (For instance, “PlayerShot” and “PlayerTakeDamage” are distinct events with different semantics, which the team documented.)
Avoiding Unnecessary Event Spam: The developers were careful to use events only when appropriate. High-frequency updates (like a character’s position every frame) were not sent as events but handled through direct function calls or optimized update loops, since sending thousands of events per second would be inefficient. In fact, simply dispatching events is rarely a bottleneck – if the event system slows down, it’s often a sign of too many events or misusing events for things better handled by direct code. Following this principle, critical real-time logic stayed in regular update methods, while events were used for less-frequent or decoupled notifications.
Memory Management – Unsubscribe to Prevent Leaks: A major best practice was making sure to unsubscribe event listeners when objects were destroyed or when they no longer needed to listen. The team created helper functions to unregister listeners on level unload or object removal. This diligence ensured that, for example, a UI element that was removed would not continue receiving game events in the background.
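A stripped-down version of such an event hub might look like the following. This is an illustrative sketch rather than the project's real event system; the EventBus class, the "PlayerTookDamage" event name, and the HealthBarUI listener are invented for the example.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal publish/subscribe hub: one dispatch point, listeners keyed by event name."""
    def __init__(self):
        self._listeners = defaultdict(list)

    def subscribe(self, event_name: str, handler: Callable) -> None:
        self._listeners[event_name].append(handler)

    def unsubscribe(self, event_name: str, handler: Callable) -> None:
        # Removing the handler when its owner is destroyed prevents "dead"
        # listeners from piling up and leaking memory.
        if handler in self._listeners[event_name]:
            self._listeners[event_name].remove(handler)

    def publish(self, event_name: str, payload: dict) -> None:
        # Copy the list so a handler that unsubscribes mid-dispatch doesn't break iteration.
        for handler in list(self._listeners[event_name]):
            handler(payload)

# Usage, following the documented naming convention ("PlayerTookDamage", not ad-hoc strings).
bus = EventBus()

class HealthBarUI:
    def __init__(self, bus: EventBus):
        self._bus = bus
        bus.subscribe("PlayerTookDamage", self.on_damage)

    def on_damage(self, payload: dict):
        print(f"HP bar updates: -{payload['amount']}")

    def destroy(self):
        # Called on level unload / object removal, mirroring the helpers described above.
        self._bus.unsubscribe("PlayerTookDamage", self.on_damage)

ui = HealthBarUI(bus)
bus.publish("PlayerTookDamage", {"amount": 12})
ui.destroy()
bus.publish("PlayerTookDamage", {"amount": 7})   # no listener left, nothing happens
```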
Challenges: Sniperpunk features online 1v1 matches, so keeping both clients in sync was critical. The team faced consistency issues (making sure each player sees the same game world without divergence), latency problems (network delay could cause stutters or delayed actions), and the need for network performance optimization (minimizing bandwidth and ensuring smooth gameplay even on slower connections). Fast-paced bullet-hell gameplay meant even slight desynchronization or lag could ruin the experience.
Solutions & Best Practices:
Authoritative Server Model: The game uses a server-authoritative architecture – either a dedicated server or one of the clients acting as host. This means the server is the single source of truth for the game state, preventing clients from ever disagreeing on what happened. All critical state updates (player positions, health, bullet trajectories) are decided server-side and then broadcast to clients. Even if a client is hacked or lagging, it can’t enforce a different game state because the server’s version prevails. This ensured consistency: both players always eventually see the same outcome of each action, eliminating advantages from cheating or packet loss (at worst, a laggy client’s actions might be delayed or corrected, but never divergent).
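The sketch below illustrates the principle with a hypothetical movement check: the client only sends an intent, and the server clamps anything physically impossible before broadcasting the result. The AuthoritativeServer class and the MAX_DASH_DISTANCE limit are assumptions made up for this example, not the game's real netcode.

```python
import math

MAX_DASH_DISTANCE = 3.0   # assumed server-side movement limit per tick

class AuthoritativeServer:
    """The server owns the real game state; clients only send intents."""
    def __init__(self):
        self.positions = {"p1": (0.0, 0.0), "p2": (10.0, 0.0)}
        self.health = {"p1": 100, "p2": 100}

    def handle_move(self, player: str, requested: tuple) -> dict:
        # Clamp impossible moves instead of trusting the client's claimed position.
        x0, y0 = self.positions[player]
        dx, dy = requested[0] - x0, requested[1] - y0
        dist = math.hypot(dx, dy)
        if dist > MAX_DASH_DISTANCE:
            scale = MAX_DASH_DISTANCE / dist
            requested = (x0 + dx * scale, y0 + dy * scale)
        self.positions[player] = requested
        return self.snapshot()          # broadcast the corrected truth to both clients

    def snapshot(self) -> dict:
        return {"positions": dict(self.positions), "health": dict(self.health)}

server = AuthoritativeServer()
# A tampered client claims it teleported across the arena; the server clamps the move.
print(server.handle_move("p1", (50.0, 0.0)))   # p1 ends up at (3.0, 0.0), not (50.0, 0.0)
```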
Client-Side Prediction: To combat latency and keep controls feeling instant, Sniperpunk implemented client-side prediction for player actions. When you input a movement or shot, your own client immediately simulates it locally before confirmation from the server. The game was designed to be deterministic (given the same input, the outcome is predictable), so the client’s prediction would usually match what the server would later say. This eliminated the noticeable delay between pressing a button and seeing the effect – your character moves and shoots without waiting for a round-trip to the server. For example, if you dash away from a bullet on your screen, the dash happens right away locally, making the game responsive even if 100ms of network latency exists. (A combined sketch of prediction and the reconciliation described next follows below.)
Server Reconciliation: Of course, sometimes the client’s prediction might differ slightly from the server’s authoritative result (due to timing or unexpected events). To handle this, the client listens for server updates and reconciles any differences. If the server corrects a player’s position or other state, the client will smoothly adjust to it. The team ensured that these corrections were as seamless as possible – small differences were interpolated to avoid jarring pops. For instance, if your client predicted you were at position X=5 but server says X=4 (maybe you hit an obstacle server-side), the game would move your character the short distance back in a barely noticeable manner. This reconciliation process fixed mismatches so the clients stayed in sync without the player feeling major “snaps” in position.
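Prediction and reconciliation are easiest to see together, so the following sketch combines both, as referenced above: the client applies numbered inputs immediately, and when an authoritative snapshot arrives it resets to the server's position and replays the inputs the server has not yet acknowledged. The PredictingClient class, the sequence-number scheme, and the movement constant are illustrative assumptions, not the shipped implementation.

```python
from dataclasses import dataclass

MOVE_SPEED = 1.0   # world units per input step (assumed)

@dataclass
class Input:
    seq: int      # sequence number so the server can acknowledge inputs
    dx: float     # requested movement for this step

class PredictingClient:
    """Applies inputs locally right away, remembers them, and re-applies the
    unacknowledged ones whenever an authoritative snapshot arrives."""
    def __init__(self):
        self.x = 0.0
        self.pending = []   # inputs sent but not yet acknowledged by the server
        self.next_seq = 0

    def press_move(self, dx: float) -> Input:
        inp = Input(self.next_seq, dx)
        self.next_seq += 1
        self.x += dx * MOVE_SPEED          # predict immediately: no wait for the server
        self.pending.append(inp)
        return inp                          # this is what gets sent over the network

    def on_server_snapshot(self, server_x: float, last_acked_seq: int) -> None:
        # Start from the server's authoritative position...
        self.x = server_x
        # ...drop inputs the server has already processed...
        self.pending = [i for i in self.pending if i.seq > last_acked_seq]
        # ...and replay the rest so local prediction stays ahead of the snapshot.
        for inp in self.pending:
            self.x += inp.dx * MOVE_SPEED

client = PredictingClient()
client.press_move(1.0)    # predicted: x = 1.0
client.press_move(1.0)    # predicted: x = 2.0
# Server says: after input #0 you were only at 0.8 (you clipped an obstacle server-side).
client.on_server_snapshot(server_x=0.8, last_acked_seq=0)
print(round(client.x, 2))  # 1.8 – corrected, with the unacknowledged input #1 replayed
```

In the real game the small residual error would typically be smoothed over a few frames rather than snapped instantly, as described above, so the player never feels the correction.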
Network Traffic Optimization: The team optimized the amount and frequency of data sent across the network to maintain performance. Only essential gameplay data was synchronized – for example, player inputs, critical state changes, and periodic snapshots of positions. Less critical details (like purely cosmetic effects) were handled locally or with less frequent updates. They also compressed data where possible, using small data types for things like coordinates and state flags. By sending concise, fixed-size packets at a steady tick rate, they reduced bandwidth usage and avoided spikes.
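As an example of what "concise, fixed-size packets" can mean in practice, the sketch below packs one position snapshot into 10 bytes by quantizing coordinates to centimeters. The exact layout (SNAPSHOT_FORMAT, the field choices, the centimeter quantization) is an assumption for illustration, not Sniperpunk's actual wire format.

```python
import struct

# Assumed packet layout: tick (uint32), x and y quantized to centimeters (int16 each),
# health (uint8), state flags (uint8) -> 10 bytes per snapshot instead of a bulky
# text or object payload.
SNAPSHOT_FORMAT = "<IhhBB"

def pack_snapshot(tick: int, x: float, y: float, health: int, flags: int) -> bytes:
    # Quantize world units (meters) to centimeters so they fit in 16-bit integers.
    return struct.pack(SNAPSHOT_FORMAT, tick,
                       int(round(x * 100)), int(round(y * 100)), health, flags)

def unpack_snapshot(data: bytes):
    tick, qx, qy, health, flags = struct.unpack(SNAPSHOT_FORMAT, data)
    return tick, qx / 100.0, qy / 100.0, health, flags

packet = pack_snapshot(tick=240, x=3.27, y=-1.5, health=85, flags=0b0000_0010)
print(len(packet))              # 10 bytes
print(unpack_snapshot(packet))  # (240, 3.27, -1.5, 85, 2)
```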
Challenges: Sniperpunk features a roguelike progression with various items and buffs that players can pick each round. The design team needed to easily create and tweak these items (e.g. new weapons, power-ups, temporary buffs) without programmer intervention for each new asset. Thus, the system had to be flexible and designer-friendly. At the same time, it needed to be efficient at runtime – too much flexibility (like heavy scripting for each item) could hurt performance or complicate the codebase. The challenge was balancing flexibility with performance, ensuring that adding new content was easy but the game still ran smoothly.
Solutions & Best Practices:
Optimized Buff Application: The buff system was built with performance in mind. Instead of every buff running heavy logic every frame, most buffs were event-driven or time-triggered. When applied, a buff might modify some character stats upfront (for instance, +10% damage) and set a timer. During gameplay, the system doesn’t necessarily tick every buff individually in an expensive way; it can simply check timers periodically or use a single manager to decrement buff durations. By fine-tuning when and how buff effects are calculated, the team ensured the flexibility didn’t lead to lag even if players stacked many buffs at once.
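A minimal sketch of such a centralized, timer-driven buff manager is shown below. The Buff and BuffManager classes, the stat dictionary, and the +10% damage example are assumptions used for illustration; the real system's data layout may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Buff:
    name: str
    stat: str            # which stat the buff touches, e.g. "damage"
    multiplier: float    # applied once when the buff is added
    duration: float      # seconds
    remaining: float = field(init=False)

    def __post_init__(self):
        self.remaining = self.duration

class BuffManager:
    """One manager owns every active buff: stats are modified once on apply,
    restored once on expiry, and only the timers are ticked each frame."""
    def __init__(self, stats: dict):
        self.stats = stats
        self.active = []

    def apply(self, buff: Buff) -> None:
        self.stats[buff.stat] *= buff.multiplier   # pay the cost up front, not per frame
        self.active.append(buff)

    def tick(self, dt: float) -> None:
        expired = []
        for buff in self.active:
            buff.remaining -= dt
            if buff.remaining <= 0:
                expired.append(buff)
        for buff in expired:
            self.stats[buff.stat] /= buff.multiplier  # undo the modifier exactly once
            self.active.remove(buff)

stats = {"damage": 100.0}
manager = BuffManager(stats)
manager.apply(Buff("Overclock", "damage", 1.10, duration=5.0))  # +10% damage
print(round(stats["damage"], 2))   # 110.0 while the buff is active
manager.tick(5.0)                  # five seconds later the buff expires
print(round(stats["damage"], 2))   # back to 100.0
```

Because the per-frame work is just decrementing a few timers in one place, stacking many buffs stays cheap, which is the trade-off described above.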
Conclusion: Working on Sniperpunk with a large remote team required tackling a variety of technical challenges with smart engineering solutions, while also implementing strong collaboration practices. By using clear state machines, a scalable event system, robust networking techniques, flexible data-driven item systems, and agile remote workflows, the team was able to overcome these challenges. The result was a responsive, feature-rich game and a development process that kept the team coordinated despite the distance. Each challenge strengthened the project’s foundation: the gameplay systems became more resilient and tunable, and the team became a well-oiled, communicative unit. These lessons and best practices can be applied to many game projects developed by distributed teams.