The architecture detailed in this document aims to support a complex and dynamic agent simulation. Ensuring that this simulation is both deterministic (for consistent replays and debugging) and performant (to allow for many interacting agents) is crucial. This section outlines the key considerations and strategies employed.
Maintaining Determinism within the Scheduled/Event-Driven Model
The primary goal for determinism in this phase of design is to be “deterministic enough for single-player replays on the same machine/build.” This means that given the same initial game state (including PRNG seeds) and the same sequence of player inputs, the simulation should unfold identically every time it is run on that specific compiled version of the game. Key practices embedded in or assumed by this architecture to achieve this include:
- Fixed Simulation Tick Rate: All simulation logic advances in discrete, predictable steps, providing a consistent temporal framework.
- Seeded Pseudo-Random Number Generators (PRNGs): All sources of randomness that can affect simulation state (e.g., in appraisal variations, probabilistic event outcomes) must use a PRNG initialized with a seed that is part of the game’s initial state and is saved/loaded.
- Fixed Order of System Execution: The sequence in which core ECS systems (Perception, Need Management, Goal Generation, etc.) are updated each tick must be strictly defined and consistent.
- Deterministic Event Processing:
- Events dispatched by the Universal Scheduler for a given tick are processed in a deterministic order. If multiple events are scheduled for the exact same tick, a stable tie-breaking rule (e.g., event priority, then source entity ID, then event creation sequence number) must be applied; a sketch of such an ordering follows this list.
- The internal logic of all event handlers and systems must be deterministic, producing the same output state given the same input state and event data.
- Controlled Floating-Point Arithmetic: While perfect cross-platform bit-for-bit floating-point determinism is a more advanced goal, for single-machine determinism it is highly recommended to use strict compiler flags (e.g., `/fp:strict` or equivalents) to prevent aggressive optimizations that might alter outcomes between different builds (Debug vs. Release) or even across minor code changes on the same machine. Operations on floats where order matters (e.g., summing many values) should be performed in a consistent order.
- No Reliance on External Non-Deterministic Factors: Core simulation logic must not depend on external factors such as wall-clock time, precise thread scheduling nuances (for logic within a single tick), or uninitialized memory for its state calculations.
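As a concrete illustration of the tie-breaking rule above, the following C++ sketch orders same-tick events by priority, then source entity ID, then creation sequence number. The `SimEvent` fields and the `EventQueue` alias are illustrative assumptions rather than the scheduler's actual API; the essential property is that ordering depends only on data carried by the event itself, never on container internals or insertion timing.

```cpp
#include <cstdint>
#include <queue>
#include <vector>

struct SimEvent {
    uint64_t tick;            // simulation tick the event fires on
    int32_t  priority;        // lower value = handled earlier (assumed convention)
    uint64_t sourceEntityId;  // entity that scheduled the event
    uint64_t sequenceNumber;  // drawn from a global counter at creation time
    // ... event-specific payload ...
};

// Strict weak ordering meaning "a fires after b". Because std::priority_queue
// pops its greatest element under the comparator, top() is always the
// earliest-tick, highest-priority, lowest-ID, lowest-sequence event, so two
// events due on the same tick are always handled in the same relative order.
struct LaterThan {
    bool operator()(const SimEvent& a, const SimEvent& b) const {
        if (a.tick != b.tick)                     return a.tick > b.tick;
        if (a.priority != b.priority)             return a.priority > b.priority;
        if (a.sourceEntityId != b.sourceEntityId) return a.sourceEntityId > b.sourceEntityId;
        return a.sequenceNumber > b.sequenceNumber;  // unique, final tie-breaker
    }
};

using EventQueue = std::priority_queue<SimEvent, std::vector<SimEvent>, LaterThan>;
```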
Strategies for Managing Event Cascades and Performance
The event-driven and scheduled nature of the architecture is designed for efficiency, but it also introduces the possibility of event cascades (one event triggering others within the same tick). Performance also depends on efficient data management.
- Managing Event Cascades (for same-tick processing):
- Event Budgeting per Tick: The main simulation loop should enforce a budget on event processing for each tick, either by limiting the total number of events processed or the cumulative time spent. If this budget is exceeded, remaining events might be deferred to the next tick, breaking immediate cascades. The “hard tick rate” supports this by requiring the tick’s processing to complete within a fixed timeframe.
- Cascade Depth Limits: A hard cap can be implemented on how many “internal” or consecutively triggered events can be processed within a single tick originating from a single initial trigger. This acts as a circuit breaker for runaway loops; a combined sketch of budgeting and depth limits follows this list.
- Careful System Design: Systems should be designed to avoid unintentional tight feedback loops. State changes should have clear conditions that prevent them from immediately re-triggering the same reactive logic without some mediating factor or delay.
- Action Durations & Scheduled Completion: Many agent actions (like movement or complex interactions) naturally take multiple ticks to complete. Their outcomes are often scheduled as future events, inherently breaking immediate feedback cycles.
- Performance of Agent Data Stores:
- The `AgentMemoryStore` and `AgentBeliefStore` components, which internally manage collections of `MemoryEntry` and `BeliefEntry` structures, are designed to hold significant amounts of data per agent. Efficient management strategies (e.g., pruning/forgetting less significant or older entries, as discussed in Sections 3.2.2 and 3.3.2) are crucial to prevent unbounded memory growth and maintain performant querying; a pruning sketch follows this list.
- Detailed optimization of query patterns and potential internal indexing within these stores is a subject for further refinement as agent complexity and numbers scale.
- Scheduling Efficiency: The core principle of scheduling agent-specific events (e.g., for need thresholds, action completions) only when necessary, rather than polling every agent every tick for every possible state change, is a fundamental performance advantage of this architecture, allowing the simulation to scale to a larger number of less frequently “active” agents; a threshold-scheduling sketch follows this list.
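The following sketch shows one way the event budget and cascade depth cap described above could sit in the main tick loop. `processTick`, `handleEvent`, the `cascadeDepth` field, and the `kMaxEventsPerTick` / `kMaxCascadeDepth` constants are all assumed names and tuning values for illustration, not the documented implementation.

```cpp
#include <cstdint>
#include <deque>

struct QueuedEvent {
    uint64_t tick;           // tick the event is due to fire on
    uint32_t cascadeDepth;   // 0 for externally scheduled events
    // ... event payload ...
};

constexpr uint32_t kMaxEventsPerTick = 1024;  // assumed tuning value
constexpr uint32_t kMaxCascadeDepth  = 8;     // assumed circuit-breaker value

// Illustrative handler stub: a real handler would dispatch on event type.
// Follow-up events it emits for the same tick carry cascadeDepth + 1; once
// the depth cap is hit they are pushed to the next tick instead.
void handleEvent(const QueuedEvent& ev, uint64_t currentTick,
                 std::deque<QueuedEvent>& sameTickEvents,
                 std::deque<QueuedEvent>& nextTickEvents) {
    // Example follow-up emission (commented out because this is a stub):
    // QueuedEvent followUp{currentTick, ev.cascadeDepth + 1};
    // if (followUp.cascadeDepth <= kMaxCascadeDepth)
    //     sameTickEvents.push_back(followUp);
    // else { followUp.tick = currentTick + 1; nextTickEvents.push_back(followUp); }
    (void)ev; (void)currentTick; (void)sameTickEvents; (void)nextTickEvents;
}

// Drains the events due on currentTick, enforcing the per-tick budget.
// Anything past the budget is deferred to the next tick, which breaks
// immediate cascades without dropping work.
void processTick(uint64_t currentTick, std::deque<QueuedEvent>& dueEvents,
                 std::deque<QueuedEvent>& nextTickEvents) {
    uint32_t processed = 0;
    while (!dueEvents.empty()) {
        QueuedEvent ev = dueEvents.front();
        dueEvents.pop_front();

        if (processed >= kMaxEventsPerTick) {
            ev.tick = currentTick + 1;      // defer: budget for this tick is spent
            nextTickEvents.push_back(ev);
            continue;
        }

        handleEvent(ev, currentTick, dueEvents, nextTickEvents);
        ++processed;
    }
}
```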
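For the data-store pruning mentioned under “Performance of Agent Data Stores,” a minimal sketch is given below. The `significance` and `creationTick` fields, the limit constants, and the two-pass prune policy are assumptions for illustration; the authoritative forgetting rules are the ones discussed in Sections 3.2.2 and 3.3.2.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct MemoryEntry {
    uint64_t creationTick;
    float    significance;   // higher = more important to keep
    // ... memory payload ...
};

class AgentMemoryStore {
public:
    void prune(uint64_t currentTick) {
        constexpr uint64_t kMaxAgeTicks     = 100000;  // assumed tuning values
        constexpr float    kMinSignificance = 0.1f;
        constexpr size_t   kMaxEntries      = 512;

        // Pass 1: forget memories that are both old and insignificant.
        entries_.erase(std::remove_if(entries_.begin(), entries_.end(),
            [&](const MemoryEntry& e) {
                return currentTick - e.creationTick > kMaxAgeTicks &&
                       e.significance < kMinSignificance;
            }), entries_.end());

        // Pass 2: if still over capacity, keep only the most significant entries.
        // Ties on significance are resolved however nth_element orders them,
        // which is deterministic for the same input on the same build; add a
        // creationTick tie-break if a stricter rule is wanted.
        if (entries_.size() > kMaxEntries) {
            std::nth_element(entries_.begin(), entries_.begin() + kMaxEntries,
                             entries_.end(),
                             [](const MemoryEntry& a, const MemoryEntry& b) {
                                 return a.significance > b.significance;
                             });
            entries_.resize(kMaxEntries);
        }
    }

private:
    std::vector<MemoryEntry> entries_;
};
```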
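Finally, the “schedule only when necessary” principle can be illustrated with a threshold-crossing prediction: rather than polling a need every tick, the system computes the tick at which the need will cross its threshold under its current decay rate and schedules a single wake-up event for that tick. The names and the linear-decay assumption below are illustrative only; if anything else changes the need earlier, the previously scheduled event has to be cancelled or re-validated when it fires.

```cpp
#include <cmath>
#include <cstdint>

struct NeedState {
    float current;         // e.g., hunger level, 0..100
    float threshold;       // value at which the agent should react
    float decayPerTick;    // deterministic, linear decay assumed for the sketch
};

// Returns the first future tick at which the need falls to/below its threshold,
// assuming nothing else changes it before then. The caller schedules exactly one
// event for that tick instead of re-checking the need every tick.
uint64_t predictThresholdCrossingTick(const NeedState& need, uint64_t currentTick) {
    if (need.current <= need.threshold) return currentTick;   // already crossed
    if (need.decayPerTick <= 0.0f)      return UINT64_MAX;    // will never cross
    float ticksUntil = (need.current - need.threshold) / need.decayPerTick;
    return currentTick + static_cast<uint64_t>(std::ceil(ticksUntil));
}
```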
Implications of the “Deterministic for Single-Player Replays on Same Machine” Policy
Adopting this level of determinism provides significant benefits:
- Robust Debugging: Allows developers to reliably reproduce bugs by replaying scenarios with the same initial state and input sequence.
- Quality Assurance: Simplifies testing by allowing for automated tests with predictable outcomes.
- Player Experience: Enables features like saving/sharing game replays or “Tapestry seeds” with input logs that will consistently reproduce a specific gameplay experience on machines running the same game build.
- Foundation for Future Multiplayer: While not guaranteeing cross-platform lockstep out of the box, it ensures the core simulation logic is sound and internally consistent. This makes transitioning to an authoritative server model for multiplayer significantly more straightforward, as the server itself can run this deterministic simulation.
This policy acknowledges that achieving bit-for-bit floating-point determinism across diverse hardware, operating systems, and compiler toolchains is a highly complex challenge, often requiring specialized techniques like fixed-point math or custom math libraries. By focusing on single-machine/build determinism initially, we achieve the most critical benefits for development and single-player features while keeping the path open for more advanced multiplayer architectures.