Sense Phase Systems

The Sense Phase is the agent’s gateway to understanding its environment and its own internal state changes. Systems in this phase are responsible for gathering raw data from the external world and processing internal or external events that inform the agent’s awareness. This information becomes the basis for memory formation, belief updates, and subsequent decision-making.

PerceptionSystem

The PerceptionSystem is responsible for simulating an agent’s ability to detect relevant entities, objects, and phenomena in its immediate surroundings.

  • Responsibility:
    • To identify entities within an agent’s sensory capabilities (defined by range, cones, or other sensory parameters).
    • To update a direct representation of the agent’s current understanding of its immediate environment.
    • To detect significant changes in perception (entities entering/leaving range, significant state changes in perceived entities) and, if subscribers exist, dispatch events for these changes.
  • Inputs:
    • Agent’s Position component.
    • Agent’s SensoryCapabilities (e.g., range, field of view, specific sense modalities).
    • The global SpatialHash (or equivalent spatial index) for efficient proximity queries.
    • The state of the agent’s CurrentPercepts component from the previous tick (facilitated by a double-buffering state management system for the engine, allowing access to current_tick_state and previous_tick_state).
  • Process:
    1. For each active agent, the PerceptionSystem queries the SpatialHash using the agent’s Position and SensoryCapabilities to get a list of potentially perceivable entities.
    2. This list is filtered (e.g., for line-of-sight, occlusion, specific entity types the agent is currently attuned to).
    3. The agent’s CurrentPercepts component is updated with this new set of perceived entities and their relevant observed states (e.g., position, type).
      // Component: CurrentPercepts
      // Stores entities currently within reliable sensory range.
      perceived_entities: List<{
          entity_id: EntityId,
          entity_type: TypeId,
          position: Vector3,
          // ... other directly observable details like hostility status if apparent
          _internal_seen_this_tick_flag: bool // Used for change detection
      }>
    4. Change Detection: The system compares the newly updated CurrentPercepts with the CurrentPercepts state from the previous tick.
    5. Event Dispatch (Conditional): If significant differences are found (e.g., a new ThreatSource enters range, a known FoodSource disappears), and if other systems have “subscribed” to be notified of such perceptual changes for this agent, the PerceptionSystem generates and dispatches specific events via the global Event Bus. Examples:
      • NewEntityInPerceptionRangeEvent(perceiving_agent_id, new_entity_id, new_entity_type, ...)
      • EntityLeftPerceptionRangeEvent(perceiving_agent_id, left_entity_id, ...)
      • PerceivedEntityStateChangedEvent(perceiving_agent_id, entity_id, changed_property, ...)
  • Scheduling: The PerceptionSystem typically runs every tick for agents that are active and aware. Its frequency might be reduced for agents in low-activity states or unobserved POIs.

This hybrid approach allows an agent’s internal cognitive systems to directly query the CurrentPercepts component for an immediate understanding of its surroundings, while also enabling other reactive systems to be driven by events triggered by significant perceptual changes.
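The change-detection and conditional dispatch steps (4 and 5) can be sketched as a set comparison between the previous and current tick's percepts. This is a minimal illustrative sketch: the event names come from the text above, but the dict-based percept representation and the tuple events are assumptions, not the engine's actual data model.

```python
# Hypothetical sketch of the PerceptionSystem's change-detection step.
# `previous_percepts` / `current_percepts` map entity_id -> observed state,
# standing in for the CurrentPercepts component's two buffered tick states.

def detect_perception_changes(agent_id, previous_percepts, current_percepts):
    """Compare two ticks' worth of percepts and return the events to dispatch."""
    events = []
    prev_ids, curr_ids = set(previous_percepts), set(current_percepts)

    for entity_id in curr_ids - prev_ids:   # entered sensory range
        events.append(("NewEntityInPerceptionRangeEvent", agent_id, entity_id))
    for entity_id in prev_ids - curr_ids:   # left sensory range
        events.append(("EntityLeftPerceptionRangeEvent", agent_id, entity_id))
    for entity_id in prev_ids & curr_ids:   # still visible: check for state changes
        if previous_percepts[entity_id] != current_percepts[entity_id]:
            events.append(("PerceivedEntityStateChangedEvent", agent_id, entity_id))
    return events
```

In a real implementation these events would only be built if subscribers exist, as the text notes; the sketch omits that guard for brevity.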

MemoryRecordingSystem

This system is responsible for converting significant experiences – whether from direct perception, the outcomes of an agent’s own actions, or other witnessed events – into persistent MemoryEntry structures within the agent’s AgentMemoryStore.

  • Responsibility:
    • To process various types of incoming events that represent potentially memorable experiences for an agent.
    • To construct detailed MemoryEntry structures based on these events.
    • To add these MemoryEntry records to the relevant agent’s AgentMemoryStore, managing storage limits and significance.
  • Inputs (Listening to Events on the Event Bus):
    • PerceptionChangeEvents (e.g., NewEntityInPerceptionRangeEvent, PerceivedEntityStateChangedEvent) generated by the PerceptionSystem, especially those flagged as novel or highly salient (e.g., first sighting of a threat).
    • InteractionOutcomeEvent (generated by action execution systems in the Act Phase, detailing the result of an agent’s actions like eating, attacking, fleeing, etc.).
    • DamageTakenEvent, HealthRestoredEvent, NeedSatisfiedEvent, GoalCompletedEvent, GoalFailedEvent.
    • Potentially, CommunicationReceivedEvent (if the content is deemed memorable).
  • Process:
    1. For each relevant incoming event pertaining to an agent:
    2. Determine Memorability: Not every minor event needs to be recorded. A filtering step based on event type, intensity, or the agent’s current state (e.g., high alert) may apply.
    3. Calculate Significance & Emotional Valence: Based on the event’s nature, its outcome, its relevance to the agent’s current needs and goals, and possibly the agent’s personality traits, a significance score and emotional_valence are determined for the potential memory.
    4. Construct MemoryEntry: A MemoryEntry is created, populating fields like timestamp, event_type, location, involved_entities, outcome, calculated significance and emotional_valence, and any specific details derived from the event payload.
    5. Store Memory: The new MemoryEntry is added to the agent’s AgentMemoryStore.recent_memory_entries list. The AgentMemoryStore’s internal logic (or this system) handles pruning/forgetting based on capacity, recency, and significance (as detailed in Section 3.2.2).
  • Scheduling: The MemoryRecordingSystem processes events that have occurred within the current tick or are dispatched for the current tick. It effectively translates transient event data into more persistent agent knowledge.
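Steps 3 and 4 above can be sketched as follows. The MemoryEntry fields follow the text; the weighting constants and the `need_relevance` input are illustrative assumptions, not part of the specification.

```python
# Hypothetical sketch of scoring an event and building a MemoryEntry.
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    timestamp: int
    event_type: str
    involved_entities: list
    outcome: str
    significance: float        # 0..1, drives retention during pruning
    emotional_valence: float   # -1 (aversive) .. +1 (rewarding)

def record_memory(event, current_tick, need_relevance):
    # Significance: event intensity, scaled by relevance to current needs/goals.
    significance = min(1.0, event["intensity"] * (0.5 + 0.5 * need_relevance))
    # Valence: successful outcomes are rewarding, failures aversive.
    valence = event["intensity"] if event["outcome"] == "success" else -event["intensity"]
    return MemoryEntry(
        timestamp=current_tick,
        event_type=event["type"],
        involved_entities=event.get("entities", []),
        outcome=event["outcome"],
        significance=significance,
        emotional_valence=max(-1.0, min(1.0, valence)),
    )
```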

Together, the PerceptionSystem and MemoryRecordingSystem form the primary input pathway for an agent’s cognitive processes, enabling it to be aware of its current situation and build a history of its experiences.

Motivate Phase Systems

Once an agent has a sense of its environment and internal state (via perceptions and memories), the Motivate Phase systems translate this awareness, particularly the status of its fundamental Needs, into directed intentions or Goals. These goals provide the primary impetus for the agent’s subsequent planning and actions.

NeedManagementSystem

This system is responsible for overseeing the dynamic state of an agent’s various Need components. It doesn’t actively decay needs every tick per agent (as that’s handled by the projection and scheduling logic within the Need components themselves), but rather manages the events and updates related to needs crossing significant thresholds.

  • Responsibility:
    • To process NeedThresholdReachedEvents dispatched by the Universal Scheduler (e.g., when a need like SustenanceNeed is projected to cross its goal_trigger_threshold or critical_threshold).
    • To facilitate access to the current interpolated value of any Need for other systems.
    • To ensure that when a Need component’s parameters are changed by external factors (e.g., consumption of food, change in activity level affecting decay rate), its future scheduled threshold events are correctly cancelled and re-scheduled.
  • Inputs:
    • NeedThresholdReachedEvent(agent_id, need_type, threshold_type) events from the Universal Scheduler.
    • Requests from other systems to get current need values.
    • Notifications/calls from other systems when a Need component’s base values or decay rates are directly altered.
  • Process:
    1. Handling NeedThresholdReachedEvent:
      • When an event like NeedThresholdReachedEvent(agent_id, need_type: SUSTENANCE, threshold_type: GOAL_TRIGGER) is processed for an agent:
        • The system verifies the current interpolated value of the specified need_type on the agent_id.
        • If the condition is met (e.g., sustenance is indeed below goal trigger), it will typically generate a new, more specific event to trigger goal consideration, such as [NeedName]GoalCheckRequiredEvent(agent_id) (e.g., SustenanceGoalCheckRequiredEvent). This decouples need monitoring from the specifics of goal generation logic.
        • The system ensures that the next relevant threshold event for that need (e.g., the critical_threshold event if the goal_trigger_threshold just fired) is still appropriately scheduled, or re-schedules it if necessary. The _scheduled_goal_trigger_event_id (or critical) on the Need component is cleared as this event has now fired.
    2. Providing Current Need Value:
      • The system may offer a utility function like GetCurrentNeedValue(agent_id, need_type, current_tick): float. This function would access the agent’s relevant Need component, retrieve its current_value_at_last_update, last_updated_tick, and decay_rate_per_tick, and return the accurately interpolated value for the current_tick.
    3. Handling External Need Changes & Re-scheduling:
      • If an external system (e.g., EatFoodSystem) directly modifies a Need component (e.g., increases SustenanceNeed.current_value_at_last_update and updates its last_updated_tick), that system (or a utility function called by it, possibly within NeedManagementSystem) must ensure that any previously scheduled _scheduled_goal_trigger_event_id and _scheduled_critical_event_id for that need on that agent are cancelled with the Universal Scheduler.
      • New threshold events are then projected and scheduled based on the Need’s new state, and their IDs are stored back in the Need component (as described in Section 3.1.2). This keeps the scheduled events synchronized with the agent’s actual condition.
  • Scheduling: The NeedManagementSystem primarily reacts to scheduled NeedThresholdReachedEvents. Its utility functions for getting current values or re-scheduling would be called synchronously by other systems when needed.
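The interpolation behind GetCurrentNeedValue (step 2) and the threshold projection needed for re-scheduling (step 3) can be sketched as below. The linear-decay model follows the component fields named in the text; the rounding policy for the projected tick is an assumption.

```python
# Minimal sketch of need interpolation and threshold projection,
# assuming linear decay between updates.
from dataclasses import dataclass

@dataclass
class Need:
    current_value_at_last_update: float
    last_updated_tick: int
    decay_rate_per_tick: float

def get_current_need_value(need, current_tick):
    """Interpolate the need's value at `current_tick` without mutating it."""
    elapsed = current_tick - need.last_updated_tick
    return max(0.0, need.current_value_at_last_update - elapsed * need.decay_rate_per_tick)

def project_threshold_tick(need, threshold):
    """Tick at which the need is projected to cross `threshold` (for scheduling).
    Assumes the value is currently above the threshold; rounded to the nearest tick."""
    ticks_until = (need.current_value_at_last_update - threshold) / need.decay_rate_per_tick
    return need.last_updated_tick + round(ticks_until)
```

When an external system modifies the Need, it would cancel the events scheduled at the old projected ticks and call `project_threshold_tick` again against the new state.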

This system ensures that needs are efficiently monitored and that their progression correctly triggers subsequent cognitive processes like goal generation.

GoalGenerationSystem

(e.g., SustenanceGoalGenerator, SafetyGoalGenerator)

The GoalGenerationSystem is responsible for instantiating Goal components on an agent when its internal state (primarily unmet Needs) or external stimuli suggest a specific objective should be pursued. There might be several specialized goal generator systems (e.g., one for sustenance, one for safety, one for social goals), or a more unified system that handles various triggers.

  • Responsibility:
    • To listen for events indicating that a need has crossed a goal-triggering threshold (e.g., [NeedName]GoalCheckRequiredEvent from the NeedManagementSystem).
    • To listen for other potential goal-inducing stimuli (e.g., a NewEntityInPerceptionRangeEvent for a significant threat might directly trigger a safety goal).
    • To check if a similar goal is already active to prevent duplicates.
    • To create and add the appropriate Goal component (e.g., SeekSustenanceGoal, SeekSafetyGoal, InvestigateSoundGoal) to the agent if conditions are met.
  • Inputs:
    • [NeedName]GoalCheckRequiredEvent(agent_id) events.
    • Relevant PerceptionChangeEvents (e.g., sighting of a known threat or a highly novel stimulus).
    • The agent’s current Need levels (queried via NeedManagementSystem.GetCurrentNeedValue or directly from components if within the same tick and after need updates).
    • The agent’s existing active Goal components (to avoid duplication).
    • The agent’s AgentBeliefStore and AgentMemoryStore (e.g., a memory of a recent attack might make a safety goal more likely to form even if immediate danger isn’t perceived).
  • Process:
    1. Event Trigger Processing:
      • Upon receiving an event like SustenanceGoalCheckRequiredEvent(agent_id):
        • The system checks if the agent_id already has an active SeekSustenanceGoal component. If so, it might only update its priority or do nothing further for this specific trigger.
        • If no such goal exists, it retrieves the current SustenanceNeed value. If still below the threshold, it proceeds to create the goal.
    2. Stimulus-Based Trigger Processing (Example: Threat Perception):
      • Upon receiving NewEntityInPerceptionRangeEvent(perceiving_agent_id, new_entity_id, new_entity_type) where new_entity_type is believed to be a ThreatSource (checked against AgentBeliefStore):
        • The system checks if perceiving_agent_id already has an active SeekSafetyGoal (perhaps related to a different threat or a general sense of unease).
        • If not, or if this new threat is deemed more immediate/dangerous (based on beliefs about new_entity_type or memories associated with new_entity_id), it creates/updates a SeekSafetyGoal.
    3. Goal Component Creation:
      • When a goal is to be created (e.g., SeekSustenanceGoal):
        • A new SeekSustenanceGoal component is instantiated.
        • Its priority is calculated. This is a crucial step and can be influenced by:
          • The severity of the triggering need (e.g., 1.0 - SustenanceNeed.current_value).
          • Beliefs about the importance of this need/goal.
          • The presence of other conflicting needs or threats (e.g., sustenance goal priority might be lowered if a high-danger threat is also present, deferring to a safety goal).
          • Personality traits (future refinement).
        • The new Goal component is added to the agent.
      • This might also generate an internal NewGoalAddedEvent(agent_id, goal_type) to trigger subsequent systems like GoalPrioritizationSystem or PotentialActionDerivationSystem within the same tick or the next, depending on event processing rules.
  • Scheduling: The GoalGenerationSystem typically processes events dispatched for the current tick. It reacts to conditions becoming true (needs unmet, threats appearing) by creating persistent Goal components that then drive further AI logic.
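The deduplication check, need re-verification, and priority calculation described above can be sketched as one function. The priority formula `1.0 - need_value` comes from the text; the danger-penalty factor is an illustrative assumption.

```python
# Hypothetical sketch of the sustenance goal-creation check.

def maybe_create_sustenance_goal(active_goals, sustenance_value,
                                 goal_trigger_threshold, threat_present):
    """Return a new goal dict, or None if no goal should be created."""
    if "SeekSustenanceGoal" in active_goals:
        return None                    # duplicate: leave the existing goal in place
    if sustenance_value >= goal_trigger_threshold:
        return None                    # need recovered since the trigger event fired
    priority = 1.0 - sustenance_value  # more depleted -> more urgent
    if threat_present:
        priority *= 0.5                # defer to safety when a threat is present
    return {"type": "SeekSustenanceGoal", "priority": priority}
```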

This system acts as a crucial bridge, translating an agent’s internal state and external perceptions into actionable objectives. The sophistication of its trigger conditions and priority calculations will significantly influence the believability of the agent’s motivations.

GoalPrioritizationSystem

Agents may often find themselves with multiple active Goal components simultaneously (e.g., needing sustenance, noticing a potential threat, and being curious about a sound). The GoalPrioritizationSystem is responsible for evaluating these concurrent goals and determining which one(s) the agent should focus on, effectively selecting or ranking the CurrentActiveGoal that will drive immediate action planning.

  • Responsibility:
    • To identify all active Goal components on an agent.
    • To assess and rank these goals based on a variety of factors, including their intrinsic priority, situational context, agent beliefs, and memories.
    • To designate a primary CurrentActiveGoal (or a ranked list of top goals) that the subsequent planning and action selection phases will address.
  • Inputs:
    • All active Goal components on an agent (e.g., SeekSustenanceGoal.priority, SeekSafetyGoal.priority).
    • The agent’s current Need levels (to understand the urgency behind need-driven goals).
    • The agent’s CurrentPercepts (e.g., the immediate presence of a high-level threat might elevate a SeekSafetyGoal’s effective priority).
    • The agent’s AgentBeliefStore (e.g., beliefs about the consequences of ignoring a certain goal, or beliefs about the difficulty/risk of pursuing one goal versus another).
    • The agent’s AgentMemoryStore (e.g., recent negative experiences related to ignoring a safety goal might temporarily boost its ranking).
    • (Future Refinement: Agent’s personality traits, mood, current tasks/roles).
  • Process:
    1. Gather Active Goals: The system collects all entities with one or more active Goal components.
    2. Calculate Effective Priority for Each Goal: For each active goal on an agent, its “effective priority” or “situational urgency” is calculated. This is more than just the priority field stored in the goal component itself; it’s a dynamic assessment. Factors include:
      • Base Priority: The priority value set when the goal was generated (often tied to need urgency).
      • Contextual Modifiers (from Percepts, Beliefs, Memory):
        • Threat Amplification: If a SeekSafetyGoal is active and a ThreatSource is currently perceived in CurrentPercepts, its effective priority is significantly increased.
        • Opportunity Cost/Benefit (Beliefs): If pursuing Goal A (e.g., InvestigateSound) is believed to lead to a high reward but also expose to danger, while Goal B (SeekSustenance) is low risk but essential, this influences ranking.
        • Past Experience (Memory): A memory of recently almost starving might make SeekSustenanceGoal maintain a higher effective priority even if another, less critical goal, has a slightly higher base priority.
        • Goal Interdependencies (Advanced): Goal A might be a prerequisite for Goal B.
    3. Select CurrentActiveGoal:
      • The goal with the highest calculated effective priority is typically selected as the CurrentActiveGoal.
      • This might involve adding a specific component like CurrentActiveGoalFocus(goal_entity_id_or_type) to the agent, or updating a field within the agent’s main “AI state” component.
      • In more complex scenarios, the system might maintain a short, ranked list of top N goals, allowing the agent to potentially interleave actions or quickly switch if the top goal becomes blocked. For this draft, selecting a single primary goal is sufficient.
    4. Handling Goal Switching (Conceptual): If the CurrentActiveGoal changes from one tick to the next (e.g., a new, more urgent threat appears), this system is responsible for flagging this change. This might trigger an interrupt for any ongoing action related to the previous goal, forcing a re-planning cycle.
  • Scheduling: The GoalPrioritizationSystem would typically run each tick for agents with multiple active goals, or whenever a new significant goal is added or a major contextual change occurs (e.g., new high-priority threat perceived). Its output (the CurrentActiveGoal) directly feeds into the Plan & Appraise phase systems.
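The "effective priority" calculation (step 2) and selection (step 3) can be sketched as below. The additive modifier model and its magnitudes are assumptions chosen for illustration; only the inputs (base priority, percepts, memories) come from the text.

```python
# Hypothetical sketch of situational goal ranking.

def effective_priority(goal, percepts, recent_memories):
    score = goal["priority"]  # base priority set at goal generation
    # Threat amplification: a currently perceived threat makes safety urgent.
    if goal["type"] == "SeekSafetyGoal" and percepts.get("threat_visible"):
        score += 0.5
    # Past experience: recent failures tied to this goal raise its urgency.
    score += 0.1 * sum(1 for m in recent_memories
                       if m.get("related_goal") == goal["type"]
                       and m.get("outcome") == "failure")
    return score

def select_current_active_goal(goals, percepts, recent_memories):
    """Pick the single highest-ranked goal, per this draft's simplification."""
    return max(goals, key=lambda g: effective_priority(g, percepts, recent_memories))
```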

This system ensures that agents behave rationally by focusing their efforts on what is most important or urgent in their current situation, considering a holistic view of their needs, environment, and internal knowledge. The sophistication of the “effective priority” calculation is a key area for tuning agent personality and intelligence.

Plan & Appraise Phase Systems

Once an agent has a prioritized CurrentActiveGoal, the Plan & Appraise phase systems are responsible for figuring out how to achieve that goal. This involves first deriving potential sequences of actions (plans) that could satisfy the goal, and then evaluating or “appraising” those potential actions to select the most suitable one based on the agent’s current knowledge, beliefs, and the perceived state of the world.

PotentialActionDerivationSystem

This system is tasked with generating a set of candidate actions or short action sequences that an agent could take to address its CurrentActiveGoal.

  • Responsibility:
    • Based on the type of CurrentActiveGoal, identify relevant known strategies or behaviors.
    • Query the agent’s knowledge (KnownFoodLocation from CurrentPercepts or AgentMemoryStore, AgentBeliefStore about object properties, etc.) to instantiate these strategies into concrete PotentialAction candidates.
  • Inputs:
    • The agent’s CurrentActiveGoal (e.g., SeekSustenanceGoal, SeekSafetyFromThreat(ThreatEntityID)).
    • Agent’s CurrentPercepts (e.g., locations of visible food, threats, escape routes).
    • Agent’s AgentMemoryStore (e.g., remembered locations of resources not currently in direct perception).
    • Agent’s AgentBeliefStore (e.g., beliefs about what actions are effective for certain goals, or what objects can be interacted with in certain ways).
  • Process:
    1. Identify Goal Type: The system examines the CurrentActiveGoal.
    2. Retrieve Applicable Strategies/Behaviors: Based on the goal type, it accesses a set of predefined (or learned, in advanced agents) strategies.
      • For SeekSustenanceGoal:
        • Strategy 1: “Go to known/perceived food and eat.”
        • Strategy 2: “Search for new food sources.”
        • Strategy 3 (if applicable): “Request food from another agent.”
      • For SeekSafetyFromThreat(ThreatEntityID):
        • Strategy 1: “Flee from threat.”
        • Strategy 2 (if capable): “Hide from threat.”
        • Strategy 3 (if capable & appropriate): “Confront/Attack threat.”
    3. Instantiate PotentialAction Candidates: For each applicable strategy, the system attempts to generate one or more concrete PotentialAction data structures.
      • Example: For “Go to known/perceived food and eat”:
        • Query CurrentPercepts and AgentMemoryStore for KnownFoodLocations.
        • For each found food location, create a PotentialAction like: { action_sequence: [MoveTo(FoodLocationX), Eat(FoodSourceAtX)], associated_goal: SeekSustenanceGoal, estimated_outcome: ... }
      • Example: For “Flee from threat”:
        • Identify viable escape directions (e.g., away from ThreatEntityID’s position, towards known safe zones based on AgentMemoryStore or AgentBeliefStore).
        • Create PotentialActions like: { action_sequence: [MoveTo(SafePointY)], associated_goal: SeekSafetyFromThreat, estimated_outcome: ... }
    4. Output: A list of PotentialAction data structures is passed to the ActionAppraisalSystem. These are not yet committed actions, just possibilities.
  • Scheduling: This system runs when an agent has a CurrentActiveGoal but no committed plan of action, or if its current plan is invalidated or completed and the goal persists.
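Step 3's instantiation of "go to food and eat" candidates can be sketched as below. The PotentialAction shape follows the example in the text; the rule that fresh percepts override stale memories is an assumption.

```python
# Hypothetical sketch of deriving PotentialAction candidates for sustenance.

def derive_sustenance_actions(perceived_food, remembered_food):
    """Each input maps food_entity_id -> position. Percepts win over memory."""
    known = {**remembered_food, **perceived_food}
    return [
        {
            "action_sequence": [("MoveTo", pos), ("Eat", food_id)],
            "associated_goal": "SeekSustenanceGoal",
        }
        for food_id, pos in known.items()
    ]
```

These are candidates only; nothing is committed until the ActionAppraisalSystem and ActionSelectionSystem have run.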

ActionAppraisalSystem (Orchestrator)

This is a critical cognitive system where the agent evaluates the generated PotentialAction candidates to determine their desirability. It acts as an orchestrator, invoking various specialized evaluators.

  • Responsibility:
    • For each PotentialAction derived for the CurrentActiveGoal, calculate an overall “appraisal score” or “utility.”
    • This score reflects the agent’s “opinion” of the action, considering its likely benefits, costs, risks, and alignment with its internal state.
  • Inputs:
    • The list of PotentialAction candidates from the PotentialActionDerivationSystem.
    • The agent’s CurrentActiveGoal.
    • The agent’s full core state: Needs, CurrentPercepts, AgentMemoryStore, AgentBeliefStore.
  • Process (Orchestration):
    1. For each PotentialAction in the list:
    2. Initialize an appraisal context for this action.
    3. Invoke a series of AppraisalCriterionEvaluator functions/sub-systems. Each evaluator focuses on a specific aspect of the action:
      • EvaluateExpectedNeedSatisfaction(action, agent_state, target_need): How well is this action expected to satisfy the primary need driving the CurrentActiveGoal? (e.g., checks FoodSource.sustenance_value for an Eat action, modified by beliefs about that food type). Returns a benefit score.
      • EvaluateImpactOnOtherNeeds(action, agent_state): How might this action affect other needs? (e.g., a long travel action might negatively impact RestNeed). Returns a cost/benefit score for secondary needs.
      • EvaluateRiskToSafety(action, agent_state): Assesses risks based on CurrentPercepts (e.g., the path to food passes near a perceived threat), AgentMemoryStore (e.g., “last time I went there, I was attacked”), and AgentBeliefStore (e.g., “this area is believed to be dangerous”). Returns a risk score (e.g., probability or severity of a negative outcome).
      • EvaluateTimeAndEnergyCost(action, agent_state): Estimates the time and/or energy expenditure required for the action (e.g., travel distance). Returns a cost score.
      • EvaluateLikelihoodOfSuccess(action, agent_state): Based on AgentBeliefStore (e.g., “I believe I am capable of performing this action”) and CurrentPercepts (e.g., “the path is not blocked”). Returns a probability.
      • EvaluateResourceAvailability(action, agent_state): Checks whether the agent possesses the necessary tools or resources.
      • (Future: EvaluateSocialConsequences, EvaluateAlignmentWithPersonalityTraits)
    4. Each AppraisalCriterionEvaluator queries the relevant parts of the agent’s state (Needs, Beliefs, Memory, Percepts) to compute its specific score contribution.
    5. The ActionAppraisalSystem aggregates these individual scores into a single TotalAppraisalScore for the PotentialAction. The aggregation method can be a weighted sum, a multiplicative approach, or a more complex utility function. The weights themselves could be influenced by the agent’s personality or current dominant Need.
      PotentialAction.appraisal_score = calculate_aggregate_score(score_from_evaluator1, score_from_evaluator2, ...)
  • Scheduling: This system runs after PotentialActionDerivationSystem has produced candidates for the CurrentActiveGoal.
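The orchestration loop with a weighted-sum aggregation can be sketched as below. The text deliberately leaves the aggregation method open; the weighted sum, the evaluator registry, and the dict-based action are all illustrative assumptions.

```python
# Hypothetical sketch of appraisal orchestration with weighted-sum aggregation.

def appraise(action, agent_state, evaluators, weights):
    """evaluators: name -> fn(action, agent_state) -> score (higher is better;
    cost/risk evaluators return negative contributions in this sketch)."""
    scores = {name: fn(action, agent_state) for name, fn in evaluators.items()}
    action["appraisal_score"] = sum(weights[name] * s for name, s in scores.items())
    return action["appraisal_score"]
```

Swapping the weight table per agent archetype is one way to realize the "different agents think differently" goal stated below.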

The AppraisalCriterionEvaluator functions are where much of the agent’s specific intelligence, knowledge, and biases are encoded. Their modularity allows for a highly extensible and tunable appraisal process, enabling different agent archetypes to “think” about their options in distinct ways.

Act Phase Systems

Once a PotentialAction has been selected through appraisal, the Act Phase systems are responsible for translating that decision into concrete operations within the game world. This involves initiating the chosen action, managing its execution over time (if it’s not instantaneous), and handling its eventual outcome.

ActionSelectionSystem

This system makes the final commitment to an action based on the appraisals.

  • Responsibility:
    • To review the PotentialAction candidates and their associated TotalAppraisalScore (as calculated by the ActionAppraisalSystem).
    • To select the “best” action to pursue for the CurrentActiveGoal.
    • To instantiate the necessary top-level action component(s) on the agent to begin execution of the chosen action sequence.
  • Inputs:
    • The list of PotentialAction candidates, each with its TotalAppraisalScore.
    • The agent’s CurrentActiveGoal.
  • Process:
    1. Select Best Action: The system typically selects the PotentialAction with the highest TotalAppraisalScore.
      • (Future Refinements: It might incorporate randomness for less predictable behavior if scores are very close, or apply “satisficing” logic – picking the first “good enough” action rather than always the absolute optimal, to save computation or simulate bounded rationality).
    2. Instantiate Action Component(s): For the chosen PotentialAction (which may be a sequence like [MoveTo(X), Eat(Y)]), the system instantiates the first concrete action component from that sequence.
      • Example: If the chosen action is MoveTo(FoodLocationX) then Eat(FoodSourceAtX), it adds a MoveToTargetAction component to the agent:
        // Add Component: MoveToTargetAction
        // target_position = FoodLocationX.position
        // target_entity_id = FoodLocationX.food_entity_id
        // goal_type_driving_action = SEEK_SUSTENANCE
        // intended_on_arrival_interaction = EAT_FOOD_INTERACTION
        // _scheduled_arrival_event_id = None // To be set by MovementSystem
    3. Update Agent State: The agent’s internal state might be updated to reflect that it is now “busy” or “executing a plan.” The CurrentActiveGoal component might have its status updated (e.g., to EXECUTING_PLAN).
  • Scheduling: This system runs after the ActionAppraisalSystem has completed its evaluations for the current tick and CurrentActiveGoal.
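Step 1, including the satisficing variant mentioned as a future refinement, can be sketched as below; the threshold parameter is an assumption used to illustrate that variant.

```python
# Hypothetical sketch of final action selection.

def select_action(candidates, satisficing_threshold=None):
    """Pick the best-appraised candidate, or (optionally) the first
    'good enough' one in derivation order."""
    if not candidates:
        return None
    if satisficing_threshold is not None:
        for c in candidates:
            if c["appraisal_score"] >= satisficing_threshold:
                return c
    return max(candidates, key=lambda c: c["appraisal_score"])
```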

MovementSystem (Handles MoveToTargetAction)

This system is responsible for moving agents through the world when they have an active MoveToTargetAction.

  • Responsibility:
    • To calculate paths to the target position (if not a direct line).
    • To update the agent’s Position component incrementally each tick.
    • To detect arrival at the target or if movement is blocked.
    • To schedule an ArrivalAtTargetEvent or MovementBlockedEvent with the Universal Scheduler.
  • Inputs:
    • Agent entities with an active MoveToTargetAction component and a Position component.
    • World collision data / navigation mesh (for pathfinding).
  • Process (Each Tick for an Agent with MoveToTargetAction):
    1. Pathfinding (If Needed): If no path is yet calculated or the current path is invalidated (e.g., by new obstacles), attempt to find a path to MoveToTargetAction.target_position. If no path, schedule MovementBlockedEvent.
    2. Move Agent: Update the agent’s Position component along the path by a distance determined by its speed and deltaTime.
    3. Check for Arrival/Blockage:
      • If agent reaches target_position (or is within a small threshold): Schedule an ArrivalAtTargetEvent(agent_id, target_entity_id, intended_on_arrival_interaction, goal_type_driving_action) for the current or next tick. The MoveToTargetAction component might be removed or marked as completed.
      • If movement is blocked for a significant duration or an impassable obstacle is encountered: Schedule a MovementBlockedEvent(agent_id, reason) for the current or next tick. The agent might then need to re-plan.
    4. Scheduling of Self (Conceptual for Ongoing Movement): While not explicitly creating an event every tick for movement, the MovementSystem is part of the tick loop. The “event” aspect comes from the scheduling of the completion or failure of the entire movement action. The _scheduled_arrival_event_id in MoveToTargetAction could be used if the system pre-calculates the exact arrival tick.
  • Rendering Interpolation: For smooth visuals, the rendering system would interpolate the agent’s position between its Position at previous_tick_state and current_tick_state.
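The per-tick movement update (step 2), the arrival check (step 3), and the render-time interpolation can be sketched as below. Straight-line 2D movement stands in for pathfinding here, and the snap-to-target arrival rule is an assumption.

```python
# Hypothetical sketch of a movement tick and render interpolation.
import math

def movement_tick(position, target, speed):
    """Advance `position` toward `target` by `speed`; return (new_pos, arrived)."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target, True          # snap to target; schedule ArrivalAtTargetEvent
    step = speed / dist
    return (position[0] + dx * step, position[1] + dy * step), False

def render_position(prev_pos, curr_pos, alpha):
    """Interpolate between the two buffered tick states (alpha in [0, 1])."""
    return tuple(p + (c - p) * alpha for p, c in zip(prev_pos, curr_pos))
```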

InteractionInitiationSystem

This system acts as a bridge between completing a movement (or other preparatory action) and starting the intended interaction.

  • Responsibility:
    • To listen for events indicating an agent is ready to perform a specific interaction (e.g., ArrivalAtTargetEvent).
    • To add the appropriate specific interaction component (e.g., PerformEatAction) to the agent based on the context of the preceding action.
  • Inputs:
    • ArrivalAtTargetEvent(agent_id, target_entity_id, intended_on_arrival_interaction, goal_type_driving_action)
    • Other events that might signify readiness for interaction (e.g., ToolEquippedEvent if a tool was needed).
  • Process:
    1. When an ArrivalAtTargetEvent is received for an agent_id:
    2. Based on intended_on_arrival_interaction:
      • If EAT_FOOD_INTERACTION: Add PerformEatAction(agent_id, food_target_id: target_entity_id) component.
      • If ATTACK_INTERACTION: Add PerformAttackAction(agent_id, enemy_target_id: target_entity_id) component.
      • If FLEE_INTERACTION_POINT_REACHED: (This might be a special case where “arrival” means “successfully fled to this spot” and might resolve the flee goal).
      • And so on for other interaction types.
  • Scheduling: This system processes events dispatched for the current tick.
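The dispatch in Process step 2 is naturally table-driven. The interaction and component names come from the text; the table-driven structure and return shape are illustrative choices.

```python
# Hypothetical sketch of mapping an intended interaction to the component
# added on arrival.

INTERACTION_COMPONENTS = {
    "EAT_FOOD_INTERACTION": "PerformEatAction",
    "ATTACK_INTERACTION": "PerformAttackAction",
}

def on_arrival(agent_id, target_entity_id, intended_interaction):
    """Return the (component, fields) to add, or None for special cases."""
    component = INTERACTION_COMPONENTS.get(intended_interaction)
    if component is None:
        # e.g. FLEE_INTERACTION_POINT_REACHED resolves the flee goal instead
        return None
    return (component, {"agent_id": agent_id, "target_id": target_entity_id})
```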

Specific Interaction Execution Systems

(e.g., EatFoodSystem, FleeSystem, CombatSystem)

These are a suite of specialized systems, each responsible for executing a particular type of interaction.

  • Responsibility (Example: EatFoodSystem for PerformEatAction):
    • To execute the logic of the interaction (e.g., transferring sustenance from food to agent).
    • To determine the outcome of the interaction (success, failure, partial success).
    • To generate an InteractionOutcomeEvent detailing the result.
  • Inputs (Example: EatFoodSystem):
    • Agent entities with a PerformEatAction(food_target_id) component.
    • The agent’s SustenanceNeed component.
    • The FoodSource component on the food_target_id.
  • Process (Example: EatFoodSystem):
    1. Verify the food_target_id still exists and is a valid FoodSource.
    2. Access FoodSource.sustenance_value.
    3. Increase Agent.SustenanceNeed.current_value_at_last_update (and update its last_updated_tick). Trigger re-scheduling of need events for this agent (as per Section 2.1).
    4. (Optional: Decrease FoodSource.quantity, potentially destroying the FoodSource if depleted).
    5. Generate InteractionOutcomeEvent(agent_id, interaction_type: EAT, target_id: food_target_id, success: true, details: {satiation_gained: X, food_depleted: Y}).
    6. Remove the PerformEatAction component.
  • Scheduling: These systems process agents bearing their specific action components each tick; alternatively, a longer interaction may span multiple ticks and schedule its own completion event. For simplicity in this draft, we assume many basic interactions (such as eating a single item) complete within a single tick once initiated.
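The EatFoodSystem process steps above can be sketched as follows. The struct fields and the outcome-event shape are assumptions for illustration; a real system would query these as ECS components and also refresh `last_updated_tick` and re-schedule need events, which is elided here.

```rust
pub struct SustenanceNeed {
    pub current_value: f32, // 0.0 = starving, 100.0 = fully sated (assumed scale)
}

pub struct FoodSource {
    pub sustenance_value: f32,
    pub quantity: u32,
}

#[derive(Debug, PartialEq)]
pub struct InteractionOutcomeEvent {
    pub success: bool,
    pub satiation_gained: f32,
    pub food_depleted: bool,
}

/// Execute one eat interaction; returns the outcome event for the Learn Phase.
pub fn eat(need: &mut SustenanceNeed, food: &mut FoodSource) -> InteractionOutcomeEvent {
    // Steps 1-2: the caller has already verified the target is a valid FoodSource.
    let gained = food.sustenance_value.min(100.0 - need.current_value);
    // Step 3: satisfy the need, clamped to its maximum.
    need.current_value += gained;
    // Step 4: deplete the source by one unit.
    food.quantity = food.quantity.saturating_sub(1);
    // Step 5: report the result; step 6 (removing PerformEatAction) is the caller's job.
    InteractionOutcomeEvent {
        success: true,
        satiation_gained: gained,
        food_depleted: food.quantity == 0,
    }
}
```

Note that the function returns the `InteractionOutcomeEvent` rather than dispatching it, keeping the interaction logic pure and easy to test.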

This Act Phase, with its chain of selection, movement, initiation, and execution, brings the agent’s decisions to life in the simulation. The InteractionOutcomeEvent is the critical output that feeds back into the Learn Phase.

Learn Phase Systems

The Learn Phase is where the agent processes the outcomes of its actions and significant environmental events, updating its internal state (Needs, Memories, Beliefs) based on these experiences. This feedback loop is crucial for adaptation and more nuanced future behavior.

GoalResolutionSystem

This system is responsible for determining whether an agent’s active goals have been met or should be abandoned based on recent events and state changes.

  • Responsibility:
    • To monitor InteractionOutcomeEvents and changes in Need levels.
    • To assess if the CurrentActiveGoal (and potentially other active goals) has been satisfied or is no longer viable/relevant.
    • To remove completed or obsolete Goal components from the agent.
  • Inputs:
    • InteractionOutcomeEvents (e.g., successful consumption of food for a SeekSustenanceGoal).
    • Notifications of significant Need level changes (e.g., if a need is fully satisfied).
    • The agent’s active Goal components and its CurrentActiveGoalFocus.
    • (Future: Events indicating a goal has become impossible, e.g., target destroyed).
  • Process:
    1. Listen for Relevant Events:
      • Upon receiving an InteractionOutcomeEvent for an agent (e.g., interaction_type: EAT, success: true):
        • Check if the agent’s CurrentActiveGoalFocus was related (e.g., SeekSustenanceGoal).
        • Query the relevant Need (e.g., SustenanceNeed). If the need is now above its goal_trigger_threshold (or a specific “satisfied” threshold), then the goal is considered achieved.
    2. Check Need-Driven Goals Directly:
      • Periodically, or when a Need is significantly satisfied, this system might check all need-driven goals. If the underlying Need for a Goal (e.g., SustenanceNeed for SeekSustenanceGoal) is no longer below its trigger threshold, the Goal component can be removed.
    3. Remove Goal Component: If a goal is deemed satisfied or obsolete:
      • The corresponding Goal component (e.g., SeekSustenanceGoal) is removed from the agent.
      • If the agent had a CurrentActiveGoalFocus component pointing to this goal, it is also removed or cleared.
      • This might generate an internal GoalCompletedEvent(agent_id, goal_type) or GoalAbandonedEvent, which could trigger other systems (e.g., a mood update system, or the GoalPrioritizationSystem to select a new CurrentActiveGoal if other goals are pending).
  • Scheduling: This system processes relevant events dispatched for the current tick and may also run periodically for agents to clean up goals whose underlying needs have been met through means not directly tied to a specific interaction outcome event.
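The core resolution check can be sketched as below: a need-driven goal is considered satisfied once its underlying need rises back above the trigger threshold, at which point the goal component is removed and a completion event would be emitted. The type and field names are illustrative assumptions.

```rust
pub struct NeedState {
    pub current_value: f32,
    pub goal_trigger_threshold: f32,
}

/// Steps 1-2: decide whether the goal backing this need is still warranted.
pub fn goal_is_satisfied(need: &NeedState) -> bool {
    need.current_value >= need.goal_trigger_threshold
}

/// Step 3: partition the agent's need-driven goals into those that remain
/// active and those that completed (which would become GoalCompletedEvents).
pub fn resolve_goals(
    goals: Vec<(&'static str, NeedState)>,
) -> (Vec<&'static str>, Vec<&'static str>) {
    let mut active = Vec::new();
    let mut completed = Vec::new();
    for (goal_type, need) in goals {
        if goal_is_satisfied(&need) {
            completed.push(goal_type); // remove component, emit completion event
        } else {
            active.push(goal_type); // goal remains pending
        }
    }
    (active, completed)
}
```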

BeliefUpdateSystem

(or part of MemorySystem/CognitiveSystem)

This system is responsible for forming new beliefs and modifying (reinforcing or challenging) existing beliefs based on an agent’s experiences (as recorded in MemoryEntry records) and, potentially, other stimuli such as communication.

  • Responsibility:
    • To analyze new MemoryEntry records in the AgentMemoryStore.
    • To identify patterns, correlations, and significant outcomes that could lead to new beliefs or adjustments to existing ones.
    • To update the agent’s AgentBeliefStore with these changes.
  • Inputs:
    • Newly created MemoryEntry records (signaled by events from MemoryRecordingSystem or by directly querying recent additions to AgentMemoryStore).
    • The agent’s existing AgentBeliefStore.
    • (Future: CommunicationReceivedEvent containing information/assertions from other agents).
  • Process (Conceptual - building on Section 3.3.2):
    1. Triggering Belief Review: This system might be triggered by:
      • The addition of a highly significant new MemoryEntry.
      • A periodic “reflection” event scheduled for the agent.
    2. Pattern Detection / Memory Analysis (The “Systems within Systems”):
      • Invokes various “Pattern Detector” sub-systems or rules that scan the AgentMemoryStore (focusing on recent/significant memories). Examples:
        • Consistent Outcome Detector: Looks for multiple memories where interacting with a specific subject_identifier (or TypeId) in a certain way (property related to an action) consistently leads to a particular outcome or emotional_valence.
        • Co-occurrence Detector: Identifies frequent spatial or temporal co-occurrence of certain entities or event types.
    3. Belief Modification/Formation:
      • Reinforcement: If a new memory’s outcome and details support an existing BeliefEntry, that belief’s strength and/or confidence are increased. The last_updated_tick is refreshed.
      • Challenge/Weakening: If a new memory contradicts an existing BeliefEntry, the belief’s strength and/or confidence may decrease.
      • New Belief Formation: If a strong pattern is detected in memories where no corresponding BeliefEntry exists, a new belief is created. Its initial strength, confidence, and value are derived from the characteristics of the supporting memories (e.g., consistency of pattern, average significance/outcome of memories). source_type would be DIRECT_EXPERIENCE.
    4. Hearsay Integration (Conceptual):
      • If processing a CommunicationReceivedEvent, a new BeliefEntry might be formed with source_type=HEARSAY. Its initial strength/confidence would be modulated by a belief about the informant’s trustworthiness. Subsequent direct experiences (memories) could then reinforce or challenge this hearsay-based belief.
    5. Conflict Management (Simplified for this Draft): For now, if a new strong belief directly contradicts an old weak one, the old one might be heavily weakened or overwritten. More complex conflict resolution (e.g., maintaining conflicting beliefs with different confidence levels) is a future refinement.
  • Scheduling: This system can be event-driven (reacting to new significant memories) and/or have parts that run as periodic scheduled tasks for each agent (e.g., “perform belief review cycle every N ticks”).
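The reinforcement/weakening step (step 3) can be sketched as a simple update rule: strength and confidence move a fixed fraction toward 1.0 on a supporting memory and toward 0.0 on a contradicting one. This particular rule and its learning-rate constant are one simple choice among many, not a prescribed mechanism; the names follow the spec's `BeliefEntry` vocabulary.

```rust
#[derive(Debug)]
pub struct BeliefEntry {
    pub strength: f32,        // 0.0..=1.0
    pub confidence: f32,      // 0.0..=1.0
    pub last_updated_tick: u64,
}

/// Assumed tuning constant: fraction of the gap closed per update.
const LEARNING_RATE: f32 = 0.2;

/// Apply one supporting or contradicting memory to an existing belief,
/// refreshing last_updated_tick as described in step 3.
pub fn update_belief(belief: &mut BeliefEntry, supports: bool, tick: u64) {
    let target = if supports { 1.0 } else { 0.0 };
    belief.strength += LEARNING_RATE * (target - belief.strength);
    belief.confidence += LEARNING_RATE * (target - belief.confidence);
    belief.last_updated_tick = tick;
}
```

Because each update only closes part of the gap, a single contradicting memory dents a well-reinforced belief without erasing it, which matches the draft's simplified conflict-management stance.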

The Learn Phase closes the agent’s cognitive loop, allowing its internal model of the world and itself to evolve based on the consequences of its actions and observations. This iterative process is fundamental to the agent’s ability to exhibit adaptive and increasingly sophisticated behavior over time.