The behavior and decision-making of an agent are fundamentally driven by its internal state. This state is represented by a collection of specialized components attached to the agent’s entity. These components store information about the agent’s physiological and psychological condition, its memories of past events, and its beliefs about the world.

[NeedName]Need Components (e.g., SustenanceNeed)

Needs are the primary motivators for agent behavior, representing requirements for survival, well-being, or other intrinsic drives. Each distinct need is represented by its own component type. For this draft, we will use SustenanceNeed as the primary example.

Data Structure

A typical Need component, such as SustenanceNeed, would contain the following data:

  • current_value_at_last_update: float (Range: e.g., 0.0 to 1.0, where 1.0 indicates the need is fully satisfied and 0.0 indicates critical deficiency). This stores the need’s value as of the last_updated_tick.
  • last_updated_tick: int (The simulation tick at which current_value_at_last_update was accurately calculated and stored).
  • decay_rate_per_tick: float (The amount by which the need’s value decreases per simulation tick under normal conditions. This rate can be modified by agent activity, environment, or other factors).
  • goal_trigger_threshold: float (e.g., 0.4. If the need’s projected value drops below this, a corresponding goal, like SeekSustenanceGoal, is typically generated).
  • critical_threshold: float (e.g., 0.1. If the need’s projected value drops below this, more severe consequences may occur, such as health damage or incapacitation, and it may trigger higher priority goals).
  • _scheduled_goal_trigger_event_id: Option<EventID> (Stores the ID of the event currently scheduled with the Universal Scheduler for when this need is projected to cross the goal_trigger_threshold. None if no such event is currently scheduled or if the need is above the threshold).
  • _scheduled_critical_event_id: Option<EventID> (Stores the ID of the event currently scheduled for when this need is projected to cross the critical_threshold).

The actual current value of the need at any given current_tick can be interpolated as:

interpolated_value = current_value_at_last_update - (decay_rate_per_tick * (current_tick - last_updated_tick))

This interpolation is used by systems that require an up-to-date need value before its next scheduled full update.
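
A minimal sketch of this interpolation in Rust follows. The struct is a pared-down stand-in for the component described above (the threshold and scheduling fields are omitted), and it assumes current_tick is never earlier than last_updated_tick:

// Sketch: linear interpolation of a decaying need value. Field names
// mirror the component description above; scheduling-related fields
// are omitted to keep the example small.
struct SustenanceNeed {
    current_value_at_last_update: f32,
    last_updated_tick: u64,
    decay_rate_per_tick: f32,
}

impl SustenanceNeed {
    // Projects the stored value forward to current_tick (assumed >=
    // last_updated_tick), clamping at 0.0 so the value never reads
    // below "critical deficiency".
    fn interpolated_value(&self, current_tick: u64) -> f32 {
        let elapsed = (current_tick - self.last_updated_tick) as f32;
        (self.current_value_at_last_update - self.decay_rate_per_tick * elapsed).max(0.0)
    }
}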

Interaction with Event Scheduler

Need components are dynamically managed via the Universal Scheduler to avoid constant polling:

  1. Initialization/Change: When a Need component is added to an agent, or when its state changes significantly (e.g., after eating, changing activity level which alters decay_rate_per_tick), its current value is calculated, and current_value_at_last_update and last_updated_tick are set.
  2. Projection & Scheduling:
    • The system then projects how many ticks it will take for the need’s interpolated value to reach the goal_trigger_threshold and the critical_threshold, based on the current decay_rate_per_tick (a projection sketch is given below).
    • If these projected times are in the future:
      • Any events previously scheduled for this need on this agent (those referenced by _scheduled_goal_trigger_event_id and _scheduled_critical_event_id) are first canceled with the Universal Scheduler.
      • New events (e.g., NeedThresholdReachedEvent(agent_id, need_type: SUSTENANCE, threshold_type: GOAL_TRIGGER)) are scheduled for the calculated future ticks. The IDs of these new scheduled events are stored in _scheduled_goal_trigger_event_id and _scheduled_critical_event_id.
  3. Event Handling: When a scheduled NeedThresholdReachedEvent fires for an agent, the NeedManagementSystem (detailed later) will process it. This typically involves updating the agent’s state (e.g., triggering goal generation if appropriate) and then re-evaluating and re-scheduling the next event for that need, if applicable (e.g., scheduling the critical threshold event if the goal trigger has just fired).

This scheduled approach ensures that need-related logic is only triggered when a need actually crosses a significant threshold, greatly improving simulation efficiency for many agents.
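
To make the projection step concrete, here is a minimal Rust sketch. The function name is hypothetical, and rounding up with ceil (so the event fires on the first whole tick at or past the crossing) is an assumption of this sketch; canceling stale event IDs and calling the scheduler with current_tick + dt would happen around this function:

// Sketch: how many ticks until a linearly decaying need crosses a
// threshold. The resulting tick (current_tick + dt) would be handed to
// the Universal Scheduler; the scheduler API itself is not shown.
fn ticks_until_threshold(value: f32, threshold: f32, decay_rate_per_tick: f32) -> Option<u64> {
    if decay_rate_per_tick <= 0.0 || value <= threshold {
        return None; // no decay, or the threshold is already crossed
    }
    // Solve value - rate * t <= threshold for the smallest whole t.
    Some(((value - threshold) / decay_rate_per_tick).ceil() as u64)
}

fn main() {
    // With the initial values used later in this document:
    // value 0.7, goal threshold 0.4, decay 0.0001/tick -> 3000 ticks.
    assert_eq!(ticks_until_threshold(0.7, 0.4, 0.0001), Some(3000));
    // Already below the threshold: nothing to schedule.
    assert_eq!(ticks_until_threshold(0.3, 0.4, 0.0001), None);
}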

AgentMemoryStore Component

An agent’s ability to learn and adapt is fundamentally tied to its memory of past experiences. The AgentMemoryStore component serves as a centralized repository for an agent’s recollections of significant events. Unlike Need components, which typically have one instance per need type, an agent will usually have a single AgentMemoryStore component that internally manages a collection of individual memories.

Internal MemoryEntry Structure

The AgentMemoryStore contains a collection of MemoryEntry structures, each representing a distinct remembered event or experience. The structure of a MemoryEntry is designed to be flexible yet informative:

  • id: UniqueId (A unique identifier for this specific memory entry, potentially useful for linking beliefs to supporting memories or for debugging).
  • timestamp: int (The simulation tick at which the event occurred or was fully processed and recorded into memory).
  • event_type: Enum_EventType (Categorizes the memory, e.g., ATE_FOOD, SAW_ENTITY, WAS_ATTACKED, HEARD_SOUND, COMPLETED_TASK, RECEIVED_DAMAGE, INTERACTION_OUTCOME). This is crucial for querying and filtering memories.
  • location: Option<Vector3> (The geographical coordinates where the event took place, if applicable. None for non-spatial events).
  • involved_entities: List<{entity_id: EntityId, role: Enum_EntityRole_InEvent}> (A list of entities involved in the event and their roles. Roles might include TARGET, SOURCE, ALLY, ENEMY, FOOD_SOURCE_CONSUMED, TOOL_USED, etc.). This allows memories to be contextually linked to other entities.
  • outcome: Option<Enum_EventOutcome> (Describes the result of the event for the agent, e.g., POSITIVE_HIGH, POSITIVE_MODERATE, NEUTRAL, NEGATIVE_MODERATE, NEGATIVE_HIGH. This is key for learning and reinforcement).
  • significance: float (A normalized value, e.g., 0.0 to 1.0, indicating the subjective importance or salience of this memory to the agent. Higher significance might make a memory persist longer or have a stronger influence on belief formation).
  • emotional_valence: Option<float> (A value, e.g., -1.0 (very negative) to 1.0 (very positive), representing the emotional tinge associated with the memory. This can influence mood and future appraisals).
  • details: Map<String, AnyType> (A flexible key-value store for event-specific data not covered by other fields. For example, for an ATE_FOOD event: {"food_type_id": "RedBerry", "satiation_gained_value": 0.5, "was_poisonous": false}).

The AgentMemoryStore component itself might internally store these MemoryEntry structures in a list, perhaps with a fixed capacity, or use more sophisticated data structures for efficient querying if memory recall becomes a performance bottleneck.

// Component: AgentMemoryStore
// --- Internal Data ---
// recent_memory_entries: List<MemoryEntry> // Example internal storage
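
The same shape, expressed as a hedged Rust sketch: every type name and enum variant here is an illustrative stand-in for the fields described above, and a string-to-string map substitutes for Map<String, AnyType>:

// Sketch: MemoryEntry and AgentMemoryStore, mirroring the field list
// above. Enum variant lists are trimmed to the examples named in the
// text.
use std::collections::HashMap;

type UniqueId = u64;
type EntityId = u64;

enum EventType { AteFood, SawEntity, WasAttacked, HeardSound, CompletedTask, ReceivedDamage, InteractionOutcome }
enum EntityRoleInEvent { Target, Source, Ally, Enemy, FoodSourceConsumed, ToolUsed }
enum EventOutcome { PositiveHigh, PositiveModerate, Neutral, NegativeModerate, NegativeHigh }

struct InvolvedEntity {
    entity_id: EntityId,
    role: EntityRoleInEvent,
}

struct MemoryEntry {
    id: UniqueId,
    timestamp: i64, // signed, so pre-simulation events (e.g., tick -1000) fit
    event_type: EventType,
    location: Option<[f32; 3]>, // stand-in for Option<Vector3>
    involved_entities: Vec<InvolvedEntity>,
    outcome: Option<EventOutcome>,
    significance: f32,              // 0.0..=1.0, subjective salience
    emotional_valence: Option<f32>, // -1.0 (very negative) ..= 1.0 (very positive)
    details: HashMap<String, String>, // stand-in for Map<String, AnyType>
}

struct AgentMemoryStore {
    recent_memory_entries: Vec<MemoryEntry>, // capacity-limited in practice
}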

Management (Recent & Significant, Summarization - Conceptual)

To prevent unbounded memory growth and to simulate realistic cognitive limitations, the AgentMemoryStore requires management:

  • Recency and Significance: The store will typically prioritize keeping memories that are either very recent or have a high significance score. A common strategy is to maintain a capacity-limited list (e.g., the last N significant events). When new memories are added and capacity is exceeded, the oldest or least significant memories might be discarded or “archived.”
  • Significance Decay/Update: The significance of a memory might decay over time if not reinforced by related experiences or reflection. Conversely, recalling a memory or experiencing a similar event might boost its significance.
  • Memory Pruning/Forgetting: A dedicated MemorySystem (detailed later) would be responsible for these pruning and significance-update processes, possibly run as a periodic scheduled task for the agent (a sketch of such a pruning pass follows this list).
  • Conceptual: Memory Summarization/Abstraction: For long-term learning and efficiency, the MemorySystem might conceptually engage in summarization. Multiple similar, low-significance memories (e.g., ten separate instances of successfully eating a common berry with minor positive outcomes) could eventually be abstracted or consolidated. This might lead to the strengthening of a related Belief (e.g., “Common berries are a reliable food source”) and the eventual pruning of the now-redundant individual memory entries. This keeps the active memory store focused on impactful and informative recollections.
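
A minimal sketch of such a pruning pass, assuming a hard capacity plus a recency window; the 0.5 significance floor and the parameter names are illustrative, and (timestamp, significance) tuples stand in for full MemoryEntry values:

// Sketch: keep memories that are recent or significant, then enforce a
// hard capacity by dropping the least significant entries first.
fn prune_memories(
    memories: &mut Vec<(i64, f32)>, // (timestamp, significance)
    current_tick: i64,
    recency_window: i64,
    capacity: usize,
) {
    // Pass 1: discard stale, low-significance entries outright.
    memories.retain(|&(ts, sig)| current_tick - ts <= recency_window || sig >= 0.5);
    // Pass 2: if still over capacity, keep only the most significant.
    if memories.len() > capacity {
        memories.sort_by(|a, b| b.1.total_cmp(&a.1));
        memories.truncate(capacity);
    }
}

fn main() {
    let mut mems = vec![(0, 0.9), (100, 0.1), (900, 0.2), (950, 0.05)];
    prune_memories(&mut mems, 1000, 200, 2);
    println!("{:?}", mems); // only recent or significant entries survive
}

The two passes keep the policy easy to reason about: recency protects fresh context, while significance protects formative events regardless of age.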

The AgentMemoryStore provides the raw experiential data that fuels an agent’s learning, belief formation, and nuanced decision-making in the ActionAppraisalSystem.

AgentBeliefStore Component

Beliefs represent an agent’s generalized knowledge, assumptions, and interpretations about the world, its entities, and its underlying principles. They are typically formed from patterns in memory, direct instruction, or innate predispositions, and they play a crucial role in shaping an agent’s goals, plans, and appraisals of potential actions. Like the AgentMemoryStore, an agent typically possesses a single AgentBeliefStore component that manages a collection of individual BeliefEntry structures.

Internal BeliefEntry Structure

Each BeliefEntry encapsulates a single piece of generalized knowledge or a conviction held by the agent.

  • id: UniqueId (A unique identifier for this belief entry).
  • subject_identifier: Union<EntityId, TypeId, ConceptId> (Defines what the belief is about. This could be a specific instance of an entity (e.g., PlayerCharacter_Alice), a type or category of entity/object (e.g., WolfPredator_Type, RedBerryBush_Type), or an abstract concept (e.g., Concept_Trustworthiness, Concept_Betrayal)).
  • property: Enum_BeliefProperty (Specifies the aspect of the subject_identifier that the belief pertains to, e.g., IS_SAFE, IS_EDIBLE, IS_HOSTILE, IS_RELIABLE_INFO_SOURCE, HAS_HIGH_SATIATION_VALUE, LEADS_TO_DANGER).
  • value: AnyType (The actual content or assertion of the belief. This can be a boolean (e.g., for IS_EDIBLE: true), a float (e.g., for a belief about “Likelihood of Attack”: 0.75), an enum (e.g., for “FactionStanding”: FRIENDLY), or even a reference to another ConceptId. The interpretation of value is context-dependent based on the property).
  • strength: float (Normalized 0.0 to 1.0. Represents how strongly or deeply ingrained this belief is. High strength beliefs are more resistant to change, even in the face of conflicting evidence).
  • confidence: float (Normalized 0.0 to 1.0. Represents the agent’s certainty in the accuracy or truthfulness of this belief. A belief can be strongly held (high strength) but have low confidence if, for example, it’s based on old or questionable information).
  • last_updated_tick: int (The simulation tick when this belief was last formed, reinforced, or significantly challenged).
  • source_type: Enum_BeliefSource (Indicates the origin of the belief, e.g., DIRECT_EXPERIENCE (from memory patterns), HEARSAY (told by another agent), DEDUCTION (inferred from other beliefs), INNATE (part of initial agent state), OBSERVATION_OF_OTHERS).
  • _supporting_memory_ids: Option<List<MemoryEntry.id>> (Conceptually, for future refinement: a list of IDs from AgentMemoryStore that provide evidence for this belief).
  • _conflicting_memory_ids: Option<List<MemoryEntry.id>> (Conceptually, for future refinement: memory IDs that contradict this belief).

The AgentBeliefStore component itself would contain a list or other suitable collection of these BeliefEntry structures.

// Component: AgentBeliefStore
// --- Internal Data ---
// current_beliefs: List<BeliefEntry> // Example internal storage
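
A matching Rust sketch, with the same caveats as the memory sketch earlier (trimmed enum variant lists; BeliefValue is an illustrative stand-in for AnyType):

// Sketch: BeliefEntry and AgentBeliefStore, mirroring the field list
// above. All names are illustrative placeholders.
type UniqueId = u64;
type EntityId = u64;
type MemoryId = u64;

enum SubjectIdentifier { Entity(EntityId), Type(String), Concept(String) }
enum BeliefProperty { IsSafe, IsEdible, IsHostile, IsReliableInfoSource, HasHighSatiationValue, LeadsToDanger }
enum BeliefValue { Bool(bool), Number(f64), Label(String) } // stand-in for AnyType
enum BeliefSource { DirectExperience, Hearsay, Deduction, Innate, ObservationOfOthers }

struct BeliefEntry {
    id: UniqueId,
    subject_identifier: SubjectIdentifier,
    property: BeliefProperty,
    value: BeliefValue,
    strength: f32,   // 0.0..=1.0: how deeply ingrained the belief is
    confidence: f32, // 0.0..=1.0: certainty in its accuracy
    last_updated_tick: i64,
    source_type: BeliefSource,
    _supporting_memory_ids: Option<Vec<MemoryId>>,  // conceptual, future refinement
    _conflicting_memory_ids: Option<Vec<MemoryId>>, // conceptual, future refinement
}

struct AgentBeliefStore {
    current_beliefs: Vec<BeliefEntry>,
}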

The interplay between value, strength, and confidence is key. For example, an agent might have: BeliefEntry(subject_identifier=TypeId("ShadowyFigure_Type"), property=IS_DANGEROUS, value=true, strength=0.9, confidence=0.5). This means the agent strongly believes (strength 0.9) that shadowy figures are dangerous, perhaps due to an innate fear or a single impactful but unverified story (hence the low confidence of 0.5). This would likely lead to cautious behavior (high strength dictates action tendency), but the low confidence makes the agent more receptive to new evidence that could alter this belief than it would be at 0.9 confidence.
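
One way to make that receptiveness concrete is to scale the impact of new evidence by 1.0 - confidence. The update rule below is a sketch of this idea, not a committed design; apply_evidence and its 0.2 base step are hypothetical:

// Sketch: a low-confidence belief moves further on new evidence than a
// high-confidence one, independent of how strongly it is held.
fn apply_evidence(strength: f32, confidence: f32, supports: bool) -> (f32, f32) {
    let receptiveness = 1.0 - confidence; // low confidence => bigger updates
    let step = 0.2 * receptiveness;       // 0.2 is an arbitrary base step
    let new_strength = if supports { (strength + step).min(1.0) } else { (strength - step).max(0.0) };
    let new_confidence = if supports { (confidence + step).min(1.0) } else { (confidence - step).max(0.0) };
    (new_strength, new_confidence)
}

fn main() {
    // The ShadowyFigure belief above: strength 0.9, confidence 0.5.
    // Counter-evidence shifts it noticeably...
    println!("{:?}", apply_evidence(0.9, 0.5, false)); // ≈ (0.8, 0.4)
    // ...whereas at confidence 0.9 the same evidence barely registers.
    println!("{:?}", apply_evidence(0.9, 0.9, false)); // ≈ (0.88, 0.88)
}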

Management and Formation (Conceptual)

Managing the AgentBeliefStore involves several conceptual processes, likely handled by a BeliefSystem or a broader CognitiveSystem. For this draft, the focus is on laying the groundwork:

  • Formation Focus for this Draft:
    • From Memory Patterns (Primary): The core mechanism envisioned involves one or more “Pattern Detector” sub-systems analyzing an agent’s AgentMemoryStore for recurring patterns, associations, and outcomes. Consistent correlations (e.g., “every time I ate RedBerries, my SustenanceNeed increased significantly and the outcome was POSITIVE”) can lead to the formation or reinforcement of BeliefEntrys (e.g., BeliefEntry(subject_identifier=TypeId("RedBerry_Type"), property=HAS_HIGH_SATIATION_VALUE, value=true)). This process may involve thresholds (e.g., pattern observed N times) to form a new belief, with initial strength and confidence based on the consistency and significance of supporting memories (a minimal pattern-detector sketch follows this list).
    • From Hearsay/Communication (Secondary): When an agent receives information from another agent, it might form a new BeliefEntry with source_type=HEARSAY. The initial strength and confidence of such beliefs would ideally depend on the perceived trustworthiness of the informant (which itself could be represented by another belief).
  • Reinforcement/Challenge Focus for this Draft:
    • When new MemoryEntrys are recorded that align with an existing belief, that belief’s strength and/or confidence may increase.
    • Memories conflicting with an existing belief may decrease its strength and/or confidence. Significant or repeated contradictions might lead to the belief being heavily weakened, or (conceptually for future refinement) the formation of a new, competing belief.
  • Future Considerations (Explicitly Deferred for this Draft):
    • Advanced deduction/inference (complex logical chains from existing beliefs).
    • Robust conflict resolution between diametrically opposed strong beliefs.
    • Belief decay mechanisms (beliefs weakening over time if not reinforced).
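
Returning to the primary formation path, here is a minimal pattern-detector sketch. The (food_type_id, outcome_was_positive) input representation, the min_observations threshold, and the mapping from evidence consistency to initial strength/confidence are all illustrative assumptions:

// Sketch: scan eating memories for one food type, and once a pattern
// has been observed at least N times, emit a candidate belief whose
// initial strength/confidence scale with how consistent the evidence
// was. The 0.5 * consistency mapping is an arbitrary placeholder.
use std::collections::HashMap;

fn detect_food_beliefs(
    memories: &[(String, bool)], // (food_type_id, outcome_was_positive)
    min_observations: usize,     // the "pattern observed N times" threshold
) -> Vec<(String, f32, f32)> {   // (food_type_id, strength, confidence)
    let mut tally: HashMap<&str, (usize, usize)> = HashMap::new(); // (positive, total)
    for (food, positive) in memories {
        let e = tally.entry(food.as_str()).or_insert((0, 0));
        e.1 += 1;
        if *positive { e.0 += 1; }
    }
    tally.into_iter()
        .filter(|(_, (_, total))| *total >= min_observations)
        .map(|(food, (pos, total))| {
            let consistency = pos as f32 / total as f32;
            (food.to_string(), 0.5 * consistency, 0.5 * consistency)
        })
        .collect()
}

fn main() {
    let mems = vec![
        ("RedBerryBush_Type".to_string(), true),
        ("RedBerryBush_Type".to_string(), true),
        ("RedBerryBush_Type".to_string(), true),
    ];
    for (food, strength, confidence) in detect_food_beliefs(&mems, 3) {
        println!("{food}: strength {strength:.2}, confidence {confidence:.2}");
    }
}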

The AgentBeliefStore, even with these foundational formation/update mechanisms, provides a dynamic knowledge base that heavily influences how an agent interprets its perceptions and appraises its options. The “Pattern Detector” concept allows for modular and extensible learning capabilities.

Agent Initial State

To ensure agents can exhibit meaningful behavior from their inception within the simulation, and to provide a concrete basis for demonstrating the cognitive loop, each agent begins with a defined initial state. This state includes starting levels for its needs, a foundational set of beliefs, and potentially some key initial memories. This initial configuration can be thought of as an “Agent Archetype” or template, which in the broader context of ATET, might be influenced by an Incarnation’s nature, inherited Eidos, or the Tapestry’s characteristics.

For the purpose of this document and the illustrative “toy example” in Section 5, we will define a simple initial state.

Example Pre-seeded Needs, Beliefs, and Memories

Let’s assume our example agent is a small, herbivorous creature. Its initial state upon creation at simulation_tick = 0 might be:

  • Initial Need Levels:

    • SustenanceNeed:
      • current_value_at_last_update: 0.7 (Partially satiated, but will need food eventually)
      • last_updated_tick: 0
      • decay_rate_per_tick: (A default value, e.g., 0.0001)
      • goal_trigger_threshold: 0.4
      • critical_threshold: 0.1
      • _scheduled_goal_trigger_event_id: (Calculated and set based on above values)
      • _scheduled_critical_event_id: (Calculated and set)
    • (Other needs like SafetyNeed, RestNeed could also be initialized here if they were part of the agent’s component set).
  • Pre-seeded BeliefEntrys in AgentBeliefStore: These represent innate knowledge or very basic, pre-learned understanding crucial for immediate survival or interaction.

    1. BeliefEntry (Edible Food Source):
      • id: (Unique)
      • subject_identifier: TypeId("RedBerryBush_Type")
      • property: IS_EDIBLE
      • value: true
      • strength: 0.6 (Moderately strong belief)
      • confidence: 0.5 (Moderately confident, open to experience)
      • last_updated_tick: 0
      • source_type: INNATE
    2. BeliefEntry (Known Food Property):
      • id: (Unique)
      • subject_identifier: TypeId("RedBerryBush_Type")
      • property: HAS_MODERATE_SATIATION_VALUE
      • value: true (Could also be a float representing expected value)
      • strength: 0.5
      • confidence: 0.4
      • last_updated_tick: 0
      • source_type: INNATE
    3. BeliefEntry (Known Threat):
      • id: (Unique)
      • subject_identifier: TypeId("WolfPredator_Type")
      • property: IS_DANGEROUS
      • value: true
      • strength: 0.8 (Strong innate caution)
      • confidence: 0.7 (Reasonably confident in this danger)
      • last_updated_tick: 0
      • source_type: INNATE
  • Pre-seeded MemoryEntrys in AgentMemoryStore (Optional for this example): For an agent archetype equivalent to a “newborn,” the AgentMemoryStore might start empty, with all initial understanding encoded purely as INNATE beliefs. Alternatively, to illustrate how early experiences shape beliefs when knowledge is not innate, one could include:

    • Example Conceptual Initial Memory (if not using innate belief for edibility):
      • MemoryEntry(id: ..., timestamp: -1000 [simulating a past event], event_type: ATE_FOOD, location: ..., involved_entities: [{entity_id: SomeRedBerryBushInstance, role: FOOD_SOURCE_CONSUMED}], outcome: POSITIVE_MODERATE, significance: 0.4, details: {"food_type_id": "RedBerryBush_Type"})

For our current toy example walkthrough in Section 5, relying on the INNATE beliefs defined above will be sufficient to bootstrap behavior without needing to pre-populate many memories.

This initial state provides the agent with immediate, albeit simple, biases and knowledge to begin interacting with the world. Its Needs will drive it, its Beliefs will guide its appraisal of options, and new Memories formed by its actions will allow it to learn and refine these initial understandings. The process of populating these components occurs when the agent entity is first instantiated in the simulation.
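
To tie the pieces together, here is a sketch of seeding this initial state at tick 0. The ECS wiring and scheduler calls are omitted, InnateBelief is a pared-down stand-in for BeliefEntry, and the projected event ticks reuse the linear-decay formula from the Needs section; all numbers come from the lists above:

// Sketch: seeding the example agent at tick 0. seed_example_agent and
// InnateBelief are hypothetical names for this illustration.
struct InnateBelief {
    subject: &'static str,
    property: &'static str,
    strength: f32,
    confidence: f32,
}

fn seed_example_agent() -> (u64, u64, Vec<InnateBelief>) {
    let (value, decay, goal_t, crit_t) = (0.7_f32, 0.0001_f32, 0.4_f32, 0.1_f32);
    // t = (value - threshold) / decay, rounded up to a whole tick.
    let goal_tick = ((value - goal_t) / decay).ceil() as u64;     // 3000
    let critical_tick = ((value - crit_t) / decay).ceil() as u64; // 6000
    let beliefs = vec![
        InnateBelief { subject: "RedBerryBush_Type", property: "IS_EDIBLE", strength: 0.6, confidence: 0.5 },
        InnateBelief { subject: "RedBerryBush_Type", property: "HAS_MODERATE_SATIATION_VALUE", strength: 0.5, confidence: 0.4 },
        InnateBelief { subject: "WolfPredator_Type", property: "IS_DANGEROUS", strength: 0.8, confidence: 0.7 },
    ];
    (goal_tick, critical_tick, beliefs)
}

fn main() {
    let (goal_tick, critical_tick, beliefs) = seed_example_agent();
    println!("goal event at tick {goal_tick}, critical event at tick {critical_tick}");
    println!("{} innate beliefs seeded", beliefs.len());
}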