The Technical Depot details foundational architecture concepts for creating intelligent, autonomous agents within the universe of ATET. The primary goal is to outline a system capable of simulating agents that are:

  • Reactive: Able to perceive their environment and respond to changes and events in a timely and contextually appropriate manner.
  • Goal-Driven: Motivated by internal needs and capable of forming and pursuing goals to satisfy those needs.
  • Learning (Rudimentary): Able to form simple memories of events and their outcomes, and gradually develop beliefs that shape their future behavior and decision-making.

This draft focuses on a “vertical slice” of agent intelligence, establishing the core components, systems, and data flow for a single, relatively simple agent. This provides a robust, extensible framework upon which more complex behaviors, needs, social interactions, and the nuanced philosophical aspects of ATET’s identity dynamics can be layered in future development stages. The principles established here aim to serve as the bedrock for the rich, emergent narratives envisioned for the game.

Engine Overview

  • Highly data-oriented ECS architecture
  • Scheduled and interruption-based event systems
  • Hard simulation tick rate (60 T/s target)
  • Locally deterministic simulation
  • Spatial hierarchy (parent-child relations) for all entities
  • Spatial threading of systems (based on POIs)
  • Universal Scheduler overseeing spatial threading & message passing between POIs
  • AI actors defined by multiple components and systems
  • AI director overseeing entities representing story elements

Alignment with ECS and Scheduled/Event-Driven Architecture

The agent simulation detailed herein is designed with a strong affinity for an Entity Component System (ECS) architecture. This approach promotes:

  • Data-Oriented Design: Separating agent state (data in components) from behavior (logic in systems) for clarity, performance, and modularity.
  • Composability: Agents are defined by the collection of components they possess, allowing for diverse agent types and capabilities.
  • Efficient Iteration: Systems operate on tightly packed arrays of relevant components.

Furthermore, this design heavily leverages a Scheduled/Event-Driven Architecture. This means:

  • Efficiency at Scale: Instead of polling every agent for every possible state change every tick, agents schedule self-updates for significant future events (e.g., a need reaching a critical threshold, the planned completion of an action).
  • Reactivity through Interrupts: External events or significant changes in the environment can interrupt an agent’s current plan or scheduled updates, forcing re-evaluation and adaptation.
  • Decoupled Systems: Systems often communicate and trigger subsequent logic via events, rather than direct function calls, promoting modularity and cleaner dependencies.
  • Support for Distributed Simulation: This model is conducive to the broader architectural goal of a large, potentially distributed simulation where different regions or “Points of Interest” (POIs) might operate with a degree of autonomy, communicating via messages (events).

The “hard tick rate” philosophy for the overall simulation ensures a consistent temporal baseline against which all scheduled events and system updates are processed.

Overview of the Agent’s Cognitive Loop

The core behavior of an agent emerges from a continuous cognitive loop, which can be broadly categorized into the following phases. These phases are not always strictly sequential for all aspects of an agent’s cognition but represent the general flow of information and decision-making:

  1. Sense: The agent perceives its immediate environment (e.g., entities, objects, threats, resources) and retrieves relevant information from its internal AgentMemoryStore and AgentBeliefStore. This phase updates its internal representation of the current world state.
  2. Motivate: Internal Needs (e.g., sustenance, safety) are assessed. If needs are unmet or critical thresholds are approached, Goals are generated or existing ones are prioritized. This phase determines what the agent wants to achieve.
  3. Plan: For the currently selected Goal, the agent derives a set of PotentialActions that could lead to its satisfaction. This involves considering available knowledge about the environment and its own capabilities.
  4. Appraise: Each PotentialAction is evaluated based on a variety of criteria, including expected benefit, associated risks, costs (time, energy), likelihood of success, and consistency with the agent’s Beliefs and past Memory. This “opinion forming” step results in an appraisal score for each action.
  5. Act: The agent selects the PotentialAction with the most favorable appraisal and commits to it. This involves instantiating concrete action components (e.g., MoveToTargetAction, PerformEatAction) which are then processed by dedicated action execution systems over one or more ticks.
  6. Learn: The outcome of the executed action (success, failure, unexpected consequences) is processed. This new experience is recorded in the AgentMemoryStore and can lead to the formation of new Beliefs or the reinforcement/modification of existing ones in the AgentBeliefStore. Needs are updated, and goals may be resolved or re-evaluated.

This Sense-Motivate-Plan-Appraise-Act-Learn cycle drives the agent’s moment-to-moment behavior, allowing it to adapt to changing circumstances and (over time) exhibit more nuanced and seemingly intelligent responses based on its accumulated experiences.
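
As a minimal illustration of the cycle above (all names here are hypothetical, not engine types), the six phases can be modeled as a state machine that advances once per decision pass:

```rust
// Illustrative sketch only: the cognitive loop as a cyclic phase machine.
// `CognitivePhase` is a hypothetical name, not an engine type.
#[derive(Debug, Clone, Copy, PartialEq)]
enum CognitivePhase {
    Sense,
    Motivate,
    Plan,
    Appraise,
    Act,
    Learn,
}

impl CognitivePhase {
    // Each phase hands off to the next; Learn wraps back around to Sense,
    // closing the loop described in the text.
    fn next(self) -> CognitivePhase {
        use CognitivePhase::*;
        match self {
            Sense => Motivate,
            Motivate => Plan,
            Plan => Appraise,
            Appraise => Act,
            Act => Learn,
            Learn => Sense,
        }
    }
}
```

In practice the phases need not run in lockstep for every aspect of cognition, as noted above; the machine simply captures the default flow.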


Before detailing the agent-specific components and systems, it’s essential to outline the global simulation constructs and environmental factors that provide the context for agent behavior. These systems are typically managed at a higher level than individual agents but directly impact how agents perceive, interact, and schedule their operations.

Simulation Tick Rate and Event Scheduling

The entire simulation operates on a hard simulation tick rate (target: 60 ticks per second, TPS). This provides a discrete, consistent heartbeat for all state changes and event processing. All durations, decay rates, and scheduled events within the simulation are ultimately measured in these ticks.

Role of the Universal Top-Level Scheduler

A global, universal scheduler is responsible for managing and dispatching all timed events within the simulation. This system maintains a time-ordered queue of events. Each tick, it:

  • Identifies all events scheduled to fire on the current tick.
  • Dispatches these events to the relevant systems or entities.
  • Handles the registration of new events generated by systems or agents, placing them into the queue according to their designated future tick.

This scheduler is critical for the event-driven nature of the agent architecture, enabling deferred processing, scheduled self-updates for agents, and the resolution of actions that span multiple ticks.

For distributed simulations operating across different logical threads or “Points of Interest” (POIs), maintaining determinism during inter-POI communication is paramount. If POI A (on Thread 1) sends a message/event intended for POI B (on Thread 2) that should be processed in the same simulation tick, complex locking or synchronization would be needed to ensure a deterministic order if multiple such messages arrive simultaneously from various sources.

To simplify this and guarantee a deterministic outcome, inter-POI messages/events are typically buffered and processed with a minimum 1-tick delay. This means a message sent from POI A during tick T will be queued and made available for processing by POI B at the beginning of tick T+1 (or later, if the message itself specifies a future delivery tick). This delay allows all messages destined for a POI for tick T+1 to be collected, sorted by a deterministic rule (e.g., source POI ID, message type, priority), and then processed in that fixed order, thus preventing race conditions and ensuring deterministic state changes across the distributed simulation.

The top-level scheduler may coordinate with or delegate to regional schedulers to manage these message queues and enforce this ordered processing.
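
A minimal sketch of this buffering scheme, assuming a simplified message type and the example sort key given above (source POI ID, then message type, then priority):

```rust
use std::collections::BTreeMap;

// Sketch of deterministic inter-POI message delivery. `PoiMessage` and its
// fields are illustrative assumptions. Messages sent during tick T are
// delivered no earlier than T + 1, and are sorted by a fixed key so every
// thread processes them in the same order.
#[derive(Debug, Clone, PartialEq)]
struct PoiMessage {
    source_poi: u32,
    msg_type: u16,
    priority: u8,
    payload: String,
}

#[derive(Default)]
struct InterPoiQueue {
    // delivery tick -> messages collected for that tick
    buffered: BTreeMap<u64, Vec<PoiMessage>>,
}

impl InterPoiQueue {
    // Enqueue for a future tick, enforcing the minimum 1-tick delay.
    fn send(&mut self, current_tick: u64, delay: u64, msg: PoiMessage) {
        let delivery = current_tick + delay.max(1);
        self.buffered.entry(delivery).or_default().push(msg);
    }

    // Drain all messages due at `tick`, in a deterministic order.
    fn drain_for_tick(&mut self, tick: u64) -> Vec<PoiMessage> {
        let mut msgs = self.buffered.remove(&tick).unwrap_or_default();
        // Fixed sort rule: source POI ID, then message type, then priority.
        msgs.sort_by_key(|m| (m.source_poi, m.msg_type, m.priority));
        msgs
    }
}
```

Because the sort key is a total order over message metadata, two POIs draining the same queue state always observe the same sequence, which is the property the 1-tick delay buys.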

Agent-Specific Scheduled Events

Agents heavily rely on the Universal Scheduler to manage their internal state progression and action timing without requiring constant polling. Examples include:

  • Need Threshold Events: When a Need component (e.g., SustenanceNeed) projects that its value will cross a critical or goal-triggering threshold at a future tick, an event is scheduled for that agent at that specific tick.
  • Action Completion Events: Actions that take time (e.g., movement, crafting, performing an interaction) will schedule an event for their anticipated completion (or failure/blockage) tick.
  • Reflection/Maintenance Events: Agents might schedule periodic events for cognitive processes like memory consolidation or belief review.
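
The need-threshold case can be sketched as follows, assuming a linearly decaying need (the struct and function names are illustrative, not engine API):

```rust
// Sketch: projecting the tick at which a linearly decaying need crosses a
// threshold, so the agent can schedule a single wake-up event instead of
// polling every tick. Field and function names are assumptions.
struct SustenanceNeed {
    value: f32,          // current satiation, decays toward 0
    decay_per_tick: f32, // linear decay rate
}

// Returns the first tick at or after `current_tick` when `value` will
// reach `threshold`, or None if it never will (no decay).
fn project_threshold_tick(need: &SustenanceNeed, threshold: f32, current_tick: u64) -> Option<u64> {
    if need.value <= threshold {
        return Some(current_tick); // already at/below threshold: fire now
    }
    if need.decay_per_tick <= 0.0 {
        return None; // need never decays, so never schedule
    }
    let ticks = ((need.value - threshold) / need.decay_per_tick).ceil() as u64;
    Some(current_tick + ticks)
}
```

If the decay rate later changes (e.g., the agent starts sprinting), the previously scheduled event would simply be cancelled and re-projected.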

Spatial Hashing & Perception Basics

Efficiently determining what an agent can perceive is crucial. This is facilitated by a global spatial indexing system.

Position Component

All entities that exist in the game world and can be perceived or interact spatially (including agents, food sources, threats, environmental features) possess a Position component:

  • x: float
  • y: float
  • z: float (if 3D)

This component is updated by movement systems and is read by the spatial hashing system.

Implicit/Explicit SensoryCapabilities

Agents possess sensory capabilities that define what they can perceive from their environment. For this initial draft, this might be:

  • Implicit: A fixed sensory radius or cone definition used by perception systems when processing a specific agent.
  • Explicit (Future): A dedicated SensoryCapabilities component on the agent, detailing ranges, fields of view, and sensitivities for different senses (sight, hearing, smell), potentially also including states like “actively searching” which might modify these parameters.

Perception systems use an agent’s Position and its SensoryCapabilities to query the spatial hash.

A SpatialHashUpdateSystem (or equivalent, e.g., Quadtree/Octree manager) runs early in each tick to update the spatial index with the current positions of all relevant entities, making proximity and range queries efficient for perception and other systems.
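
A minimal uniform-grid spatial hash (2D here for brevity; all names are illustrative) shows the core idea: bucket entities by cell, then scan only the cells overlapping a query circle:

```rust
use std::collections::HashMap;

// Minimal spatial-hash sketch. Entities are bucketed by grid cell; a range
// query touches only cells overlapping the query radius. Names and the
// (id, x, y) tuple layout are assumptions for illustration.
struct SpatialHash {
    cell_size: f32,
    cells: HashMap<(i32, i32), Vec<(u64, f32, f32)>>, // cell -> (entity, x, y)
}

impl SpatialHash {
    fn new(cell_size: f32) -> Self {
        SpatialHash { cell_size, cells: HashMap::new() }
    }

    fn cell_of(&self, x: f32, y: f32) -> (i32, i32) {
        ((x / self.cell_size).floor() as i32, (y / self.cell_size).floor() as i32)
    }

    fn insert(&mut self, entity: u64, x: f32, y: f32) {
        let cell = self.cell_of(x, y);
        self.cells.entry(cell).or_default().push((entity, x, y));
    }

    // All entities within `radius` of (x, y).
    fn query_range(&self, x: f32, y: f32, radius: f32) -> Vec<u64> {
        let min = self.cell_of(x - radius, y - radius);
        let max = self.cell_of(x + radius, y + radius);
        let mut hits = Vec::new();
        for cx in min.0..=max.0 {
            for cy in min.1..=max.1 {
                if let Some(bucket) = self.cells.get(&(cx, cy)) {
                    for &(id, ex, ey) in bucket {
                        let (dx, dy) = (ex - x, ey - y);
                        // Exact distance check within candidate cells.
                        if dx * dx + dy * dy <= radius * radius {
                            hits.push(id);
                        }
                    }
                }
            }
        }
        hits
    }
}
```

The SpatialHashUpdateSystem would rebuild or incrementally update such an index each tick before perception runs.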

Environmental Entities & Components (Illustrative)

The world is populated by various entities that agents can interact with or perceive. These entities are defined by their own set of components. For the purpose of our micro-level agent simulation focusing on sustenance and basic reactivity, we’ll consider:

FoodSource Component

Attached to entities that can provide sustenance.

  • sustenance_value: float (How much it replenishes an agent’s SustenanceNeed)
  • (Also possesses a Position component)
  • (Optional: quantity, requires_interaction_type)

ThreatSource Component (Illustrative)

Attached to entities or environmental features that pose a danger to agents.

  • danger_level: float (An abstract measure of the threat)
  • threat_type: Enum_ThreatType (e.g., PREDATOR, ENVIRONMENTAL_HAZARD, HOSTILE_AGENT)
  • (Also possesses a Position component)
  • (Optional: effective_range, attack_capabilities)

The presence of a ThreatSource would typically trigger safety-oriented goals and behaviors in perceiving agents.

Awakening Lifecycle

  1. Seeding

    • Agent receives AwakeningSeed at instantiation
    • Seed may be created manually (designer) or dynamically (Director, worldgen, reincarnation)
  2. Monitoring Phase

    • Agent’s experiences checked against:
      • SymbolTag matches
      • Memory reactivation triggers
      • MythicPattern callbacks
    • Tension is accumulated if partial match is detected
  3. Scoring

    • Identity Affinity is calculated:
      fn compute_affinity(agent: &AgentState, seed: &AwakeningSeed) -> f32
    • Takes into account:
      • Tag overlap
      • Emotional weight of matching memory triggers
      • Culture/region resonance
  4. Resolution

    • If affinity ≥ threshold:
      • Awakening occurs
      • Identity reconstructed
      • Drifted tags interpolated
    • If partial:
      • Agent enters liminal state (false identity, fragmented selfhood)
    • If failure:
      • Awakening decays
      • Mythic resonance may transfer elsewhere
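
A hedged sketch of the affinity scoring in step 3: the real AgentState and AwakeningSeed are far richer, so both are reduced here to the three inputs the text names, blended with assumed tuning weights:

```rust
use std::collections::HashSet;

// Illustrative reduction of compute_affinity. `SeedLike`, `AgentLike`, and
// the 0.5/0.3/0.2 blend weights are assumptions, not canon.
struct SeedLike {
    tags: HashSet<String>,
}

struct AgentLike {
    tags: HashSet<String>,
    trigger_emotional_weight: f32, // summed weight of matched memory triggers, 0..1
    culture_resonance: f32,        // culture/region resonance, 0..1
}

fn compute_affinity(agent: &AgentLike, seed: &SeedLike) -> f32 {
    let overlap = agent.tags.intersection(&seed.tags).count() as f32;
    let denom = seed.tags.len().max(1) as f32;
    let tag_score = overlap / denom; // fraction of seed tags the agent carries
    0.5 * tag_score + 0.3 * agent.trigger_emotional_weight + 0.2 * agent.culture_resonance
}
```

The resolution step then compares this score against the awakening threshold, with the partial band mapping to the liminal state described above.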

Awakening Effects

  • Recovered memory fragments
  • Belief and goal inheritance
  • Trait or tag realignment
  • Myth pattern participation
  • Director tracking of awakened archetype

Identity Mutation Logic

fn resolve_identity_drift(seed: &AwakeningSeed, local_tags: &[SymbolTagId]) -> Vec<SymbolTagId>
  • Drift introduces narrative ambiguity
  • Ghost tags may mutate to fit cultural context
  • E.g., a war hero may reincarnate as a martyr, rebel, or tyrant depending on symbolic pressure
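
One possible reading of resolve_identity_drift, sketched with plain strings instead of SymbolTagId: ghost tags that resonate locally pass through unchanged, while the rest mutate via a substitution table (the table itself, like the war hero -> martyr example, is an assumption):

```rust
use std::collections::HashMap;

// Illustrative drift resolution. Tags present in the local symbolic
// context are kept; others are remapped to a culturally nearer tag, or
// carried over as-is if no mapping exists.
fn resolve_identity_drift(
    seed_tags: &[&str],
    local_tags: &[&str],
    mutations: &HashMap<&str, &str>,
) -> Vec<String> {
    seed_tags
        .iter()
        .map(|tag| {
            if local_tags.contains(tag) {
                tag.to_string() // tag resonates locally: keep as-is
            } else {
                // Drift: substitute under symbolic pressure, else keep the ghost tag.
                mutations.get(tag).unwrap_or(tag).to_string()
            }
        })
        .collect()
}
```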

System Modes Integration

| Mode | Seeding Source | Awakening Type | Notes |
| --- | --- | --- | --- |
| Manual Unbound | Designer | Instant | Fully scripted legacy agents |
| Manual Bound | Player legacy | Conditional | Requires player-accumulated Eidos |
| Procedural Original | Worldgen | Emergent | Primary simulation loop |
| Procedural Descendant | Director echo | Drifted | Symbolic dynasties |
| Cross-Tapestry Transfer | Crossworld resonance | Distorted | Experimental / late game |
| Director Catalyzed | Story stress relief | Forced | Injected myths |
| False Identity | Partial resonance | Incomplete | Potentially tragic arcs |
| Corrupted Resonance | Myth conflict | Scrambled | Dual-symbolic or madness outcomes |
| Delayed Emergence | Accumulated match | Retroactive | Player-driven self-discovery |

Summary

The Awakening System allows identities to emerge not just from code, but from context. Whether designed, drifted, or discovered, identity is a lived symbolic recursion, resolved not at birth but in the act of remembering who one is.


Crystallization of Symbolic Experience

Purpose

To define how memories and beliefs resolve into units of narrative meaning (Eidos Fragments): how they persist across Threads, shape identity, and fuel player interpretation. Eidos represents crystallized symbolic motifs drawn from emotionally and thematically significant experience.

Design Philosophy

  • Symbolic, Not Specific: Eidos is not a literal memory or belief but an emergent essence abstracted from them.
  • Deterministic and Efficient: Evaluation occurs in defined states (e.g. death, ritual, dream) and uses pattern-matching pipelines.
  • Player-Interpretive: Eidos is interpretable, collectible, and reflective of gameplay history.
  • Persistent Across Threads: Eidos transcends a single Incarnation and informs long-term identity and story themes.

Crystallization Pipeline

fn evaluate_eidos_candidates(agent: &AgentState) -> Vec<EidosFragment>
  1. Preselection:

    • Find recurring SymbolTag clusters in memories and beliefs
    • Filter by intensity: tag valence, emotional weight, recurrence
  2. Pattern Resolution:

    • Apply symbolic patterns from EidosPatternTable
    • Score matches for thematic coherence and symbolic tension
  3. Crystallization:

    • Emit EidosFragment if match score > threshold
    • Record provenance, formation method, and source links

Example Eidos Patterns

EidosPattern {
    id: uuid!("e001"),
    name: "Cycle of Betrayal and Redemption",
    required_tags: vec!["betrayal", "sacrifice", "remembrance"],
    forbidden_tags: vec!["chaos"],
    valence_condition: ValenceTone::Redemptive,
    result_name: "Ashes Forgiven",
}
EidosPattern {
    id: uuid!("e007"),
    name: "Shattered Self Ascends",
    required_tags: vec!["fracture", "ascension", "silence"],
    valence_condition: ValenceTone::Transcendent,
    result_name: "The Mask Beyond the Mirror",
}
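
The tag gate implied by patterns like these can be checked as below; scoring and the valence condition are omitted, and `PatternLite` is a deliberately reduced stand-in for EidosPattern:

```rust
// Sketch of matching a memory/belief tag cluster against a pattern: all
// required tags present, no forbidden tag present. Thematic-coherence
// scoring and ValenceTone checks are left out for brevity.
struct PatternLite {
    required_tags: Vec<&'static str>,
    forbidden_tags: Vec<&'static str>,
}

fn tags_match(pattern: &PatternLite, tags: &[&str]) -> bool {
    pattern.required_tags.iter().all(|t| tags.contains(t))
        && pattern.forbidden_tags.iter().all(|t| !tags.contains(t))
}
```

A full pipeline would run this gate first, then score surviving candidates for coherence and tension before emitting an EidosFragment.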

Integration Points

| System | Role of Eidos |
| --- | --- |
| Memory System | Source for emotional-symbolic material |
| Belief System | May reinforce or resist Eidos formation |
| Player UI | Postmortem presentation and meta-progression |
| Incarnation Loop | Carries prior Eidos fragments into new Threads |
| Quest Generator | Uses Eidos as seeds for higher-order motifs |
| Culture Generator | Eidos informs myth structure and societal memory |
| Director AI | Guides recurrence, echo, and symbolic tension |

Notes

  • Multiple Eidos fragments can form from a single life.
  • Fragments may evolve or combine across generations.
  • Players may selectively retain, name, or discard Eidos between Threads.
  • Some Eidos only form if specific conditions are met (e.g. conscious reflection, ritual context, inherited pattern match).

This system encodes enduring symbolic meaning at a galactic scale, grounding reincarnation and emergent myth in player and agent experience.


Symbolic Experience Encoding

Purpose

To model, store, distort, and propagate memory as symbolic, emotionally-charged fragments. Memories are not logs of events but meaning-bearing residues shaped by Faith, Fiction, and experience. This system underpins perception, belief, emergent narrative, and intergenerational Eidos inheritance.

Core Concepts

  • Memory is Subjective: Memories are not factual records, but reconstructions shaped by internal states, symbolic frameworks, and emotional weight.
  • Memories are Symbolically Tagged: All memory entries are attached to SymbolTags, representing their narrative or thematic resonance.
  • Distortion and Forgetting Are Systemic: Memory may be altered or lost over time, according to pressure from trauma, belief, ritual, or metaphysical entropy.
  • Memory Forms the Backbone of Eidos Accrual: Stored and interpreted memories are the primary source of meaningful Eidos fragments.

Data Structures

struct MemoryEvent {
    id: Uuid,
    timestamp: u64,
    subject: EntityId,
    object: Option<EntityId>,
    location: Option<WorldPosition>,
    tags: Vec<SymbolTagId>,
    valence: EmotionalValence,
    distortion: Option<DistortionProfile>,
    visibility: MemoryVisibility,
    origin: MemoryOrigin,
    notes: Option<String>,
}
enum MemoryOrigin {
    DirectExperience,
    Dream,
    Inherited(ThreadId),
    FictionalNarrative,
    RitualVision,
}
enum MemoryVisibility {
    Conscious,
    Subconscious,
    Forgotten,
}
struct DistortionProfile {
    skewed_tags: Vec<SymbolTagId>,
    altered_valence: Option<EmotionalValence>,
    source: DistortionSource,
}
enum DistortionSource {
    BeliefFilter,
    TraumaResponse,
    CulturalRepression,
    HallucinatoryState,
    MemoryCorruption,
}

Emotional Valence

MemoryEvents are ranked along a continuous emotional scale:

enum EmotionalValence {
    Despair(f32),
    Fear(f32),
    Shame(f32),
    Anger(f32),
    Curiosity(f32),
    Joy(f32),
    Awe(f32),
    Love(f32),
}

Valence can:

  • Affect recall probability
  • Shape memory distortion
  • Influence Eidos flavor
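
The recall-probability effect can be sketched as follows; the abbreviated enum and the 0.5 intensity weight are assumptions, with the f32 inside each variant treated as intensity in 0..1:

```rust
// Sketch: emotional intensity biasing recall probability. Only a subset
// of the valence variants is reproduced here for brevity.
enum EmotionalValence {
    Despair(f32),
    Fear(f32),
    Joy(f32),
    Awe(f32),
}

fn intensity(v: &EmotionalValence) -> f32 {
    match v {
        EmotionalValence::Despair(x)
        | EmotionalValence::Fear(x)
        | EmotionalValence::Joy(x)
        | EmotionalValence::Awe(x) => x.abs(),
    }
}

// Base recall chance is pushed toward 1.0 by emotional intensity; a
// flat memory recalls at its base rate.
fn recall_probability(base: f32, v: &EmotionalValence) -> f32 {
    (base + (1.0 - base) * 0.5 * intensity(v)).clamp(0.0, 1.0)
}
```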

System Functions

  • StoreMemoryEvent(event: MemoryEvent)
    Insert a new symbolic experience into an agent’s memory bank.

  • RecallMemory(agent: EntityId, filter: MemoryQuery)
    Retrieve accessible memories matching symbolic, emotional, or historical criteria.

  • DistortMemory(agent: EntityId, event: MemoryEvent, cause: DistortionSource)
    Transform memory according to belief, trauma, or other interpretive frameworks.

  • ForgetMemory(agent: EntityId, event: MemoryEvent)
    Mark memory as forgotten; optionally propagate symbolic drift into subconscious.

  • PropagateMemory(thread: ThreadId, memory: MemoryEvent)
    Recast significant events as inherited or mythologized memory in new Incarnations.

Gameplay Integration

| System | Role of Memory |
| --- | --- |
| Subjective Interface | Determines what is perceived and how it is framed |
| Belief System | Beliefs form by reconciling recurring memory patterns |
| Quest System | Quests arise from tensions rooted in remembered events |
| Eidos System | Eidos fragments are drawn from emotionally charged memory |
| Director AI | Recognizes symbolic recurrences and injects resonance |
| Dialogue & Ritual | NPCs may reference shared or contested memories |

Notes

  • Memories are not truth; they are narrative vectors.
  • Repeated recall of the same memory may change its SymbolTags or valence.
  • Some memories will only become visible or interpretable after death, during reflection.
  • Forgotten memories may re-emerge in distorted form in dreams or visions.

Director Mechanics

Purpose

The Director is not a storyteller but a resonance-oriented meta-agent that observes, amplifies, and challenges symbolic dynamics across the simulation. Its primary role is to tune the galaxy’s symbolic tensions and stillness without authorial control, ensuring long-form thematic cohesion and narrative vitality.

Core Responsibilities

  • Monitor global SymbolTag activity: tension, harmony, suppression, saturation
  • Detect symbolic dissonance or stagnation
  • Amplify unresolved or overlooked motifs
  • Inject thematic contrast only where necessary for narrative vitality
  • Preserve and challenge harmony through gravitational narrative logic

Data Structures

Symbolic Monitoring

  • Run periodically or during narrative “beats”
  • Evaluate regions and agents for:
    • Tag density and balance
    • Tag conflict co-occurrence
    • Eidos frequency and drift
    • Belief stagnation

Stillness Threshold System

fn calculate_stillness(region: &RegionSymbolicProfile) -> f32
  • Accrues over time when:
    • Tag diversity decreases
    • Contradictions resolve
    • Belief/Eidos saturation aligns
  • When Stillness exceeds threshold:
    • Region becomes a StillnessAnchor
    • Marked as a mythogenic attractor
    • Interventions are deferred or symbolic in nature
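
A hedged sketch of the stillness score: falling tag diversity, few open contradictions, and high belief/Eidos alignment all push the score toward 1.0. The profile fields and blend weights are assumptions:

```rust
// Illustrative stillness calculation. RegionSymbolicProfile is reduced to
// three normalized inputs; the 0.4/0.3/0.3 weights are tuning guesses.
struct RegionSymbolicProfile {
    tag_diversity: f32,        // 0..1, normalized distinct-tag ratio
    open_contradictions: u32,  // unresolved high-conflict tag pairs
    saturation_alignment: f32, // 0..1, belief/Eidos agreement
}

fn calculate_stillness(region: &RegionSymbolicProfile) -> f32 {
    // Contradictions decay stillness hyperbolically: one open conflict
    // halves this term, many drive it toward zero.
    let contradiction_penalty = 1.0 / (1.0 + region.open_contradictions as f32);
    let score = 0.4 * (1.0 - region.tag_diversity)
        + 0.3 * contradiction_penalty
        + 0.3 * region.saturation_alignment;
    score.clamp(0.0, 1.0)
}
```

A region whose score crosses the configured threshold would then be promoted to a StillnessAnchor as described above.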

Symbolic Tension Pressure

fn detect_tension(region: &RegionSymbolicProfile) -> Option<TensionSignal>
  • Raised when:

    • High-conflict tag pairs persist without resolution
    • Belief contradiction count exceeds tolerance
    • Eidos formation fails due to incoherence
  • May prompt resonance amplification interventions

Intervention Rules

| Mode | Behavior | Example |
| --- | --- | --- |
| Passive | Observation and motif registration only | No intervention |
| Reflective | Introduce echo motifs or symbolic inverses | A dream reflects a prior Eidos |
| Gravitational | Tension flows toward Stillness Anchors | Quests or events naturally cohere |
| Disruptive | Only in stagnation crisis | A child is born with memory of fire |

Integration

| System | Director Role |
| --- | --- |
| Memory | Seeds new symbolic motifs as dreams or traumas |
| Belief | Observes stasis, tension, drift |
| Culture | Steers long-term myth evolution |
| Eidos | Amplifies overlooked or echoing motifs |
| Incarnation | Alters seed belief/memory based on tension context |
| Quest System | Ensures symbolic continuity and contradiction |
| Player Loop | Responds to player-selected Eidos for resonance |

Emergent Effects

  • High-stillness regions attract:
    • Pilgrimage
    • Mythic recurrence
    • Symbolic amplification
  • High-tension regions become:
    • Epicenters of conflict
    • Seed beds for Eidos formation
  • Player attempts to harmonize everything may invoke:
    • Narrative saturation
    • Refracted echoes or symbolic rupture

Summary

The Director tunes the world’s symbolic shape without imposing plot. Its job is not to ensure conflict, but to ensure symbolic response. Harmony, dissonance, silence, and recurrence are all narrative states; the Director simply keeps them in motion.


Procedural generation of subjective mythos

Purpose

The Mythogenesis System is responsible for detecting, structuring, and propagating Mythic Patterns that emerge from symbolic experiences across quests, memory, belief, and Director resonance. Myths are crystallized Eidos constellations that influence culture, rituals, future worldgen, and narrative recurrence.

Mythogenesis Lifecycle

  1. Seed Detection

    • System listens for events (quest resolution, memory clusters, belief updates)
    • Detects high symbolic density and emotional resonance
  2. Symbolic Clustering

    • Correlates SymbolTags and Eidos fragments into candidate constellations
    • Checks for narrative closure and interpretability
  3. Pattern Recognition

    • Creates MythicPattern instance if:
      • Emotional weight > threshold
      • Multiple agents or regions propagate memory fragments
      • Director confirms resonance potential
  4. Myth Formation

    • Pattern stored and linked to originating region and event
    • Available for adoption by cultures, Director amplification, or future seeds
  5. Cultural Adoption

    • Myth may enter belief structures, ritual frameworks, or world explanations
    • Tracked in cultural_anchors field
  6. Recurrence and Mutation

    • Director monitors for symbolic echoes
    • Myths mutate when tags drift, Eidos are reinterpreted, or cultures diverge

Propagation Channels

| Channel | Mechanism | Result |
| --- | --- | --- |
| Director Echo | Amplifies myth in new region | Myth adopted or refracted |
| Agent Memory | Passed between Incarnations | Echoed or deformed through retelling |
| Ritual Systems | Encoded as symbolic sequences | Fixes interpretation in time |
| Quests | Used as template or symbolic resonance | Repetition with variation |
| Culture Drift | Alters tag affinity around myths | Meaning shift over generations |

Myth Interactions

  • Conflict: Opposing myths may create cultural schisms
  • Synthesis: Two myths may merge if symbolic cores align
  • Forgetting: Unadopted or unreinforced myths decay over time
  • Resurrection: Director may revive dormant myths when tags re-emerge

Integration Points

| System | Role in Mythogenesis |
| --- | --- |
| Quest | Source of closure and narrative arcs |
| Memory | Emotional + symbolic data for clustering |
| Belief | Structural reinforcement for patterns |
| Director AI | Recognizes potential myths and amplifies |
| Culture | Stores, mutates, and preserves myths |
| World Generation | Uses myths as narrative seed data |
| Incarnation Seeding | Imbues new agents with mythic resonance |

Summary

The Mythogenesis system gives Anamnesis its memory. It identifies meaningful constellations in symbolic space and condenses them into persistent cultural myths. These myths in turn shape future emergence, providing deep, thematic recurrence without the need for authored narrative.


Belief System Functions

  • FormBelief(agent, tags, origin)
    Constructs a new belief if enough reinforcing memory or influence exists.

  • EvaluateBeliefConflict(agent)
    Updates coherence and dissonance scores based on internal contradiction or memory friction.

  • InfluenceBelief(source, target, belief, method)
    Applies social or ritual pressure to update or seed belief in another agent.

  • PropagateBelief(culture, belief)
    Codifies a personal belief into a wider cultural or mythic context.

  • EvolveBelief(belief)
    Gradually mutates or drifts belief tags under stress, doubt, or symbolic saturation.

Belief Salience and Valence

Beliefs are weighted by:

  • Confidence: subjective strength of acceptance
  • Recurrence: how often supporting tags appear in memory
  • Consequence: emotional valence and symbolic centrality

These values impact:

  • Memory distortion and dream content
  • Agent decisions and quest interpretation
  • Resistance to social persuasion or doubt

Integration Points

| System | Role of Belief |
| --- | --- |
| Memory | Beliefs filter and distort recall |
| Quest Generation | Beliefs highlight, obscure, or mutate quest types |
| Faith System | Belief sets may crystallize into coherent Faiths |
| Director AI | Detects belief tensions and injects narrative force |
| Dialogue | NPCs assert, defend, or challenge beliefs |
| Eidos Formation | Belief-tagged memory fragments yield flavored Eidos |

Cognitive Dissonance Model

A dynamic score of internal contradiction:

fn calculate_dissonance(belief_set: &BeliefSet, memory: &[MemoryEvent]) -> f32

High dissonance may:

  • Trigger belief mutation or inversion
  • Suppress or distort memories
  • Lead to symbolic breakdown or psychic event
  • Enable spiritual transformation (e.g., ascension, fall)
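
One way to approximate the dissonance function above is a pairwise contradiction ratio over the belief set; `BeliefLite` and its fields are assumptions standing in for the real BeliefSet, and memory friction is omitted for brevity:

```rust
// Illustrative dissonance score: the fraction of belief pairs that
// declare one of each other's tags contradictory.
struct BeliefLite {
    tags: Vec<&'static str>,
    contradicts: Vec<&'static str>, // tags this belief cannot coexist with
}

fn calculate_dissonance(beliefs: &[BeliefLite]) -> f32 {
    let mut conflicts = 0u32;
    let mut pairs = 0u32;
    for i in 0..beliefs.len() {
        for j in (i + 1)..beliefs.len() {
            pairs += 1;
            // A pair clashes if either belief forbids a tag the other holds.
            let clash = beliefs[i].contradicts.iter().any(|t| beliefs[j].tags.contains(t))
                || beliefs[j].contradicts.iter().any(|t| beliefs[i].tags.contains(t));
            if clash {
                conflicts += 1;
            }
        }
    }
    if pairs == 0 { 0.0 } else { conflicts as f32 / pairs as f32 }
}
```

Crossing a dissonance threshold would then trigger the mutation, suppression, or transformation outcomes listed above.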

Notes

  • Beliefs are mutable but sticky; mutation is more likely under extreme stress or ritual exposure.
  • Faiths encode beliefs with social and cultural force, shaping generations of Incarnations.
  • Tag-level mutation in belief systems leads to myth drift and civilizational divergence.

Sense Phase Systems

The Sense Phase is the agent’s gateway to understanding its environment and its own internal state changes. Systems in this phase are responsible for gathering raw data from the external world and processing internal or external events that inform the agent’s awareness. This information becomes the basis for memory formation, belief updates, and subsequent decision-making.

PerceptionSystem

The PerceptionSystem is responsible for simulating an agent’s ability to detect relevant entities, objects, and phenomena in its immediate surroundings.

  • Responsibility:
    • To identify entities within an agent’s sensory capabilities (defined by range, cones, or other sensory parameters).
    • To update a direct representation of the agent’s current understanding of its immediate environment.
    • To detect significant changes in perception (entities entering/leaving range, significant state changes in perceived entities) and, if subscribers exist, dispatch events for these changes.
  • Inputs:
    • Agent’s Position component.
    • Agent’s SensoryCapabilities (e.g., range, field of view, specific sense modalities).
    • The global SpatialHash (or equivalent spatial index) for efficient proximity queries.
    • The state of the agent’s CurrentPercepts component from the previous tick (facilitated by a double-buffering state management system for the engine, allowing access to current_tick_state and previous_tick_state).
  • Process:
    1. For each active agent, the PerceptionSystem queries the SpatialHash using the agent’s Position and SensoryCapabilities to get a list of potentially perceivable entities.
    2. This list is filtered (e.g., for line-of-sight, occlusion, specific entity types the agent is currently attuned to).
    3. The agent’s CurrentPercepts component is updated with this new set of perceived entities and their relevant observed states (e.g., position, type).
      // Component: CurrentPercepts
      // Stores entities currently within reliable sensory range.
      perceived_entities: List<{
          entity_id: EntityId,
          entity_type: TypeId,
          position: Vector3,
          // ... other directly observable details like hostility status if apparent
          _internal_seen_this_tick_flag: bool // Used for change detection
      }>
    4. Change Detection: The system compares the newly updated CurrentPercepts with the CurrentPercepts state from the previous tick.
    5. Event Dispatch (Conditional): If significant differences are found (e.g., a new ThreatSource enters range, a known FoodSource disappears), and if other systems have “subscribed” to be notified of such perceptual changes for this agent, the PerceptionSystem generates and dispatches specific events via the global Event Bus. Examples:
      • NewEntityInPerceptionRangeEvent(perceiving_agent_id, new_entity_id, new_entity_type, ...)
      • EntityLeftPerceptionRangeEvent(perceiving_agent_id, left_entity_id, ...)
      • PerceivedEntityStateChangedEvent(perceiving_agent_id, entity_id, changed_property, ...)
  • Scheduling: The PerceptionSystem typically runs every tick for agents that are active and aware. Its frequency might be reduced for agents in low-activity states or unobserved POIs.

This hybrid approach allows an agent’s internal cognitive systems to directly query the CurrentPercepts component for an immediate understanding of its surroundings, while also enabling other reactive systems to be driven by events triggered by significant perceptual changes.
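
The change-detection step (4) amounts to diffing the previous tick's percept set against the current one; this sketch simplifies EntityId to u64 and reduces the events to an enum whose variants echo the event names above:

```rust
use std::collections::HashSet;

// Sketch of percept diffing between the previous and current tick, as
// enabled by the double-buffered state described above.
#[derive(Debug, PartialEq)]
enum PerceptionChange {
    Entered(u64), // would drive NewEntityInPerceptionRangeEvent
    Left(u64),    // would drive EntityLeftPerceptionRangeEvent
}

fn diff_percepts(previous: &HashSet<u64>, current: &HashSet<u64>) -> Vec<PerceptionChange> {
    let mut changes: Vec<PerceptionChange> = current
        .difference(previous)
        .map(|&id| PerceptionChange::Entered(id))
        .collect();
    changes.extend(previous.difference(current).map(|&id| PerceptionChange::Left(id)));
    changes
}
```

State-change events for entities present in both sets would require comparing observed properties as well, not just membership.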

MemoryRecordingSystem

This system is responsible for converting significant experiences – whether from direct perception, the outcomes of an agent’s own actions, or other witnessed events – into persistent MemoryEntry structures within the agent’s AgentMemoryStore.

  • Responsibility:
    • To process various types of incoming events that represent potentially memorable experiences for an agent.
    • To construct detailed MemoryEntry structures based on these events.
    • To add these MemoryEntrys to the relevant agent’s AgentMemoryStore, managing storage limits and significance.
  • Inputs (Listening to Events on the Event Bus):
    • PerceptionChangeEvents (e.g., NewEntityInPerceptionRangeEvent, PerceivedEntityStateChangedEvent) generated by the PerceptionSystem, especially those flagged as novel or highly salient (e.g., first sighting of a threat).
    • InteractionOutcomeEvent (generated by action execution systems in the Act Phase, detailing the result of an agent’s actions like eating, attacking, fleeing, etc.).
    • DamageTakenEvent, HealthRestoredEvent, NeedSatisfiedEvent, GoalCompletedEvent, GoalFailedEvent.
    • Potentially, CommunicationReceivedEvent (if the content is deemed memorable).
  • Process:
    1. For each relevant incoming event pertaining to an agent:
    2. Determine Memorability: Not every minor event warrants a memory. A filtering step based on event type, intensity, or the agent’s current state (e.g., high alert) might occur.
    3. Calculate Significance & Emotional Valence: Based on the event’s nature, its outcome, its relevance to the agent’s current needs and goals, and possibly the agent’s personality traits, a significance score and emotional_valence are determined for the potential memory.
    4. Construct MemoryEntry: A MemoryEntry is created, populating fields like timestamp, event_type, location, involved_entities, outcome, calculated significance and emotional_valence, and any specific details derived from the event payload.
    5. Store Memory: The new MemoryEntry is added to the agent’s AgentMemoryStore.recent_memory_entries list. The AgentMemoryStore’s internal logic (or this system) handles pruning/forgetting based on capacity, recency, and significance (as detailed in Section 3.2.2).
  • Scheduling: The MemoryRecordingSystem processes events that have occurred within the current tick or are dispatched for the current tick. It effectively translates transient event data into more persistent agent knowledge.
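The construct-and-store steps can be sketched as below. The `MemoryEntry` fields follow those named in the text; the capacity-based pruning rule is a simplified stand-in for the recency/significance logic referenced in Section 3.2.2.

```python
from dataclasses import dataclass

@dataclass
class MemoryEntry:
    timestamp: int
    event_type: str
    significance: float        # computed from event nature, needs, goals
    emotional_valence: float   # negative = aversive, positive = rewarding

def record_memory(store: list, entry: MemoryEntry, capacity: int = 5) -> list:
    """Append a memory; when over capacity, forget the least significant entry."""
    store.append(entry)
    if len(store) > capacity:
        store.remove(min(store, key=lambda m: m.significance))
    return store
```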

Together, the PerceptionSystem and MemoryRecordingSystem form the primary input pathway for an agent’s cognitive processes, enabling it to be aware of its current situation and build a history of its experiences.

Motivate Phase Systems

Once an agent has a sense of its environment and internal state (via perceptions and memories), the Motivate Phase systems translate this awareness, particularly the status of its fundamental Needs, into directed intentions or Goals. These goals provide the primary impetus for the agent’s subsequent planning and actions.

NeedManagementSystem

This system is responsible for overseeing the dynamic state of an agent’s various Need components. It doesn’t actively decay needs every tick per agent (as that’s handled by the projection and scheduling logic within the Need components themselves), but rather manages the events and updates related to needs crossing significant thresholds.

  • Responsibility:
    • To process NeedThresholdReachedEvents dispatched by the Universal Scheduler (e.g., when a need like SustenanceNeed is projected to cross its goal_trigger_threshold or critical_threshold).
    • To facilitate access to the current interpolated value of any Need for other systems.
    • To ensure that when a Need component’s parameters are changed by external factors (e.g., consumption of food, change in activity level affecting decay rate), its future scheduled threshold events are correctly cancelled and re-scheduled.
  • Inputs:
    • NeedThresholdReachedEvent(agent_id, need_type, threshold_type) events from the Universal Scheduler.
    • Requests from other systems to get current need values.
    • Notifications/calls from other systems when a Need component’s base values or decay rates are directly altered.
  • Process:
    1. Handling NeedThresholdReachedEvent:
      • When an event like NeedThresholdReachedEvent(agent_id, need_type: SUSTENANCE, threshold_type: GOAL_TRIGGER) is processed for an agent:
        • The system verifies the current interpolated value of the specified need_type on the agent_id.
        • If the condition is met (e.g., sustenance is indeed below goal trigger), it will typically generate a new, more specific event to trigger goal consideration, such as [NeedName]GoalCheckRequiredEvent(agent_id) (e.g., SustenanceGoalCheckRequiredEvent). This decouples need monitoring from the specifics of goal generation logic.
        • The system ensures that the next relevant threshold event for that need (e.g., the critical_threshold event if the goal_trigger_threshold just fired) is still appropriately scheduled, or re-schedules it if necessary. The _scheduled_goal_trigger_event_id (or critical) on the Need component is cleared as this event has now fired.
    2. Providing Current Need Value:
      • The system may offer a utility function like GetCurrentNeedValue(agent_id, need_type, current_tick): float. This function would access the agent’s relevant Need component, retrieve its current_value_at_last_update, last_updated_tick, and decay_rate_per_tick, and return the accurately interpolated value for the current_tick.
    3. Handling External Need Changes & Re-scheduling:
      • If an external system (e.g., EatFoodSystem) directly modifies a Need component (e.g., increases SustenanceNeed.current_value_at_last_update and updates its last_updated_tick), that system (or a utility function called by it, possibly within NeedManagementSystem) must ensure that any previously scheduled _scheduled_goal_trigger_event_id and _scheduled_critical_event_id for that need on that agent are cancelled with the Universal Scheduler.
      • New threshold events are then projected and scheduled based on the Need’s new state, and their IDs are stored back in the Need component (as described in Section 3.1.2). This keeps the scheduled events synchronized with the agent’s actual condition.
  • Scheduling: The NeedManagementSystem primarily reacts to scheduled NeedThresholdReachedEvents. Its utility functions for getting current values or re-scheduling would be called synchronously by other systems when needed.

This system ensures that needs are efficiently monitored and that their progression correctly triggers subsequent cognitive processes like goal generation.
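The interpolation and threshold-projection utilities described above might look like the following sketch. It assumes linear decay (as implied by `decay_rate_per_tick`); the function names are illustrative analogues of `GetCurrentNeedValue` and the scheduling projection, not confirmed API.

```python
import math

def current_need_value(value_at_last_update: float, last_updated_tick: int,
                       decay_per_tick: float, now: int) -> float:
    """Interpolate a Need's current value without per-tick updates."""
    return max(0.0, value_at_last_update - decay_per_tick * (now - last_updated_tick))

def project_threshold_tick(value_at_last_update: float, last_updated_tick: int,
                           decay_per_tick: float, threshold: float) -> int:
    """Project the tick at which the need will cross a threshold,
    so the Universal Scheduler can be given a single future event."""
    if decay_per_tick <= 0 or value_at_last_update <= threshold:
        return last_updated_tick
    return last_updated_tick + math.ceil(
        (value_at_last_update - threshold) / decay_per_tick)
```

When a Need is externally modified (e.g., by eating), re-running `project_threshold_tick` and re-scheduling keeps the stored event IDs synchronized with the agent's actual condition.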

GoalGenerationSystem

(e.g., SustenanceGoalGenerator, SafetyGoalGenerator)

The GoalGenerationSystem is responsible for instantiating Goal components on an agent when its internal state (primarily unmet Needs) or external stimuli suggest a specific objective should be pursued. There might be several specialized goal generator systems (e.g., one for sustenance, one for safety, one for social goals), or a more unified system that handles various triggers.

  • Responsibility:
    • To listen for events indicating that a need has crossed a goal-triggering threshold (e.g., [NeedName]GoalCheckRequiredEvent from the NeedManagementSystem).
    • To listen for other potential goal-inducing stimuli (e.g., a NewEntityInPerceptionRangeEvent for a significant threat might directly trigger a safety goal).
    • To check if a similar goal is already active to prevent duplicates.
    • To create and add the appropriate Goal component (e.g., SeekSustenanceGoal, SeekSafetyGoal, InvestigateSoundGoal) to the agent if conditions are met.
  • Inputs:
    • [NeedName]GoalCheckRequiredEvent(agent_id) events.
    • Relevant PerceptionChangeEvents (e.g., sighting of a known threat or a highly novel stimulus).
    • The agent’s current Need levels (queried via NeedManagementSystem.GetCurrentNeedValue or directly from components if within the same tick and after need updates).
    • The agent’s existing active Goal components (to avoid duplication).
    • The agent’s AgentBeliefStore and AgentMemoryStore (e.g., a memory of a recent attack might make a safety goal more likely to form even if immediate danger isn’t perceived).
  • Process:
    1. Event Trigger Processing:
      • Upon receiving an event like SustenanceGoalCheckRequiredEvent(agent_id):
        • The system checks if the agent_id already has an active SeekSustenanceGoal component. If so, it might only update its priority or do nothing further for this specific trigger.
        • If no such goal exists, it retrieves the current SustenanceNeed value. If still below the threshold, it proceeds to create the goal.
    2. Stimulus-Based Trigger Processing (Example: Threat Perception):
      • Upon receiving NewEntityInPerceptionRangeEvent(perceiving_agent_id, new_entity_id, new_entity_type) where new_entity_type is believed to be a ThreatSource (checked against AgentBeliefStore):
        • The system checks if perceiving_agent_id already has an active SeekSafetyGoal (perhaps related to a different threat or a general sense of unease).
        • If not, or if this new threat is deemed more immediate/dangerous (based on beliefs about new_entity_type or memories associated with new_entity_id), it creates/updates a SeekSafetyGoal.
    3. Goal Component Creation:
      • When a goal is to be created (e.g., SeekSustenanceGoal):
        • A new SeekSustenanceGoal component is instantiated.
        • Its priority is calculated. This is a crucial step and can be influenced by:
          • The severity of the triggering need (e.g., 1.0 - SustenanceNeed.current_value).
          • Beliefs about the importance of this need/goal.
          • The presence of other conflicting needs or threats (e.g., sustenance goal priority might be lowered if a high-danger threat is also present, deferring to a safety goal).
          • Personality traits (future refinement).
        • The new Goal component is added to the agent.
      • This might also generate an internal NewGoalAddedEvent(agent_id, goal_type) to trigger subsequent systems like GoalPrioritizationSystem or PotentialActionDerivationSystem within the same tick or the next, depending on event processing rules.
  • Scheduling: The GoalGenerationSystem typically processes events dispatched for the current tick. It reacts to conditions becoming true (needs unmet, threats appearing) by creating persistent Goal components that then drive further AI logic.

This system acts as a crucial bridge, translating an agent’s internal state and external perceptions into actionable objectives. The sophistication of its trigger conditions and priority calculations will significantly influence the believability of the agent’s motivations.
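The duplicate check and priority calculation for a need-driven goal can be sketched as below. The threshold, the `1.0 - need_value` severity formula, and the threat deferral follow the text; the 0.5 deferral factor is an assumed tuning value.

```python
def maybe_create_sustenance_goal(active_goals: list, sustenance_value: float,
                                 threshold: float = 0.4,
                                 threat_present: bool = False):
    """Create a SeekSustenanceGoal unless one already exists or the need recovered."""
    if any(g["type"] == "SeekSustenanceGoal" for g in active_goals):
        return None                            # avoid duplicate goals
    if sustenance_value >= threshold:
        return None                            # need no longer below trigger
    priority = 1.0 - sustenance_value          # severity of the triggering need
    if threat_present:
        priority *= 0.5                        # defer to safety when threatened
    return {"type": "SeekSustenanceGoal", "priority": priority}
```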

GoalPrioritizationSystem

Agents may often find themselves with multiple active Goal components simultaneously (e.g., needing sustenance, noticing a potential threat, and being curious about a sound). The GoalPrioritizationSystem is responsible for evaluating these concurrent goals and determining which one(s) the agent should focus on, effectively selecting or ranking the CurrentActiveGoal that will drive immediate action planning.

  • Responsibility:
    • To identify all active Goal components on an agent.
    • To assess and rank these goals based on a variety of factors, including their intrinsic priority, situational context, agent beliefs, and memories.
    • To designate a primary CurrentActiveGoal (or a ranked list of top goals) that the subsequent planning and action selection phases will address.
  • Inputs:
    • All active Goal components on an agent (e.g., SeekSustenanceGoal.priority, SeekSafetyGoal.priority).
    • The agent’s current Need levels (to understand the urgency behind need-driven goals).
    • The agent’s CurrentPercepts (e.g., the immediate presence of a high-level threat might elevate a SeekSafetyGoal’s effective priority).
    • The agent’s AgentBeliefStore (e.g., beliefs about the consequences of ignoring a certain goal, or beliefs about the difficulty/risk of pursuing one goal versus another).
    • The agent’s AgentMemoryStore (e.g., recent negative experiences related to ignoring a safety goal might temporarily boost its ranking).
    • (Future Refinement: Agent’s personality traits, mood, current tasks/roles).
  • Process:
    1. Gather Active Goals: The system collects all entities with one or more active Goal components.
    2. Calculate Effective Priority for Each Goal: For each active goal on an agent, its “effective priority” or “situational urgency” is calculated. This is more than just the priority field stored in the goal component itself; it’s a dynamic assessment. Factors include:
      • Base Priority: The priority value set when the goal was generated (often tied to need urgency).
      • Contextual Modifiers (from Percepts, Beliefs, Memory):
        • Threat Amplification: If a SeekSafetyGoal is active and a ThreatSource is currently perceived in CurrentPercepts, its effective priority is significantly increased.
        • Opportunity Cost/Benefit (Beliefs): If pursuing Goal A (e.g., InvestigateSound) is believed to lead to a high reward but also expose to danger, while Goal B (SeekSustenance) is low risk but essential, this influences ranking.
        • Past Experience (Memory): A memory of recently almost starving might make SeekSustenanceGoal maintain a higher effective priority even if another, less critical goal, has a slightly higher base priority.
        • Goal Interdependencies (Advanced): Goal A might be a prerequisite for Goal B.
    3. Select CurrentActiveGoal:
      • The goal with the highest calculated effective priority is typically selected as the CurrentActiveGoal.
      • This might involve adding a specific component like CurrentActiveGoalFocus(goal_entity_id_or_type) to the agent, or updating a field within the agent’s main “AI state” component.
      • In more complex scenarios, the system might maintain a short, ranked list of top N goals, allowing the agent to potentially interleave actions or quickly switch if the top goal becomes blocked. For this draft, selecting a single primary goal is sufficient.
    4. Handling Goal Switching (Conceptual): If the CurrentActiveGoal changes from one tick to the next (e.g., a new, more urgent threat appears), this system is responsible for flagging this change. This might trigger an interrupt for any ongoing action related to the previous goal, forcing a re-planning cycle.
  • Scheduling: The GoalPrioritizationSystem would typically run each tick for agents with multiple active goals, or whenever a new significant goal is added or a major contextual change occurs (e.g., new high-priority threat perceived). Its output (the CurrentActiveGoal) directly feeds into the Plan & Appraise phase systems.

This system ensures that agents behave rationally by focusing their efforts on what is most important or urgent in their current situation, considering a holistic view of their needs, environment, and internal knowledge. The sophistication of the “effective priority” calculation is a key area for tuning agent personality and intelligence.
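The effective-priority calculation and selection step might be sketched as follows. The threat-amplification and past-experience modifiers mirror the contextual modifiers listed above; the numeric bonuses are assumed tuning values.

```python
def effective_priority(goal: dict, percepts: set, memories: set) -> float:
    """Dynamic, situational priority: base priority plus contextual modifiers."""
    p = goal["base_priority"]
    if goal["type"] == "SeekSafety" and "ThreatSource" in percepts:
        p += 0.5   # threat amplification: danger is currently perceived
    if goal["type"] == "SeekSustenance" and "nearly_starved" in memories:
        p += 0.2   # past experience: recent near-starvation boosts urgency
    return p

def select_current_active_goal(goals: list, percepts: set, memories: set) -> dict:
    """Designate the CurrentActiveGoal as the highest effective priority."""
    return max(goals, key=lambda g: effective_priority(g, percepts, memories))
```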

Plan & Appraise Phase Systems

Once an agent has a prioritized CurrentActiveGoal, the Plan & Appraise phase systems are responsible for figuring out how to achieve that goal. This involves first deriving potential sequences of actions (plans) that could satisfy the goal, and then evaluating or “appraising” those potential actions to select the most suitable one based on the agent’s current knowledge, beliefs, and the perceived state of the world.

PotentialActionDerivationSystem

This system is tasked with generating a set of candidate actions or short action sequences that an agent could take to address its CurrentActiveGoal.

  • Responsibility:
    • Based on the type of CurrentActiveGoal, identify relevant known strategies or behaviors.
    • Query the agent’s knowledge (KnownFoodLocation from CurrentPercepts or AgentMemoryStore, AgentBeliefStore about object properties, etc.) to instantiate these strategies into concrete PotentialAction candidates.
  • Inputs:
    • The agent’s CurrentActiveGoal (e.g., SeekSustenanceGoal, SeekSafetyFromThreat(ThreatEntityID)).
    • Agent’s CurrentPercepts (e.g., locations of visible food, threats, escape routes).
    • Agent’s AgentMemoryStore (e.g., remembered locations of resources not currently in direct perception).
    • Agent’s AgentBeliefStore (e.g., beliefs about what actions are effective for certain goals, or what objects can be interacted with in certain ways).
  • Process:
    1. Identify Goal Type: The system examines the CurrentActiveGoal.
    2. Retrieve Applicable Strategies/Behaviors: Based on the goal type, it accesses a set of predefined (or learned, in advanced agents) strategies.
      • For SeekSustenanceGoal:
        • Strategy 1: “Go to known/perceived food and eat.”
        • Strategy 2: “Search for new food sources.”
        • Strategy 3 (if applicable): “Request food from another agent.”
      • For SeekSafetyFromThreat(ThreatEntityID):
        • Strategy 1: “Flee from threat.”
        • Strategy 2 (if capable): “Hide from threat.”
        • Strategy 3 (if capable & appropriate): “Confront/Attack threat.”
    3. Instantiate PotentialAction Candidates: For each applicable strategy, the system attempts to generate one or more concrete PotentialAction data structures.
      • Example: For “Go to known/perceived food and eat”:
        • Query CurrentPercepts and AgentMemoryStore for KnownFoodLocations.
        • For each found food location, create a PotentialAction like: { action_sequence: [MoveTo(FoodLocationX), Eat(FoodSourceAtX)], associated_goal: SeekSustenanceGoal, estimated_outcome: ... }
      • Example: For “Flee from threat”:
        • Identify viable escape directions (e.g., away from ThreatEntityID’s position, towards known safe zones based on AgentMemoryStore or AgentBeliefStore).
        • Create PotentialActions like: { action_sequence: [MoveTo(SafePointY)], associated_goal: SeekSafetyFromThreat, estimated_outcome: ... }
    4. Output: A list of PotentialAction data structures is passed to the ActionAppraisalSystem. These are not yet committed actions, just possibilities.
  • Scheduling: This system runs when an agent has a CurrentActiveGoal but no committed plan of action, or if its current plan is invalidated or completed and the goal persists.
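The strategy-to-candidate instantiation might be sketched as below, assuming goals are identified by simple strings and locations by coordinate tuples; a real implementation would query `CurrentPercepts`, `AgentMemoryStore`, and `AgentBeliefStore` instead of taking pre-resolved lists.

```python
def derive_potential_actions(goal: str, known_food_locations: list,
                             safe_points: list) -> list:
    """Instantiate the applicable strategies for a goal into concrete candidates."""
    candidates = []
    if goal == "SeekSustenanceGoal":
        # Strategy: "Go to known/perceived food and eat."
        for loc in known_food_locations:
            candidates.append({"action_sequence": [("MoveTo", loc), ("Eat", loc)],
                               "associated_goal": goal})
    elif goal.startswith("SeekSafetyFromThreat"):
        # Strategy: "Flee from threat" toward a known safe point.
        for point in safe_points:
            candidates.append({"action_sequence": [("MoveTo", point)],
                               "associated_goal": goal})
    return candidates   # possibilities only; appraisal selects among them
```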

ActionAppraisalSystem (Orchestrator)

This is a critical cognitive system where the agent evaluates the generated PotentialAction candidates to determine their desirability. It acts as an orchestrator, invoking various specialized evaluators.

  • Responsibility:
    • For each PotentialAction derived for the CurrentActiveGoal, calculate an overall “appraisal score” or “utility.”
    • This score reflects the agent’s “opinion” of the action, considering its likely benefits, costs, risks, and alignment with its internal state.
  • Inputs:
    • The list of PotentialAction candidates from the PotentialActionDerivationSystem.
    • The agent’s CurrentActiveGoal.
    • The agent’s full core state: Needs, CurrentPercepts, AgentMemoryStore, AgentBeliefStore.
  • Process (Orchestration):
    1. For each PotentialAction in the list: 2. Initialize an appraisal context for this action. 3. Invoke a series of AppraisalCriterionEvaluator functions/sub-systems. Each evaluator focuses on a specific aspect of the action: * EvaluateExpectedNeedSatisfaction(action, agent_state, target_need): How well is this action expected to satisfy the primary need driving the CurrentActiveGoal? (e.g., checks FoodSource.sustenance_value for an Eat action, modified by beliefs about that food type). Returns a benefit score. * EvaluateImpactOnOtherNeeds(action, agent_state): How might this action affect other needs? (e.g., a long travel action might negatively impact RestNeed). Returns a cost/benefit score for secondary needs. * EvaluateRiskToSafety(action, agent_state): Assesses risks based on CurrentPercepts (e.g., path to food goes near a perceived threat), AgentMemoryStore (e.g., “last time I went there, I was attacked”), and AgentBeliefStore (e.g., “this area is believed to be dangerous”). Returns a risk score (e.g., probability of negative outcome or severity). * EvaluateTimeAndEnergyCost(action, agent_state): Estimates the time and/or energy expenditure required for the action (e.g., travel distance). Returns a cost score. * EvaluateLikelihoodOfSuccess(action, agent_state): Based on AgentBeliefStore (e.g., “I believe I am capable of performing this action”) and CurrentPercepts (e.g., “the path is not blocked”). Returns a probability. * EvaluateResourceAvailability(action, agent_state): Checks if the agent possesses necessary tools or resources. * (Future: EvaluateSocialConsequences, EvaluateAlignmentWithPersonalityTraits) 4. Each AppraisalCriterionEvaluator queries the relevant parts of the agent’s state (Needs, Beliefs, Memory, Percepts) to compute its specific score contribution. 5. The ActionAppraisalSystem aggregates these individual scores into a single TotalAppraisalScore for the PotentialAction. 
The aggregation method can be a weighted sum, a multiplicative approach, or a more complex utility function. The weights themselves could be influenced by the agent’s personality or current dominant Need. PotentialAction.appraisal_score = calculate_aggregate_score(score_from_evaluator1, score_from_evaluator2, ...)
  • Scheduling: This system runs after PotentialActionDerivationSystem has produced candidates for the CurrentActiveGoal.

The AppraisalCriterionEvaluator functions are where much of the agent’s specific intelligence, knowledge, and biases are encoded. Their modularity allows for a highly extensible and tunable appraisal process, enabling different agent archetypes to “think” about their options in distinct ways.
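A weighted-sum aggregation over modular evaluators might look like this sketch. The evaluator names echo those listed above, but these particular lambdas, fields, and weights are illustrative assumptions; real evaluators would query the agent's Needs, Beliefs, Memory, and Percepts.

```python
def appraise(action: dict, evaluators: dict, weights: dict) -> float:
    """Aggregate per-criterion evaluator scores into one appraisal score."""
    return sum(weights[name] * evaluate(action)
               for name, evaluate in evaluators.items())

# Hypothetical evaluators over a simplified action record.
evaluators = {
    "need_satisfaction": lambda a: a["expected_satiation"],   # benefit
    "safety_risk":       lambda a: -a["risk"],                # penalty
    "time_cost":         lambda a: -a["travel_ticks"] / 100.0,
}
# Weights could shift with personality or the dominant Need.
weights = {"need_satisfaction": 1.0, "safety_risk": 2.0, "time_cost": 0.5}
```

Because each evaluator is an independent function, archetypes can be tuned by swapping evaluators or re-weighting them without touching the orchestrator.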

Act Phase Systems

Once a PotentialAction has been selected through appraisal, the Act Phase systems are responsible for translating that decision into concrete operations within the game world. This involves initiating the chosen action, managing its execution over time (if it’s not instantaneous), and handling its eventual outcome.

ActionSelectionSystem

This system makes the final commitment to an action based on the appraisals.

  • Responsibility:
    • To review the PotentialAction candidates and their associated TotalAppraisalScore (as calculated by the ActionAppraisalSystem).
    • To select the “best” action to pursue for the CurrentActiveGoal.
    • To instantiate the necessary top-level action component(s) on the agent to begin execution of the chosen action sequence.
  • Inputs:
    • The list of PotentialAction candidates, each with its TotalAppraisalScore.
    • The agent’s CurrentActiveGoal.
  • Process:
    1. Select Best Action: The system typically selects the PotentialAction with the highest TotalAppraisalScore.
      • (Future Refinements: It might incorporate randomness for less predictable behavior if scores are very close, or apply “satisficing” logic – picking the first “good enough” action rather than always the absolute optimal, to save computation or simulate bounded rationality).
    2. Instantiate Action Component(s): For the chosen PotentialAction (which may be a sequence like [MoveTo(X), Eat(Y)]), the system instantiates the first concrete action component from that sequence.
      • Example: If the chosen action is MoveTo(FoodLocationX) then Eat(FoodSourceAtX), it adds a MoveToTargetAction component to the agent:
        // Add Component: MoveToTargetAction
        // target_position = FoodLocationX.position
        // target_entity_id = FoodLocationX.food_entity_id
        // goal_type_driving_action = SEEK_SUSTENANCE
        // intended_on_arrival_interaction = EAT_FOOD_INTERACTION
        // _scheduled_arrival_event_id = None // To be set by MovementSystem
    3. Update Agent State: The agent’s internal state might be updated to reflect that it is now “busy” or “executing a plan.” The CurrentActiveGoal component might have its status updated (e.g., to EXECUTING_PLAN).
  • Scheduling: This system runs after the ActionAppraisalSystem has completed its evaluations for the current tick and CurrentActiveGoal.
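Both the default argmax selection and the satisficing refinement mentioned above can be sketched in a few lines; the `appraisal_score` field name follows `TotalAppraisalScore` in the text, and the threshold parameter is an assumption.

```python
def select_action(candidates: list, satisfice_threshold: float = None) -> dict:
    """Pick the highest-scoring candidate, or (when satisficing) the first
    candidate that is 'good enough', trading optimality for less computation."""
    if satisfice_threshold is not None:
        for c in candidates:
            if c["appraisal_score"] >= satisfice_threshold:
                return c
    return max(candidates, key=lambda c: c["appraisal_score"])
```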

MovementSystem

(Handles MoveToTargetAction)

This system is responsible for actually moving agents through the world when they have an active MoveToTargetAction.

  • Responsibility:
    • To calculate paths to the target position (if not a direct line).
    • To update the agent’s Position component incrementally each tick.
    • To detect arrival at the target or if movement is blocked.
    • To schedule an ArrivalAtTargetEvent or MovementBlockedEvent with the Universal Scheduler.
  • Inputs:
    • Agent entities with an active MoveToTargetAction component and a Position component.
    • World collision data / navigation mesh (for pathfinding).
  • Process (Each Tick for an Agent with MoveToTargetAction):
    1. Pathfinding (If Needed): If no path is yet calculated or the current path is invalidated (e.g., by new obstacles), attempt to find a path to MoveToTargetAction.target_position. If no path, schedule MovementBlockedEvent.
    2. Move Agent: Update the agent’s Position component along the path by a distance determined by its speed and deltaTime.
    3. Check for Arrival/Blockage:
      • If agent reaches target_position (or is within a small threshold): Schedule an ArrivalAtTargetEvent(agent_id, target_entity_id, intended_on_arrival_interaction, goal_type_driving_action) for the current or next tick. The MoveToTargetAction component might be removed or marked as completed.
      • If movement is blocked for a significant duration or an impassable obstacle is encountered: Schedule a MovementBlockedEvent(agent_id, reason) for the current or next tick. The agent might then need to re-plan.
    4. Scheduling of Self (Conceptual for Ongoing Movement): While not explicitly creating an event every tick for movement, the MovementSystem is part of the tick loop. The “event” aspect comes from the scheduling of the completion or failure of the entire movement action. The _scheduled_arrival_event_id in MoveToTargetAction could be used if the system pre-calculates the exact arrival tick.
  • Rendering Interpolation: For smooth visuals, the rendering system would interpolate the agent’s position between its Position at previous_tick_state and current_tick_state.
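The per-tick move-and-check-arrival step (ignoring pathfinding, in 2D for brevity) can be sketched as below; the arrival flag is where the `ArrivalAtTargetEvent` would be scheduled with the Universal Scheduler.

```python
def step_movement(position: tuple, target: tuple, speed: float):
    """Advance toward the target by up to `speed` per tick; report arrival.
    Returns (new_position, arrived)."""
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= speed:
        # Within reach this tick: snap to target; ArrivalAtTargetEvent fires here.
        return target, True
    # Otherwise move `speed` units along the (normalized) direction vector.
    return (position[0] + dx / dist * speed,
            position[1] + dy / dist * speed), False
```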

InteractionInitiationSystem

This system acts as a bridge between completing a movement (or other preparatory action) and starting the intended interaction.

  • Responsibility:
    • To listen for events indicating an agent is ready to perform a specific interaction (e.g., ArrivalAtTargetEvent).
    • To add the appropriate specific interaction component (e.g., PerformEatAction) to the agent based on the context of the preceding action.
  • Inputs:
    • ArrivalAtTargetEvent(agent_id, target_entity_id, intended_on_arrival_interaction, goal_type_driving_action)
    • Other events that might signify readiness for interaction (e.g., ToolEquippedEvent if a tool was needed).
  • Process:
    1. When an ArrivalAtTargetEvent is received for an agent_id:
    2. Based on intended_on_arrival_interaction:
      • If EAT_FOOD_INTERACTION: Add PerformEatAction(agent_id, food_target_id: target_entity_id) component.
      • If ATTACK_INTERACTION: Add PerformAttackAction(agent_id, enemy_target_id: target_entity_id) component.
      • If FLEE_INTERACTION_POINT_REACHED: (This might be a special case where “arrival” means “successfully fled to this spot” and might resolve the flee goal).
      • And so on for other interaction types.
  • Scheduling: This system processes events dispatched for the current tick.
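The interaction dispatch in step 2 is naturally a lookup table mapping `intended_on_arrival_interaction` values to interaction components; the sketch below uses the names from the text, with agent components simplified to a dict.

```python
# Maps the intent carried by ArrivalAtTargetEvent to the component to attach.
INTERACTION_COMPONENTS = {
    "EAT_FOOD_INTERACTION": "PerformEatAction",
    "ATTACK_INTERACTION": "PerformAttackAction",
}

def on_arrival(agent_components: dict, intended_interaction: str,
               target_entity_id: int) -> dict:
    """Attach the interaction component implied by the completed movement."""
    component = INTERACTION_COMPONENTS.get(intended_interaction)
    if component is not None:
        agent_components[component] = {"target_id": target_entity_id}
    return agent_components
```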

Specific Interaction Execution Systems

(e.g., EatFoodSystem, FleeSystem, CombatSystem)

These are a suite of specialized systems, each responsible for executing a particular type of interaction.

  • Responsibility (Example: EatFoodSystem for PerformEatAction):
    • To execute the logic of the interaction (e.g., transferring sustenance from food to agent).
    • To determine the outcome of the interaction (success, failure, partial success).
    • To generate an InteractionOutcomeEvent detailing the result.
  • Inputs (Example: EatFoodSystem):
    • Agent entities with a PerformEatAction(food_target_id) component.
    • The agent’s SustenanceNeed component.
    • The FoodSource component on the food_target_id.
  • Process (Example: EatFoodSystem):
    1. Verify the food_target_id still exists and is a valid FoodSource.
    2. Access FoodSource.sustenance_value.
    3. Increase Agent.SustenanceNeed.current_value_at_last_update (and update its last_updated_tick). Trigger re-scheduling of need events for this agent (as per Section 2.1).
    4. (Optional: Decrease FoodSource.quantity, potentially destroying the FoodSource if depleted).
    5. Generate InteractionOutcomeEvent(agent_id, interaction_type: EAT, target_id: food_target_id, success: true, details: {satiation_gained: X, food_depleted: Y}).
    6. Remove the PerformEatAction component.
  • Scheduling: These systems process agents with their specific action components each tick, or an interaction might take multiple ticks and schedule its own completion event. For simplicity in this draft, we can assume many basic interactions (like eating a single item) complete within a single tick once initiated.
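The EatFoodSystem process above can be sketched end to end; agent and food state are simplified to dicts, and the returned outcome dict stands in for the dispatched `InteractionOutcomeEvent` (need re-scheduling is omitted).

```python
def execute_eat(agent: dict, food: dict, now: int):
    """Transfer sustenance from a FoodSource to the agent and emit the outcome."""
    if food is None or food["quantity"] <= 0:
        return agent, food, {"interaction_type": "EAT", "success": False}
    gained = food["sustenance_value"]
    # Increase the need value and refresh its interpolation anchor tick.
    agent["sustenance"] = min(1.0, agent["sustenance"] + gained)
    agent["last_updated_tick"] = now
    food["quantity"] -= 1
    return agent, food, {
        "interaction_type": "EAT", "success": True,
        "details": {"satiation_gained": gained,
                    "food_depleted": food["quantity"] == 0},
    }
```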

This Act Phase, with its chain of selection, movement, initiation, and execution, brings the agent’s decisions to life in the simulation. The InteractionOutcomeEvent is the critical output that feeds back into the Learn Phase.

Learn Phase Systems

The Learn Phase is where the agent processes the outcomes of its actions and significant environmental events, updating its internal state (Needs, Memories, Beliefs) based on these experiences. This feedback loop is crucial for adaptation and more nuanced future behavior.

GoalResolutionSystem

This system is responsible for determining if an agent’s active goals have been met or should be abandoned based on recent events and state changes.

  • Responsibility:
    • To monitor InteractionOutcomeEvents and changes in Need levels.
    • To assess if the CurrentActiveGoal (and potentially other active goals) has been satisfied or is no longer viable/relevant.
    • To remove completed or obsolete Goal components from the agent.
  • Inputs:
    • InteractionOutcomeEvents (e.g., successful consumption of food for a SeekSustenanceGoal).
    • Notifications of significant Need level changes (e.g., if a need is fully satisfied).
    • The agent’s active Goal components and its CurrentActiveGoalFocus.
    • (Future: Events indicating a goal has become impossible, e.g., target destroyed).
  • Process:
    1. Listen for Relevant Events:
      • Upon receiving an InteractionOutcomeEvent for an agent (e.g., interaction_type: EAT, success: true):
        • Check if the agent’s CurrentActiveGoalFocus was related (e.g., SeekSustenanceGoal).
        • Query the relevant Need (e.g., SustenanceNeed). If the need is now above its goal_trigger_threshold (or a specific “satisfied” threshold), then the goal is considered achieved.
    2. Check Need-Driven Goals Directly:
      • Periodically, or when a Need is significantly satisfied, this system might check all need-driven goals. If the underlying Need for a Goal (e.g., SustenanceNeed for SeekSustenanceGoal) is no longer below its trigger threshold, the Goal component can be removed.
    3. Remove Goal Component: If a goal is deemed satisfied or obsolete:
      • The corresponding Goal component (e.g., SeekSustenanceGoal) is removed from the agent.
      • If the agent had a CurrentActiveGoalFocus component pointing to this goal, it is also removed or cleared.
      • This might generate an internal GoalCompletedEvent(agent_id, goal_type) or GoalAbandonedEvent, which could trigger other systems (e.g., a mood update system, or the GoalPrioritizationSystem to select a new CurrentActiveGoal if other goals are pending).
  • Scheduling: This system processes relevant events dispatched for the current tick and may also run periodically for agents to clean up goals whose underlying needs have been met through means not directly tied to a specific interaction outcome event.
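The direct need-driven check in step 2 can be sketched as below: any goal whose underlying need is back above its trigger threshold is removed, and a `GoalCompletedEvent` analogue is emitted for each.

```python
def resolve_goals(active_goals: list, need_values: dict,
                  trigger_thresholds: dict):
    """Remove need-driven goals whose underlying need is above its trigger.
    Returns (remaining_goals, completion_events)."""
    completed = [g for g in active_goals
                 if need_values[g["need"]] > trigger_thresholds[g["need"]]]
    remaining = [g for g in active_goals if g not in completed]
    events = [("GoalCompletedEvent", g["type"]) for g in completed]
    return remaining, events
```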

BeliefUpdateSystem

(or part of MemorySystem/CognitiveSystem)

This system is responsible for the formation of new beliefs and the modification (reinforcement or challenging) of existing beliefs based on an agent’s experiences (as recorded in MemoryEntrys) and potentially other stimuli like communication.

  • Responsibility:
    • To analyze new MemoryEntrys in the AgentMemoryStore.
    • To identify patterns, correlations, and significant outcomes that could lead to new beliefs or adjustments to existing ones.
    • To update the agent’s AgentBeliefStore with these changes.
  • Inputs:
    • Newly created MemoryEntrys (signaled by events from MemoryRecordingSystem or by directly querying recent additions to AgentMemoryStore).
    • The agent’s existing AgentBeliefStore.
    • (Future: CommunicationReceivedEvent containing information/assertions from other agents).
  • Process (Conceptual - building on Section 3.3.2):
    1. Triggering Belief Review: This system might be triggered by:
      • The addition of a highly significant new MemoryEntry.
      • A periodic “reflection” event scheduled for the agent.
    2. Pattern Detection / Memory Analysis (The “Systems within Systems”):
      • Invokes various “Pattern Detector” sub-systems or rules that scan the AgentMemoryStore (focusing on recent/significant memories). Examples:
        • Consistent Outcome Detector: Looks for multiple memories where interacting with a specific subject_identifier (or TypeId) in a certain way (property related to an action) consistently leads to a particular outcome or emotional_valence.
        • Co-occurrence Detector: Identifies frequent spatial or temporal co-occurrence of certain entities or event types.
    3. Belief Modification/Formation:
      • Reinforcement: If a new memory’s outcome and details support an existing BeliefEntry, that belief’s strength and/or confidence are increased. The last_updated_tick is refreshed.
      • Challenge/Weakening: If a new memory contradicts an existing BeliefEntry, the belief’s strength and/or confidence may decrease.
      • New Belief Formation: If a strong pattern is detected in memories where no corresponding BeliefEntry exists, a new belief is created. Its initial strength, confidence, and value are derived from the characteristics of the supporting memories (e.g., consistency of pattern, average significance/outcome of memories). source_type would be DIRECT_EXPERIENCE.
    4. Hearsay Integration (Conceptual):
      • If processing a CommunicationReceivedEvent, a new BeliefEntry might be formed with source_type=HEARSAY. Its initial strength/confidence would be modulated by a belief about the informant’s trustworthiness. Subsequent direct experiences (memories) could then reinforce or challenge this hearsay-based belief.
    5. Conflict Management (Simplified for this Draft): For now, if a new strong belief directly contradicts an old weak one, the old one might be heavily weakened or overwritten. More complex conflict resolution (e.g., maintaining conflicting beliefs with different confidence levels) is a future refinement.
  • Scheduling: This system can be event-driven (reacting to new significant memories) and/or have parts that run as periodic scheduled tasks for each agent (e.g., “perform belief review cycle every N ticks”).
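
The reinforcement/challenge step (step 3) can be sketched as a single update function. The step sizes, and the choice to weigh contradictions twice as heavily as confirmations, are illustrative assumptions for this draft, not tuned values.

```python
def apply_memory_to_belief(belief: dict, supports: bool, tick: int,
                           step: float = 0.05) -> dict:
    """Reinforce (step 3, 'Reinforcement') or weaken (step 3,
    'Challenge/Weakening') a belief based on one new memory."""
    # Assumption: contradictions weigh twice as heavily as confirmations.
    delta = step if supports else -2.0 * step
    belief["strength"] = min(1.0, max(0.0, belief["strength"] + delta))
    belief["confidence"] = min(1.0, max(0.0, belief["confidence"] + delta))
    belief["last_updated_tick"] = tick  # refreshed on every update, as above
    return belief

# Reinforcing the innate RedBerryBush edibility belief after a good meal:
belief = {"strength": 0.6, "confidence": 0.5, "last_updated_tick": 0}
apply_memory_to_belief(belief, supports=True, tick=3100)
```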

The Learn Phase closes the agent’s cognitive loop, allowing its internal model of the world and itself to evolve based on the consequences of its actions and observations. This iterative process is fundamental to the agent’s ability to exhibit adaptive and increasingly sophisticated behavior over time.


The behavior and decision-making of an agent are fundamentally driven by its internal state. This state is represented by a collection of specialized components attached to the agent’s entity. These components store information about the agent’s physiological and psychological condition, its memories of past events, and its beliefs about the world.

[NeedName]Need Components (e.g., SustenanceNeed)

Needs are the primary motivators for agent behavior, representing requirements for survival, well-being, or other intrinsic drives. Each distinct need is represented by its own component type. For this draft, we will use SustenanceNeed as the primary example.

Data Structure

A typical Need component, such as SustenanceNeed, would contain the following data:

  • current_value_at_last_update: float (Range: e.g., 0.0 to 1.0, where 1.0 indicates the need is fully satisfied, 0.0 indicates critical deficiency). This stores the need’s value as of the last_updated_tick.
  • last_updated_tick: int (The simulation tick at which current_value_at_last_update was accurately calculated and stored).
  • decay_rate_per_tick: float (The amount by which the need’s value decreases per simulation tick under normal conditions. This rate can be modified by agent activity, environment, or other factors).
  • goal_trigger_threshold: float (e.g., 0.4. If the need’s projected value drops below this, a corresponding goal, like SeekSustenanceGoal, is typically generated).
  • critical_threshold: float (e.g., 0.1. If the need’s projected value drops below this, more severe consequences may occur, such as health damage or incapacitation, and it may trigger higher priority goals).
  • _scheduled_goal_trigger_event_id: Option<EventID> (Stores the ID of the event currently scheduled with the Universal Scheduler for when this need is projected to cross the goal_trigger_threshold. None if no such event is currently scheduled or if the need is above the threshold).
  • _scheduled_critical_event_id: Option<EventID> (Stores the ID of the event currently scheduled for when this need is projected to cross the critical_threshold).

The actual current value of the need at any given current_tick can be interpolated as:

interpolated_value = current_value_at_last_update - (decay_rate_per_tick * (current_tick - last_updated_tick))

This interpolation is used by systems that require an up-to-date need value before its next scheduled full update.
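
As a concrete sketch (the function name is illustrative; only the formula and field names come from the definition above):

```python
def interpolated_need_value(current_value_at_last_update: float,
                            decay_rate_per_tick: float,
                            last_updated_tick: int,
                            current_tick: int) -> float:
    """Project the need's value forward from its last stored update,
    clamped to the 0.0..1.0 range used by Need components."""
    elapsed = current_tick - last_updated_tick
    value = current_value_at_last_update - decay_rate_per_tick * elapsed
    return max(0.0, min(1.0, value))

# A need stored at 0.7 on tick 0, decaying at 0.0001 per tick,
# has decayed to roughly 0.4 by tick 3000.
value_at_3000 = interpolated_need_value(0.7, 0.0001, 0, 3000)
```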

Interaction with Event Scheduler

Need components are dynamically managed via the Universal Scheduler to avoid constant polling:

  1. Initialization/Change: When a Need component is added to an agent, or when its state changes significantly (e.g., after eating, changing activity level which alters decay_rate_per_tick), its current value is calculated, and current_value_at_last_update and last_updated_tick are set.
  2. Projection & Scheduling:
    • The system then projects how many ticks it will take for the current_value to reach the goal_trigger_threshold and the critical_threshold based on the current decay_rate_per_tick.
    • If these projected times are in the future:
      • Any existing _scheduled_goal_trigger_event_id (and _scheduled_critical_event_id) for this need on this agent are first canceled with the Universal Scheduler.
      • New events (e.g., NeedThresholdReachedEvent(agent_id, need_type: SUSTENANCE, threshold_type: GOAL_TRIGGER)) are scheduled for the calculated future ticks. The IDs of these new scheduled events are stored in _scheduled_goal_trigger_event_id and _scheduled_critical_event_id.
  3. Event Handling: When a scheduled NeedThresholdReachedEvent fires for an agent, the NeedManagementSystem (detailed later) will process it. This typically involves updating the agent’s state (e.g., triggering goal generation if appropriate) and then re-evaluating and re-scheduling the next event for that need, if applicable (e.g., scheduling the critical threshold event if the goal trigger has just fired).

This scheduled approach ensures that need-related logic is only triggered when a need actually crosses a significant threshold, greatly improving simulation efficiency for many agents.
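
Steps 1-2 of this flow can be sketched as follows. The scheduler stub, dictionary shapes, and function names are illustrative assumptions; only the projection arithmetic and the `_scheduled_*_event_id` bookkeeping come from the description above.

```python
import math

def ticks_until_threshold(value: float, threshold: float,
                          decay_rate_per_tick: float):
    """Project how many ticks until a decaying need crosses `threshold`;
    None if it is already at/below it (or does not decay)."""
    if value <= threshold or decay_rate_per_tick <= 0.0:
        return None
    return math.ceil((value - threshold) / decay_rate_per_tick)

class UniversalSchedulerStub:
    """Minimal stand-in for the Universal Scheduler's schedule/cancel API."""
    def __init__(self):
        self._next_id, self.events = 0, {}
    def schedule(self, tick, payload):
        self._next_id += 1
        self.events[self._next_id] = (tick, payload)
        return self._next_id
    def cancel(self, event_id):
        self.events.pop(event_id, None)

def reschedule_need_events(need: dict, scheduler, current_tick: int):
    """Cancel stale threshold events, project new crossing ticks,
    schedule fresh events, and store their IDs on the component."""
    for key in ("goal_trigger", "critical"):
        old_id = need.get(f"_scheduled_{key}_event_id")
        if old_id is not None:
            scheduler.cancel(old_id)
        ticks = ticks_until_threshold(need["current_value_at_last_update"],
                                      need[f"{key}_threshold"],
                                      need["decay_rate_per_tick"])
        need[f"_scheduled_{key}_event_id"] = (
            None if ticks is None else
            scheduler.schedule(current_tick + ticks,
                               ("NeedThresholdReachedEvent", "SUSTENANCE", key)))

# A SustenanceNeed at 0.7, decaying 0.0001/tick, scheduled from tick 0:
need = {"current_value_at_last_update": 0.7, "decay_rate_per_tick": 0.0001,
        "goal_trigger_threshold": 0.4, "critical_threshold": 0.1}
scheduler = UniversalSchedulerStub()
reschedule_need_events(need, scheduler, current_tick=0)
```

With these numbers the goal-trigger event lands on tick 3000 and the critical event on tick 6000, matching the worked example later in this document.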

AgentMemoryStore Component

An agent’s ability to learn and adapt is fundamentally tied to its memory of past experiences. The AgentMemoryStore component serves as a centralized repository for an agent’s recollections of significant events. Unlike Need components, which typically have one instance per need type, an agent will usually have a single AgentMemoryStore component that internally manages a collection of individual memories.

Internal MemoryEntry Structure

The AgentMemoryStore contains a collection of MemoryEntry structures, each representing a distinct remembered event or experience. The structure of a MemoryEntry is designed to be flexible yet informative:

  • id: UniqueId (A unique identifier for this specific memory entry, potentially useful for linking beliefs to supporting memories or for debugging).
  • timestamp: int (The simulation tick at which the event occurred or was fully processed and recorded into memory).
  • event_type: Enum_EventType (Categorizes the memory, e.g., ATE_FOOD, SAW_ENTITY, WAS_ATTACKED, HEARD_SOUND, COMPLETED_TASK, RECEIVED_DAMAGE, INTERACTION_OUTCOME). This is crucial for querying and filtering memories.
  • location: Option<Vector3> (The geographical coordinates where the event took place, if applicable. None for non-spatial events).
  • involved_entities: List<{entity_id: EntityId, role: Enum_EntityRole_InEvent}> (A list of entities involved in the event and their roles. Roles might include TARGET, SOURCE, ALLY, ENEMY, FOOD_SOURCE_CONSUMED, TOOL_USED, etc.). This allows memories to be contextually linked to other entities.
  • outcome: Option<Enum_EventOutcome> (Describes the result of the event for the agent, e.g., POSITIVE_HIGH, POSITIVE_MODERATE, NEUTRAL, NEGATIVE_MODERATE, NEGATIVE_HIGH. This is key for learning and reinforcement).
  • significance: float (A normalized value, e.g., 0.0 to 1.0, indicating the subjective importance or salience of this memory to the agent. Higher significance might make a memory persist longer or have a stronger influence on belief formation).
  • emotional_valence: Option<float> (A value, e.g., -1.0 (very negative) to 1.0 (very positive), representing the emotional tinge associated with the memory. This can influence mood and future appraisals).
  • details: Map<String, AnyType> (A flexible key-value store for event-specific data not covered by other fields. For example, for an ATE_FOOD event: {"food_type_id": "RedBerry", "satiation_gained_value": 0.5, "was_poisonous": false}).

The AgentMemoryStore component itself might internally store these MemoryEntry structures in a list, perhaps with a fixed capacity, or use more sophisticated data structures for efficient querying if memory recall becomes a performance bottleneck.

// Component: AgentMemoryStore
// --- Internal Data ---
// recent_memory_entries: List<MemoryEntry> // Example internal storage
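
As a sketch, the MemoryEntry structure above maps naturally onto a Python dataclass. Enum and ID types are reduced to plain strings and ints here for brevity; the field names mirror the list above.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MemoryEntry:
    """One remembered event; enum types reduced to plain strings."""
    id: int
    timestamp: int
    event_type: str                      # e.g. "ATE_FOOD", "SAW_ENTITY"
    location: Optional[tuple] = None     # (x, y, z); None for non-spatial events
    involved_entities: list = field(default_factory=list)
    outcome: Optional[str] = None        # e.g. "POSITIVE_MODERATE"
    significance: float = 0.0            # normalized 0.0 .. 1.0
    emotional_valence: Optional[float] = None  # -1.0 .. 1.0
    details: dict = field(default_factory=dict)

# The ATE_FOOD example from the `details` field description:
meal = MemoryEntry(id=1, timestamp=42, event_type="ATE_FOOD",
                   involved_entities=[{"entity_id": "FS_01",
                                       "role": "FOOD_SOURCE_CONSUMED"}],
                   outcome="POSITIVE_MODERATE", significance=0.4,
                   details={"food_type_id": "RedBerry",
                            "satiation_gained_value": 0.5,
                            "was_poisonous": False})
```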

Management (Recent & Significant, Summarization - Conceptual)

To prevent unbounded memory growth and to simulate realistic cognitive limitations, the AgentMemoryStore requires management:

  • Recency and Significance: The store will typically prioritize keeping memories that are either very recent or have a high significance score. A common strategy is to maintain a capacity-limited list (e.g., the last N significant events). When new memories are added and capacity is exceeded, the oldest or least significant memories might be discarded or “archived.”
  • Significance Decay/Update: The significance of a memory might decay over time if not reinforced by related experiences or reflection. Conversely, recalling a memory or experiencing a similar event might boost its significance.
  • Memory Pruning/Forgetting: A dedicated MemorySystem (detailed later) would be responsible for these pruning and significance update processes, possibly as a periodic scheduled task for the agent.
  • Conceptual: Memory Summarization/Abstraction: For long-term learning and efficiency, the MemorySystem might conceptually engage in summarization. Multiple similar, low-significance memories (e.g., ten separate instances of successfully eating a common berry with minor positive outcomes) could eventually be abstracted or consolidated. This might lead to the strengthening of a related Belief (e.g., “Common berries are a reliable food source”) and the eventual pruning of the now-redundant individual memory entries. This keeps the active memory store focused on impactful and informative recollections.

The AgentMemoryStore provides the raw experiential data that fuels an agent’s learning, belief formation, and nuanced decision-making in the ActionAppraisalSystem.

AgentBeliefStore Component

Beliefs represent an agent’s generalized knowledge, assumptions, and interpretations about the world, its entities, and its underlying principles. They are typically formed from patterns in memory, direct instruction, or innate predispositions, and they play a crucial role in shaping an agent’s goals, plans, and appraisals of potential actions. Like the AgentMemoryStore, an agent typically possesses a single AgentBeliefStore component that manages a collection of individual BeliefEntry structures.

Internal BeliefEntry Structure

Each BeliefEntry encapsulates a single piece of generalized knowledge or a conviction held by the agent.

  • id: UniqueId (A unique identifier for this belief entry).
  • subject_identifier: Union<EntityId, TypeId, ConceptId> (Defines what the belief is about. This could be a specific instance of an entity (e.g., PlayerCharacter_Alice), a type or category of entity/object (e.g., WolfPredator_Type, RedBerryBush_Type), or an abstract concept (e.g., Concept_Trustworthiness, Concept_Betrayal)).
  • property: Enum_BeliefProperty (Specifies the aspect of the subject_identifier that the belief pertains to, e.g., IS_SAFE, IS_EDIBLE, IS_HOSTILE, IS_RELIABLE_INFO_SOURCE, HAS_HIGH_SATIATION_VALUE, LEADS_TO_DANGER).
  • value: AnyType (The actual content or assertion of the belief. This can be a boolean (e.g., for IS_EDIBLE: true), a float (e.g., for a belief about “Likelihood of Attack”: 0.75), an enum (e.g., for “FactionStanding”: FRIENDLY), or even a reference to another ConceptId. The interpretation of value is context-dependent based on the property).
  • strength: float (Normalized 0.0 to 1.0. Represents how strongly or deeply ingrained this belief is. High strength beliefs are more resistant to change, even in the face of conflicting evidence).
  • confidence: float (Normalized 0.0 to 1.0. Represents the agent’s certainty in the accuracy or truthfulness of this belief. A belief can be strongly held (high strength) but have low confidence if, for example, it’s based on old or questionable information).
  • last_updated_tick: int (The simulation tick when this belief was last formed, reinforced, or significantly challenged).
  • source_type: Enum_BeliefSource (Indicates the origin of the belief, e.g., DIRECT_EXPERIENCE (from memory patterns), HEARSAY (told by another agent), DEDUCTION (inferred from other beliefs), INNATE (part of initial agent state), OBSERVATION_OF_OTHERS).
  • _supporting_memory_ids: Option<List<MemoryEntry.id>> (Conceptually, for future refinement: a list of IDs from AgentMemoryStore that provide evidence for this belief).
  • _conflicting_memory_ids: Option<List<MemoryEntry.id>> (Conceptually, for future refinement: memory IDs that contradict this belief).

The AgentBeliefStore component itself would contain a list or other suitable collection of these BeliefEntry structures.

// Component: AgentBeliefStore
// --- Internal Data ---
// current_beliefs: List<BeliefEntry> // Example internal storage

The interplay between value, strength, and confidence is key. For example, an agent might have: BeliefEntry(subject_identifier=ShadowyFigure_Type, property=IS_DANGEROUS, value=true, strength=0.9, confidence=0.5). This means the agent strongly holds the belief (strength 0.9) that shadowy figures are dangerous, perhaps due to an innate fear or a single impactful but unverified story (hence the low confidence of 0.5). This would likely lead to cautious behavior (high strength dictates action tendency), but the low confidence would make the agent more receptive to new evidence that could alter this belief than if it held the same belief with 0.9 confidence.
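
As with memories, the BeliefEntry structure can be sketched as a Python dataclass; enum and ID types are reduced to plain strings here, and the instance reproduces the "strongly held, weakly evidenced" example from the paragraph above.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class BeliefEntry:
    """One generalized conviction; field names mirror the list above."""
    id: int
    subject_identifier: str   # EntityId, TypeId, or ConceptId in the full design
    property: str             # e.g. "IS_DANGEROUS"
    value: Any                # bool, float, enum value, or ConceptId reference
    strength: float           # 0.0 .. 1.0: how ingrained the belief is
    confidence: float         # 0.0 .. 1.0: certainty in its accuracy
    last_updated_tick: int
    source_type: str          # e.g. "INNATE", "DIRECT_EXPERIENCE", "HEARSAY"

# Strongly held (0.9) but weakly evidenced (0.5):
shadow_fear = BeliefEntry(id=7, subject_identifier="ShadowyFigure_Type",
                          property="IS_DANGEROUS", value=True,
                          strength=0.9, confidence=0.5,
                          last_updated_tick=0, source_type="INNATE")
```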

Management and Formation (Conceptual)

Managing the AgentBeliefStore involves several conceptual processes, likely handled by a BeliefSystem or a broader CognitiveSystem. For this draft, the focus is on laying the groundwork:

  • Formation Focus for this Draft:
    • From Memory Patterns (Primary): The core mechanism envisioned involves one or more “Pattern Detector” sub-systems analyzing an agent’s AgentMemoryStore for recurring patterns, associations, and outcomes. Consistent correlations (e.g., “every time I ate RedBerries, my SustenanceNeed increased significantly and outcome was POSITIVE”) can lead to the formation or reinforcement of BeliefEntrys (e.g., Belief(subject_type=RedBerry_Type, property=HAS_HIGH_SATIATION_VALUE, value=true)). This process may involve thresholds (e.g., pattern observed N times) to form a new belief, with initial strength and confidence based on the consistency and significance of supporting memories.
    • From Hearsay/Communication (Secondary): When an agent receives information from another agent, it might form a new BeliefEntry with source_type=HEARSAY. The initial strength and confidence of such beliefs would ideally depend on the perceived trustworthiness of the informant (which itself could be represented by another belief).
  • Reinforcement/Challenge Focus for this Draft:
    • When new MemoryEntrys are recorded that align with an existing belief, that belief’s strength and/or confidence may increase.
    • Memories conflicting with an existing belief may decrease its strength and/or confidence. Significant or repeated contradictions might lead to the belief being heavily weakened, or (conceptually for future refinement) the formation of a new, competing belief.
  • Future Considerations (Explicitly Deferred for this Draft):
    • Advanced deduction/inference (complex logical chains from existing beliefs).
    • Robust conflict resolution between diametrically opposed strong beliefs.
    • Belief decay mechanisms (beliefs weakening over time if not reinforced).

The AgentBeliefStore, even with these foundational formation/update mechanisms, provides a dynamic knowledge base that heavily influences how an agent interprets its perceptions and appraises its options. The “Pattern Detector” concept allows for modular and extensible learning capabilities.
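
A minimal "Consistent Outcome Detector" in the sense described above might look like the following. The observation threshold, dictionary shapes, and proposal format are illustrative assumptions; the pattern (N consistent positive outcomes, no negatives, yields a DIRECT_EXPERIENCE belief proposal) is the one described in the Formation bullet.

```python
from collections import defaultdict

def detect_consistent_outcomes(memories: list, min_count: int = 3) -> list:
    """Propose a belief when eating a food type succeeded at least
    `min_count` times with no negative outcomes."""
    tallies = defaultdict(lambda: {"pos": 0, "neg": 0})
    for m in memories:
        if m["event_type"] != "ATE_FOOD":
            continue
        food = m["details"]["food_type_id"]
        side = "pos" if m["outcome"].startswith("POSITIVE") else "neg"
        tallies[food][side] += 1
    return [{"subject_identifier": food,
             "property": "HAS_HIGH_SATIATION_VALUE",
             "value": True,
             "source_type": "DIRECT_EXPERIENCE"}
            for food, t in tallies.items()
            if t["pos"] >= min_count and t["neg"] == 0]

# Three consistent positive berry memories, one bad mushroom:
memories = (
    [{"event_type": "ATE_FOOD", "outcome": "POSITIVE_MODERATE",
      "details": {"food_type_id": "RedBerry_Type"}}] * 3 +
    [{"event_type": "ATE_FOOD", "outcome": "NEGATIVE_HIGH",
      "details": {"food_type_id": "BlackCap_Type"}}])
proposals = detect_consistent_outcomes(memories)
```

A BeliefSystem could then merge each proposal into the AgentBeliefStore, reinforcing an existing entry or creating a new one with initial strength/confidence derived from the supporting memories.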

Agent Initial State

To ensure agents can exhibit meaningful behavior from their inception within the simulation, and to provide a concrete basis for demonstrating the cognitive loop, each agent begins with a defined initial state. This state includes starting levels for its needs, a foundational set of beliefs, and potentially some key initial memories. This initial configuration can be thought of as an “Agent Archetype” or template, which in the broader context of ATET, might be influenced by an Incarnation’s nature, inherited Eidos, or the Tapestry’s characteristics.

For the purpose of this document and the illustrative “toy example” in Section 5, we will define a simple initial state.

Example Pre-seeded Needs, Beliefs, and Memories

Let’s assume our example agent is a small, herbivorous creature. Its initial state upon creation at simulation_tick = 0 might be:

  • Initial Need Levels:

    • SustenanceNeed:
      • current_value_at_last_update: 0.7 (Partially satiated, but will need food eventually)
      • last_updated_tick: 0
      • decay_rate_per_tick: (A default value, e.g., 0.0001)
      • goal_trigger_threshold: 0.4
      • critical_threshold: 0.1
      • _scheduled_goal_trigger_event_id: (Calculated and set based on above values)
      • _scheduled_critical_event_id: (Calculated and set)
    • (Other needs like SafetyNeed, RestNeed could also be initialized here if they were part of the agent’s component set).
  • Pre-seeded BeliefEntrys in AgentBeliefStore: These represent innate knowledge or very basic, pre-learned understanding crucial for immediate survival or interaction.

    1. BeliefEntry (Edible Food Source):
      • id: (Unique)
      • subject_identifier: TypeId("RedBerryBush_Type")
      • property: IS_EDIBLE
      • value: true
      • strength: 0.6 (Moderately strong belief)
      • confidence: 0.5 (Moderately confident, open to experience)
      • last_updated_tick: 0
      • source_type: INNATE
    2. BeliefEntry (Known Food Property):
      • id: (Unique)
      • subject_identifier: TypeId("RedBerryBush_Type")
      • property: HAS_MODERATE_SATIATION_VALUE
      • value: true (Could also be a float representing expected value)
      • strength: 0.5
      • confidence: 0.4
      • last_updated_tick: 0
      • source_type: INNATE
    3. BeliefEntry (Known Threat):
      • id: (Unique)
      • subject_identifier: TypeId("WolfPredator_Type")
      • property: IS_DANGEROUS
      • value: true
      • strength: 0.8 (Strong innate caution)
      • confidence: 0.7 (Reasonably confident in this danger)
      • last_updated_tick: 0
      • source_type: INNATE
  • Pre-seeded MemoryEntrys in AgentMemoryStore (Optional for this example): For a truly “newborn” equivalent agent archetype, the AgentMemoryStore might start empty, with all initial understanding encoded purely as INNATE beliefs. Alternatively, to illustrate how early experiences shape beliefs if not innate, one could include:

    • Example Conceptual Initial Memory (if not using innate belief for edibility):
      • MemoryEntry(id: ..., timestamp: -1000 [simulating a past event], event_type: ATE_FOOD, location: ..., involved_entities: [{entity_id: SomeRedBerryBushInstance, role: FOOD_SOURCE_CONSUMED}], outcome: POSITIVE_MODERATE, significance: 0.4, details: {"food_type_id": "RedBerryBush_Type"})

For our current toy example walkthrough in Section 5, relying on the INNATE beliefs defined above will be sufficient to bootstrap behavior without needing to pre-populate many memories.

This initial state provides the agent with immediate, albeit simple, biases and knowledge to begin interacting with the world. Its Needs will drive it, its Beliefs will guide its appraisal of options, and new Memory from its actions will allow it to learn and refine these initial understandings. The process of populating these components occurs when the agent entity is first instantiated in the simulation.


“Agent seeks food.”

This section provides a step-by-step walkthrough of a simplified scenario to illustrate how the previously defined components and systems interact to produce agent behavior. Our example agent, “Herbivore-01” (H-01), will seek food.

Initial Agent State (H-01 at Tick 0)

(As defined in Section 3.4.1)

  • SustenanceNeed:
    • current_value_at_last_update: 0.7
    • last_updated_tick: 0
    • decay_rate_per_tick: 0.0001
    • goal_trigger_threshold: 0.4
    • critical_threshold: 0.1
    • _scheduled_goal_trigger_event_id: Event_A (Scheduled for Tick 3000, calculated: (0.7 - 0.4) / 0.0001 = 3000)
    • _scheduled_critical_event_id: Event_B (Scheduled for Tick 6000, calculated: (0.7 - 0.1) / 0.0001 = 6000)
  • AgentBeliefStore (Relevant Beliefs):
    • Belief(subject_type=RedBerryBush_Type, property=IS_EDIBLE, value=true, strength=0.6, confidence=0.5, source_type=INNATE)
    • Belief(subject_type=RedBerryBush_Type, property=HAS_MODERATE_SATIATION_VALUE, value=true, strength=0.5, confidence=0.4, source_type=INNATE)
    • Belief(subject_type=WolfPredator_Type, property=IS_DANGEROUS, value=true, strength=0.8, confidence=0.7, source_type=INNATE)
  • AgentMemoryStore: Starts empty.
  • CurrentPercepts: Starts empty.
  • Position: (x:10, y:5, z:0)
  • SensoryCapabilities: Assumed simple radius (e.g., 15 units).

Environmental Setup (Tick 0)

  • FoodSource_1 (RedBerryBush_Type): EntityID: FS_01, Position: (x:20, y:5, z:0), sustenance_value: 0.5. (Within H-01’s initial sensory range).
  • ThreatSource_1 (WolfPredator_Type): EntityID: TS_01, Position: (x:25, y:15, z:0). (Initially outside H-01’s sensory range, but will move closer).
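
The "within/outside sensory range" statements above reduce to a simple distance check. This sketch stands in for the PerceptionSystem's spatial-hash query; the function name is illustrative, and the positions and 15-unit radius are the ones assumed in this setup.

```python
import math

def within_sensory_range(agent_pos, entity_pos, radius):
    """Radius check standing in for the PerceptionSystem's spatial query."""
    return math.dist(agent_pos, entity_pos) <= radius

# H-01 at (10,5,0) with an assumed 15-unit sensory radius:
sees_bush = within_sensory_range((10, 5, 0), (20, 5, 0), 15)   # FS_01: 10 units
sees_wolf = within_sensory_range((10, 5, 0), (25, 15, 0), 15)  # TS_01: ~18 units
```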

Step-by-Step Walkthrough

Tick 1 - 2999: Quiescent State & Initial Perception

  • SpatialHashUpdateSystem: Updates positions of H-01, FS_01, TS_01.
  • NeedManagementSystem: No NeedThresholdReachedEvents fire yet. H-01’s SustenanceNeed is implicitly decaying.
    • Other systems can query GetCurrentNeedValue(H-01, SUSTENANCE, current_tick) which will show a decreasing value.
  • PerceptionSystem (runs each tick for H-01):
    • Tick 1: H-01 queries spatial hash. FS_01 is within range (10 units away).
    • CurrentPercepts on H-01 is updated: perceived_entities: [{entity_id: FS_01, entity_type: RedBerryBush_Type, position: (20,5,0)}].
    • Since CurrentPercepts changed from empty, NewEntityInPerceptionRangeEvent(H-01, FS_01, RedBerryBush_Type) is dispatched (assuming subscribers).
  • MemoryRecordingSystem (reacts to perception event):
    • Processes NewEntityInPerceptionRangeEvent.
    • Calculates significance (e.g., moderate, as it’s a known food type).
    • Creates MemoryEntry in H-01’s AgentMemoryStore: (timestamp:1, event_type:SAW_ENTITY, involved_entities:[{FS_01, POTENTIAL_FOOD_SOURCE}], location:(20,5,0), significance:0.3).
  • Other Systems: No goals active, so Plan/Appraise/Act phases are largely idle for H-01.
  • (This loop of perception and minor memory recording continues. TS_01 is still out of range).

Tick 3000: Sustenance Need Becomes Pressing

  • Universal Scheduler: Dispatches Event_A (NeedThresholdReachedEvent(H-01, SUSTENANCE, GOAL_TRIGGER)) for H-01.
  • NeedManagementSystem:
    • Processes Event_A. Verifies SustenanceNeed (interpolated value is now 0.7 - (0.0001 * 3000) = 0.4).
    • Clears _scheduled_goal_trigger_event_id from H-01’s SustenanceNeed.
    • Dispatches SustenanceGoalCheckRequiredEvent(H-01).
  • GoalGenerationSystem (reacts to SustenanceGoalCheckRequiredEvent):
    • H-01 has no active SeekSustenanceGoal.
    • Verifies SustenanceNeed is indeed at/below threshold (0.4).
    • Adds SeekSustenanceGoal component to H-01:
      • priority: 0.6 (calculated: 1.0 - 0.4).
  • GoalPrioritizationSystem:
    • H-01 now has one active goal: SeekSustenanceGoal.
    • Designates it as the CurrentActiveGoalFocus for H-01.
  • PerceptionSystem:
    • Continues to perceive FS_01. CurrentPercepts is stable regarding FS_01.
    • Scenario Change: Let’s say TS_01 (Wolf) has moved to (x:22, y:10, z:0) and is now also within H-01’s sensory range.
    • CurrentPercepts on H-01 is updated to include TS_01.
    • NewEntityInPerceptionRangeEvent(H-01, TS_01, WolfPredator_Type) is dispatched.
  • MemoryRecordingSystem:
    • Processes NewEntityInPerceptionRangeEvent for TS_01.
    • Significance is high due to innate belief about WolfPredator_Type being dangerous.
    • Creates MemoryEntry in AgentMemoryStore: (timestamp:3000, event_type:SAW_ENTITY, involved_entities:[{TS_01, PERCEIVED_THREAT}], location:(22,10,0), significance:0.8, emotional_valence:-0.7).
  • GoalGenerationSystem (reacts to perception of TS_01, based on belief):
    • Checks H-01’s AgentBeliefStore: Belief(subject_type=WolfPredator_Type, property=IS_DANGEROUS, value=true, strength=0.8).
    • H-01 has no active SeekSafetyGoal.
    • Adds SeekSafetyGoal component to H-01:
      • priority might be calculated based on perceived danger of Wolf (e.g., very high, 0.9).
      • threat_entity_id: TS_01.
  • GoalPrioritizationSystem (runs again due to new goal):
    • H-01 has two active goals:
      • SeekSustenanceGoal (priority: 0.6)
      • SeekSafetyGoal (threat: TS_01, priority: 0.9)
    • SeekSafetyGoal has higher effective priority due to immediate threat.
    • Updates CurrentActiveGoalFocus on H-01 to SeekSafetyGoal.
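
The GoalPrioritizationSystem's choice here reduces to picking the highest-priority active goal. A sketch, assuming goals are held as a goal-to-priority mapping (an illustrative simplification of the actual Goal components):

```python
def select_active_goal(active_goals: dict):
    """Highest-priority active goal becomes the CurrentActiveGoalFocus."""
    if not active_goals:
        return None
    return max(active_goals, key=active_goals.get)

# H-01's two competing goals at Tick 3000:
focus = select_active_goal({"SeekSustenanceGoal": 0.6, "SeekSafetyGoal": 0.9})
```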

Tick 3001: Planning and Appraising Safety Actions

  • PotentialActionDerivationSystem (for CurrentActiveGoalFocus: SeekSafetyGoal):
    • Strategies for safety: “Flee from threat.”
    • CurrentPercepts shows TS_01 at (22,10,0). H-01 is at (10,5,0).
    • Derives PotentialAction candidates:
      • PA1: MoveTo(Position directly opposite TS_01, e.g., (0,0,0))
      • PA2: MoveTo(Known safe spot if any in memory) - None for H-01 yet.
  • ActionAppraisalSystem (for PA1):
    • Invokes AppraisalCriterionEvaluators:
      • EvaluateExpectedNeedSatisfaction (Safety): High (moving away from threat is good for safety).
      • EvaluateRiskToSafety (of the Flee Action): Low initially (if path is clear).
      • EvaluateTimeCost: Moderate.
      • EvaluateImpactOnOtherNeeds: SustenanceNeed will continue to decay; this action doesn’t address it. (Negative impact on Sustenance).
    • Let’s say PA1 gets a high TotalAppraisalScore.
  • ActionSelectionSystem:
    • Selects PA1.
    • Adds MoveToTargetAction to H-01:
      • target_position: (0,0,0)
      • target_entity_id: None (fleeing to a point, not an entity)
      • goal_type_driving_action: SEEK_SAFETY
      • intended_on_arrival_interaction: BECOME_SAFE_AT_LOCATION (or similar)
  • MovementSystem:
    • Begins processing MoveToTargetAction for H-01. H-01 starts moving towards (0,0,0).
    • Schedules ArrivalAtTargetEvent for a future tick (e.g., Tick 3010).
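
The appraisal of PA1 can be sketched as a weighted sum over the criterion evaluators listed above. The specific weights and scores are invented for illustration; the architecture only specifies that evaluators produce per-criterion scores that are combined into a TotalAppraisalScore.

```python
def total_appraisal_score(criterion_scores: dict, weights: dict) -> float:
    """Weighted sum over AppraisalCriterionEvaluator outputs (scores -1..1)."""
    return sum(weights[name] * score for name, score in criterion_scores.items())

# Illustrative scores for PA1 ("flee to (0,0,0)") from the walkthrough:
pa1_scores = {"expected_need_satisfaction": 0.9,   # safety strongly served
              "risk_to_safety": -0.1,              # low risk on a clear path
              "time_cost": -0.3,                   # moderate time spent
              "impact_on_other_needs": -0.2}       # sustenance keeps decaying
weights = {"expected_need_satisfaction": 1.0, "risk_to_safety": 0.8,
           "time_cost": 0.3, "impact_on_other_needs": 0.4}
score = total_appraisal_score(pa1_scores, weights)
```

With these (invented) numbers PA1 scores 0.65, a high total that would lead the ActionSelectionSystem to pick it.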

Tick 3002 - 3009: Executing Flee Action

  • MovementSystem: H-01 continues moving.
  • NeedManagementSystem: SustenanceNeed continues to decay.
  • PerceptionSystem: H-01’s CurrentPercepts updates. TS_01 might also be moving. FS_01 might go out of range. New memories are recorded if perceptions change significantly.

Tick 3010: Flee Action Arrival & Re-evaluation

  • Universal Scheduler: Dispatches ArrivalAtTargetEvent(H-01, target_position:(0,0,0), intended_on_arrival_interaction: BECOME_SAFE_AT_LOCATION, goal_type_driving_action: SEEK_SAFETY).
  • InteractionInitiationSystem:
    • Processes event. intended_on_arrival_interaction is BECOME_SAFE_AT_LOCATION.
    • This might not add a new interaction component but directly trigger logic within a SafetyStatusUpdateSystem or generate an InteractionOutcomeEvent.
  • Let’s assume an InteractionOutcomeEvent is generated: (agent_id:H-01, interaction_type:FLEE_COMPLETED, success:true, details:{new_location:(0,0,0)}).
  • MemoryRecordingSystem: Records this successful flee action. MemoryEntry(event_type:COMPLETED_TASK, outcome:POSITIVE, details:{task:FLEE_FROM_TS_01}).
  • GoalResolutionSystem:
    • Processes outcome. Is the SeekSafetyGoal (from TS_01) resolved?
    • H-01 needs to check CurrentPercepts. If TS_01 is no longer perceived (or is very far):
      • Removes SeekSafetyGoal from H-01.
      • Generates GoalCompletedEvent(H-01, SEEK_SAFETY).
  • BeliefUpdateSystem:
    • The memory of successfully fleeing might reinforce a belief like Belief(self, property=CAN_EVADE_WOLVES, value=true, strength++, confidence++).
  • GoalPrioritizationSystem (runs again, as active goal completed):
    • SeekSafetyGoal is gone.
    • SeekSustenanceGoal (priority: now even higher as need decayed further) is the only active goal.
    • Sets CurrentActiveGoalFocus to SeekSustenanceGoal.
  • Agent is now at (0,0,0). SustenanceNeed is lower. FS_01 is likely out of perception range.

Tick 3011 Onwards: Focus Shifts to Sustenance

  • PotentialActionDerivationSystem (for CurrentActiveGoalFocus: SeekSustenanceGoal):
    • No food in CurrentPercepts.
    • Checks AgentMemoryStore: It has a memory of FS_01 at (20,5,0) from Tick 1.
    • Strategy: “Go to known food and eat.”
    • Derives PotentialAction: PA_Food: MoveTo((20,5,0)) then Eat(FS_01).
    • Strategy: “Search for new food sources.” (This would involve different actions like ExploreArea). Let’s assume for simplicity it only derives PA_Food for now.
  • ActionAppraisalSystem (for PA_Food):
    • EvaluateExpectedNeedSatisfaction: High (based on belief about RedBerryBush).
    • EvaluateRiskToSafety: Checks memory/beliefs about location (20,5,0). If TS_01 was last seen moving away from there, risk might be assessed as low to moderate. If TS_01 was seen near there, risk is high. This appraisal is key.
    • EvaluateTimeCost: Calculates travel time to (20,5,0).
  • ActionSelectionSystem: Selects PA_Food (assuming appraisal is favorable).
    • Adds MoveToTargetAction to H-01: target_position:(20,5,0), target_entity_id:FS_01, intended_on_arrival_interaction:EAT_FOOD_INTERACTION.
  • …and the cycle continues with movement, arrival, eating, and learning from that outcome.

This illustrative example demonstrates:

  • How needs trigger goals.
  • How perception informs memory and can introduce new, conflicting goals.
  • How goal prioritization handles conflicting motivations.
  • How plans are derived from goals and knowledge (memory/beliefs).
  • How actions are selected via appraisal.
  • How actions are executed and their outcomes feed back into memory, beliefs, and goal resolution.

This is, of course, a simplified flow. Many more details, alternative paths, and failure conditions would exist in a full implementation.


The architecture detailed in this document aims to support a complex and dynamic agent simulation. Ensuring that this simulation is both deterministic (for consistent replays and debugging) and performant (to allow for many interacting agents) is crucial. This section outlines the key considerations and strategies employed.

Maintaining Determinism within the Scheduled/Event-Driven Model

The primary goal for determinism in this phase of design is to be “deterministic enough for single-player replays on the same machine/build.” This means that given the same initial game state (including PRNG seeds) and the same sequence of player inputs, the simulation should unfold identically every time it is run on that specific compiled version of the game. Key practices embedded in or assumed by this architecture to achieve this include:

  • Fixed Simulation Tick Rate: All simulation logic advances in discrete, predictable steps, providing a consistent temporal framework.
  • Seeded Pseudo-Random Number Generators (PRNGs): All sources of randomness that can affect simulation state (e.g., in appraisal variations, probabilistic event outcomes) must use a PRNG initialized with a seed that is part of the game’s initial state and is saved/loaded.
  • Fixed Order of System Execution: The sequence in which core ECS systems (Perception, Need Management, Goal Generation, etc.) are updated each tick must be strictly defined and consistent.
  • Deterministic Event Processing:
    • Events dispatched by the Universal Scheduler for a given tick are processed in a deterministic order. If multiple events are scheduled for the exact same tick, a stable tie-breaking rule (e.g., event priority, then source entity ID, then event creation sequence number) must be applied.
    • The internal logic of all event handlers and systems must be deterministic, producing the same output state given the same input state and event data.
  • Controlled Floating-Point Arithmetic: While perfect cross-platform bit-for-bit floating-point determinism is a more advanced goal, for single-machine determinism, it is highly recommended to use strict compiler flags (e.g., /fp:strict or equivalents) to prevent aggressive optimizations that might alter outcomes between different builds (Debug vs. Release) or even minor code changes on the same machine. Operations on floats where order matters (e.g., summing many values) should be performed in a consistent order.
  • No Reliance on External Non-Deterministic Factors: Core simulation logic must not depend on external factors like wall-clock time, precise thread scheduling nuances (for logic within a single tick), or uninitialized memory for its state calculations.
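
As a concrete illustration of the seeded-PRNG practice, the sketch below uses mulberry32, a small well-known 32-bit generator; the engine's actual PRNG, and how its seed is threaded through save/load, are outside the scope of this document.

```typescript
// Minimal seeded PRNG (mulberry32). The seed would be part of the
// saved game state, so a replay that restores it reproduces the
// exact same number sequence.
function mulberry32(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}

// Two generators restored from the same seed stay in lockstep,
// which is the property deterministic replays depend on.
const a = mulberry32(1234);
const b = mulberry32(1234);
```

Any appraisal variation or probabilistic outcome drawn from such a generator is then reproducible tick-for-tick on the same build.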

Strategies for Managing Event Cascades and Performance

The event-driven and scheduled nature of the architecture is designed for efficiency, but it also introduces the possibility of event cascades (one event triggering others within the same tick). Performance also depends on efficient data management.

  • Managing Event Cascades (for same-tick processing):
    • Event Budgeting per Tick: The main simulation loop should enforce a budget on event processing for each tick, either by limiting the total number of events processed or the cumulative time spent. If this budget is exceeded, remaining events might be deferred to the next tick, breaking immediate cascades. The “hard tick rate” supports this by requiring the tick’s processing to complete within a fixed timeframe.
    • Cascade Depth Limits: A hard cap can be implemented on how many “internal” or consecutively triggered events can be processed within a single tick originating from a single initial trigger. This acts as a circuit breaker for runaway loops.
    • Careful System Design: Systems should be designed to avoid unintentional tight feedback loops. State changes should have clear conditions that prevent them from immediately re-triggering the same reactive logic without some mediating factor or delay.
    • Action Durations & Scheduled Completion: Many agent actions (like movement or complex interactions) naturally take multiple ticks to complete. Their outcomes are often scheduled as future events, inherently breaking immediate feedback cycles.
  • Performance of Agent Data Stores:
    • The AgentMemoryStore and AgentBeliefStore components, which internally manage collections of MemoryEntry and BeliefEntry structures, are designed to hold significant amounts of data per agent. Efficient management strategies (e.g., pruning/forgetting less significant or older entries, as discussed in Sections 3.2.2 and 3.3.2) are crucial to prevent unbounded memory growth and maintain performant querying.
    • Detailed optimization of query patterns and potential internal indexing within these stores is a subject for further refinement as agent complexity and numbers scale.
  • Scheduling Efficiency: The core principle of scheduling agent-specific events (e.g., for need thresholds, action completions) only when necessary, rather than polling every agent every tick for every possible state change, is a fundamental performance advantage of this architecture; it allows the simulation to scale to a larger number of less frequently “active” agents.
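
The budgeting and tie-breaking strategies above can be sketched as follows; SimEvent, the budget constant, and processTick are illustrative names, not engine API.

```typescript
// Hypothetical per-tick event queue with a processing budget.
interface SimEvent {
  tick: number;
  priority: number; // lower value = processed first
  seq: number;      // creation sequence number, for stable tie-breaks
  payload: string;
}

const MAX_EVENTS_PER_TICK = 3; // assumed budget; a real value would be tuned

// Deterministic ordering: priority first, then creation sequence,
// matching the tie-breaking rule described for same-tick events.
function orderEvents(events: SimEvent[]): SimEvent[] {
  return [...events].sort((x, y) => x.priority - y.priority || x.seq - y.seq);
}

// Process up to the budget; anything beyond it is deferred to the
// next tick, breaking immediate cascades.
function processTick(queue: SimEvent[]): { processed: SimEvent[]; deferred: SimEvent[] } {
  const ordered = orderEvents(queue);
  return {
    processed: ordered.slice(0, MAX_EVENTS_PER_TICK),
    deferred: ordered.slice(MAX_EVENTS_PER_TICK),
  };
}
```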

Implications of the “Deterministic for Single-Player Replays on Same Machine” Policy

Adopting this level of determinism provides significant benefits:

  • Robust Debugging: Allows developers to reliably reproduce bugs by replaying scenarios with the same initial state and input sequence.
  • Quality Assurance: Simplifies testing by allowing for automated tests with predictable outcomes.
  • Player Experience: Enables features like saving/sharing game replays or “Tapestry seeds” with input logs that will consistently reproduce a specific gameplay experience on machines running the same game build.
  • Foundation for Future Multiplayer: While not guaranteeing cross-platform lockstep out-of-the-box, it ensures the core simulation logic is sound and internally consistent. This makes transitioning to an authoritative server model for multiplayer significantly more straightforward, as the server itself can run this deterministic simulation.

This policy acknowledges that achieving bit-for-bit floating-point determinism across diverse hardware, operating systems, and compiler toolchains is a highly complex challenge, often requiring specialized techniques like fixed-point math or custom math libraries. By focusing on single-machine/build determinism initially, we achieve the most critical benefits for development and single-player features while keeping the path open for more advanced multiplayer architectures.


How living threads return

Purpose

Reincarnation enables rare and thematically significant agents to return in new forms across time. It is not a gameplay loop reset, but a symbolic recurrence system, preserving narrative identity when symbolic density demands it.

Core Data Structures

struct ReincarnationCandidate {
    agent_id: AgentId,
    symbol_tags: Vec<SymbolTagId>,
    eidos_fragments: Vec<EidosId>,
    associated_myths: Vec<MythId>,
    unresolved_quests: Vec<QuestId>,
    last_region: RegionId,
    death_tick: u64,
}
struct ReincarnatedSeed {
    new_agent_id: AgentId,
    ghost_tags: Vec<SymbolTagId>,
    memory_seeds: Vec<MemoryFragment>,
    symbolic_drift: f32,
    origin_myth: Option<MythId>,
    identity_resonance: f32,
    awaken_tick: Option<u64>,
}
enum MemoryFragment {
    EmotionalTrace(SymbolTagId),
    VisualEcho(EidosId),
    DreamResidue(String),
}

Eligibility Conditions

An agent becomes a ReincarnationCandidate if:

  • Their SymbolTag density exceeds threshold at death
  • They are referenced in one or more MythicPatterns
  • They possess unresolved Eidos or quest arcs
  • Director detects narrative tension or symbolic recurrence in future regions
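
These conditions could be checked with a predicate like the one below. The text does not state whether the conditions are conjunctive or disjunctive; the any-of reading, the threshold value, and the field names are all assumptions for illustration.

```typescript
// Assumed snapshot of the data the eligibility check would consult.
interface CandidateView {
  symbolTagDensity: number;     // SymbolTag density at death
  mythReferences: number;       // MythicPatterns referencing the agent
  unresolvedArcs: number;       // open Eidos or quest arcs
  directorTensionFlag: boolean; // Director-detected tension/recurrence
}

const TAG_DENSITY_THRESHOLD = 0.6; // hypothetical tuning value

// Any-of reading: one strong signal is enough to enter the pool.
function isReincarnationCandidate(c: CandidateView): boolean {
  return c.symbolTagDensity > TAG_DENSITY_THRESHOLD
      || c.mythReferences > 0
      || c.unresolvedArcs > 0
      || c.directorTensionFlag;
}
```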

Reincarnation Lifecycle

  1. Detection (at death)

    • Candidate evaluated for reincarnation flag
    • Stored in reincarnation pool with decay timer
  2. Symbolic Resonance Scan

    • Director or worldgen scans for new agents matching symbolic context
    • May match at birth or during significant events (rituals, dreams, crises)
  3. Seeding

    • ReincarnatedSeed structure created and attached to new agent
    • Includes:
      • Ghost tags
      • Dormant memory fragments
      • Symbolic drift factor (mutation risk)
  4. Awakening (optional)

    • Triggered when:
      • Tag resonance aligns
      • Memory fragments reconstructed through experience
      • Myth is revisited by player or culture

Tag Drift Mechanics

fn mutate_tags(seed: &ReincarnatedSeed, drift: f32) -> Vec<SymbolTagId>
  • Drift factor determines mutation of ghost tags
  • May invert or transform symbolic legacy
  • Allows reincarnated agent to echo without copying
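
A sketch of the drift rule in that spirit (the design gives only the Rust signature above; the inversion table, the echo_of_ fallback, and the per-tag roll are invented for illustration):

```typescript
type SymbolTagId = string;

// Hypothetical symbolic inversions: drift may flip a tag into its
// opposite rather than merely copying the legacy forward.
const INVERSIONS: Record<SymbolTagId, SymbolTagId> = {
  guardian: "betrayer",
  light: "shadow",
};

// rng should come from the simulation's seeded PRNG so that drift
// stays deterministic across replays.
function mutateTags(ghostTags: SymbolTagId[], drift: number, rng: () => number): SymbolTagId[] {
  return ghostTags.map(tag =>
    rng() < drift ? (INVERSIONS[tag] ?? `echo_of_${tag}`) : tag);
}
```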

Awakening Mechanics

fn check_awakening(agent: &AgentState) -> bool
  • Checks for:
    • Reconstructed memory patterns
    • Matching Eidos constellation
    • Director narrative resonance score
  • Once awakened:
    • Agent gains past identity traits
    • May inherit behaviors, resistances, beliefs, or goals

Integration Points

System              Role in Reincarnation
Memory              Supplies echo fragments
Eidos               Triggers mythic recurrence
Director AI         Selects reincarnation events or candidates
Culture             May record or propagate prophecy/myth link
Quest System        Links past/future agents via shared threads
Incarnation System  Supports symbolic seeding and awakening

Design Constraints

  • Reincarnation is rare and symbolic; not a progression mechanic
  • It emphasizes mythic recurrence and long-form narrative echo
  • Performance-sensitive: only applies to tracked Mythic agents
  • Awakening is optional, gradual, and diegetically triggered

Summary

Reincarnation preserves symbolic continuity across time. It’s not about cheating death, but about narrative completion. It allows past selves to haunt the present and gives player actions meaning beyond a single life.


Tension Mapping & Emergent Resolution

System Purpose

To detect emergent narrative tensions in the simulation and encode them into persistent, interpretable structures the player may engage with. Quests are not given; they form from the fabric of play.

Core Loop Overview

  1. Detect Narrative Tensions
    Arising from contradictions, desires, traumas, symbols, or social relationships

  2. Evaluate Significance
    Score based on emotional/memetic density, symbolic weight, cultural instability

  3. Encode as Latent Threads
    Quests are hidden until interpreted by the player through subjective experience

  4. Resolve Dynamically
    Resolution occurs through narrative consequence, not quest logic; a quest can dissolve, persist, evolve, or recur

  5. Reflect and Transmute
    Resolved quests yield Eidos; unresolved quests may become myth or trauma in future Tapestries
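
Step 2 of the loop might reduce to a weighted sum; the weights, field names, and 0–1 signal ranges below are assumptions, not specified values.

```typescript
// Assumed normalized signals feeding the significance score.
interface TensionSignals {
  emotionalDensity: number;    // emotional/memetic density, 0..1
  symbolicWeight: number;      // 0..1
  culturalInstability: number; // 0..1
}

// Hypothetical weights; a tension whose score clears some threshold
// would then be encoded as a LatentQuest.
const WEIGHTS = { emotional: 0.4, symbolic: 0.4, cultural: 0.2 };

function significance(s: TensionSignals): number {
  return WEIGHTS.emotional * s.emotionalDensity
       + WEIGHTS.symbolic * s.symbolicWeight
       + WEIGHTS.cultural * s.culturalInstability;
}
```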

Inputs

Source  Data
NPCs    Internal conflict (e.g., incompatible Faiths), unmet needs, trauma
Player  Significant past actions, accumulated Eidos, broken Fictions
World   Symbolic motifs, rituals disrupted, mythic structures disturbed
Memory  Unresolved past Threads, prior Incarnation echoes

Internal Data Structures

struct NarrativeTension {
    id: Uuid,
    source_entity: EntityId,
    tension_type: TensionKind,
    causes: Vec<Cause>,
    symbolic_tags: Vec<SymbolTag>,
    historical_context: Option<ThreadId>,
    urgency: f32,
    discoverability: f32,
    is_manifest: bool,
}
enum TensionKind {
    FaithConflict,
    CulturalInstability,
    SymbolicRecurrence,
    PersonalTrauma,
    PropheticCycle,
}
struct LatentQuest {
    tension: NarrativeTension,
    interpretation_vectors: Vec<Interpretation>,
    active_effects: Vec<Effect>,
    revealed: bool,
}

Systems Interaction

Subsystem             Role
Belief Model          Determines how tensions are interpreted into quests
Memory System         Stores unresolved quests and echoes them forward
Subjective Interface  Controls whether and how quests are perceived
Director AI           May reinforce or accelerate symbolic resonance
Faction/Culture Sim   Propagates or suppresses cultural tensions

Player Interaction Model

  • The player does not receive quest markers
  • Discovery is through:
    • Symbolic cues (e.g., shared dreams, strange omens)
    • Behavioral patterns (e.g., same NPC seen praying at ruins)
    • Emergent conversation or internal monologue
  • Interpretation creates a new Thread in the player’s log

Resolution Model

Resolution Type  Mechanism
Narrative        Belief change, reconciliation, death, exodus
Symbolic         Ritual enacting a new interpretation
Mythic           Event transcends character resolution, becomes Eidos motif
Aborted          Missed or forgotten; may recur distorted in future Tapestry

Example: NPC Torn Between Faiths

Tension Detection
An NPC believes both:

  • “All life is sacred” (Faith A)
  • “My god demands sacrifice” (Faith B)

LatentQuest Creation
Type: FaithConflict
Causes: Conflicting beliefs plus external pressure to act
Symbol Tags: sacrifice, guilt, purity

Player Interacts
The player may:

  • Counsel one belief
  • Propose a symbolic synthesis
  • Exploit the contradiction for power
  • Ignore it

Resolution
If resolved, generates Eidos.
If not, the NPC may fracture, die, or start a new religion.

Design Philosophy

  • Quests are narrative weather; atmospheric conditions the player reads, not puzzles they solve
  • No binary resolution states; outcomes may be contradictory, open-ended, or only understood in hindsight
  • Quests as living memory; they persist in myth, change form, and shape future world generation

A Symbolic Crucible for Metaphysical Simulation

This prototype is a focused test-bed for core gameplay systems. It is intended to evaluate several key mechanisms in ATET: belief formation and transformation, the impact of internal state on perception, symbolic memory encoding, and the persistence of narrative consequences across distinct gameplay cycles. While small in scope, this scenario is designed to stress-test the systems that underpin emergent narrative coherence and simulate complex belief-driven behavior.

Scenario Summary: Shrine, Witness, and Symbol

An isolated Incarnation stumbles upon a dilapidated shrine nestled within a clearing characterized by an unsettling atmospheric ambiguity. The landscape is rendered based on variables tied to the Incarnation’s internal state. These include accumulated Eidos (symbolic memory fragments), internalized Faiths (belief systems), and inherited Fictions (cultural narratives from previous cycles or Tapestry defaults). These factors control a subjective interface layer that dynamically modulates the shrine’s audio-visual presentation, ranging from decayed and mundane to sacred or surreal, depending on the player character’s narrative and psychological profile.

In proximity to the structure, the Caretaker appears: a solitary, enigmatic NPC clad in ritual attire, who speaks in an oracular idiom combining poetics, liturgy, and myth. The Caretaker offers a Fictional account: “This is the place where the moon expired. Since then, the veins of the world have run dry.” This statement is not an objective datum, but a narrative payload modeled as a belief node. Each such node consists of a symbolic identifier, a source reference (e.g., Caretaker.Faith), an optional rhetorical expression (used in dialog delivery), and linkage metadata to define its interpretive context. Upon reception, the player’s BeliefStore evaluates this node for plausibility and resonance, modulated by the Incarnation’s current belief schema, Eidos fragments, and emotional state. This mechanism allows for belief propagation, mutation, or rejection, enabling emergent ideological evolution across playthroughs.

Presented with this scenario, the player, through their Incarnation, may choose from three core interventions:

  1. Ritually enact the Caretaker’s prescription and participate in the shrine’s mythology
  2. Deface or desecrate the structure, rejecting or rewriting its implied meaning
  3. Disengage entirely, refusing to interpret, embodying apatheia or silent witness

Each action produces direct consequences for the simulation’s local logic, but more importantly, it encodes symbolic and emotional content into Eidos, which persists as a mutable cognitive artifact across subsequent lives. This memory may transmute over time, reflecting the interpretive drift inherent in layered experience.

Philosophical and Systemic Dimensions Under Evaluation

1. Subjective Interface: Perception as Semiotic Filter

  • The appearance, resonance, and auditory signature of the shrine are mediated by the Incarnation’s psychological schema and metaphysical orientation.
  • A rational empiricist may register nothing but broken stone and fungal overgrowth. A mystic may perceive radiant geometric patterns or the faint resonance of ancestral hymns. An Incarnation carrying unresolved trauma may see the shrine as a threatening ruin, its form grotesquely anthropomorphized.
  • This mechanism tests whether the UI can meaningfully reflect internal states; transforming data display into a component of characterization.

2. Ideological Vectoring and Belief Plasticity

  • The Caretaker’s narrative is not static lore but a potential seed for belief propagation.
  • Depending on context, prior Eidos, and emotional state, the player may absorb the Fiction, reframe it, contest it, or nullify it entirely.
  • The system must support cognitive dissonance, ideological layering, and reinterpretation across time: e.g., “The moon’s death” may evolve into a belief in sacrificial cosmogenesis or degrade into paranoid taboo.

3. Symbolic Memory as Mutable Substance

  • Every meaningful engagement with the shrine produces an Eidos fragment: a structured memory object encoded at runtime through the MemoryStore subsystem. Each fragment includes fields for event_type, location_id, subjective_overlay (a perceptual snapshot), emotional_valence, motif_tag, and an optional list of belief_references and thread_linkages. These fragments are instantiated by a state machine that responds to symbolic triggers and agent actions. On recall, Eidos fragments may pass through a transformation layer, modulated by current beliefs or emotional state, which can mutate their content or recontextualize their symbolic weight. This mechanism enables layered memory recursion and belief-informed narrative reinterpretation.
  • These fragments are not static records but living systems: when recalled, they may shift, deepen, or conflict with newer memories.
  • Example: “Conducted ritual beneath decaying monolith. Felt sorrow. Saw flickering image of mother. Moon not dead; only lost.”

4. The Dialectic of Fact, Fiction, and Faith

  • The shrine may contain clues to a prior non-mythic function; e.g., astronomical alignment, ancient data caches, or infrastructural remnants.
  • Interpreting such clues introduces tension between empiricism (Fact), inherited cosmology (Faith), and narrative affect (Fiction).
  • The system should permit both rational and mythopoetic interpretations to coexist or compete within the same cognitive framework.

5. Recursion and Echo Across Tapestries

  • In subsequent runs, the shrine may recur; rebuilt, mythologized, demonized, or forgotten.
  • The Incarnation may encounter dreams, symbols, or cultural practices that echo their prior decisions, distorted by generational reinterpretation.
  • Examples include:
    • A holy order guarding a reconstructed shrine
    • A forbidden site marked by ancestral guilt
    • An NPC recalling a legend that subtly misquotes the player’s former actions

Entity Architecture and Simulation Interfaces

Incarnation (Player-Controlled Agent)

  • Core Drives: Epistemic closure, symbolic recognition, metaphysical positioning
  • MemoryStore: Eidos fragments accrued from lived experience
  • BeliefStore: Compositional structure of nested or conflicting ideologies
  • Interface Overlay: Dynamically shifting sensory and semantic filters informed by internal state

Shrine (Semiotic Locus)

  • Material Parameters: Object ID, construction age, entropy state, spatial resonance field
  • Symbolic Hooks: Interpretable motifs (e.g., crescent, flame, spiral)
  • Truth Kernel (optional): Historical metadata, factual records, or anomalies
  • Potential States: Revered, defiled, ignored, replicated, abandoned

Caretaker (Narrative Vector NPC)

  • Faith Encoding: Stores and transmits core cosmological Fiction
  • Behavior Model: Branching logic based on player choice and emotional resonance
  • Rhetorical Apparatus: Symbolic lexicon that can be mimicked, mutated, or rejected by players

Player Action Vectors and Symbolic Effects

Perform Ritual

  • Narrative Consequences:
    • Participates in local mythology; updates BeliefStore
    • May produce visions, omens, or UI overlays suggesting metaphysical rupture
    • High symbolic Eidos yield with reverential tone

Deface Shrine

  • Narrative Consequences:
    • Enacts ideological rupture; introduces iconoclasm or nihilism
    • Alters memory state and future instantiations of the shrine
    • May induce NPC hostility or spiritual backlash

Disengage

  • Narrative Consequences:
    • Refusal becomes symbolic in itself; it creates a lacuna in memory that may return as a dream or hallucination
    • NPC may interpret silence variably; as transcendence, cowardice, or divine detachment

Eidos Fragment Schema

Rule Evaluation Engine

Type Signatures

// Eidos fragment structure
interface Eidos {
  location: string;
  event: string;
  subjective_overlay: string;
  valence: Emotion;
  belief_influence: string[];
  thread_link: string;
  motif: string;
  conflict_overlay?: ConflictLog[];
}
 
// Transformation rule
interface Rule {
  rule_id: string;
  trigger_motifs: string[];
  required_beliefs: string[];
  emotional_state_shift: [Emotion, Emotion];
  conflict_threshold: number;
  mutation: MutationSet;
  intent_domain: 'affirmation' | 'negation' | 'reinterpretation' | 'absorption';
}
 
// Mutation instruction set
interface MutationSet {
  subjective_overlay?: string;
  valence?: string;
  motif?: string;
  belief_influence?: string;
  thread_link?: string;
  bias_weight?: string;
}
 
// Conflict logging format
interface ConflictLog {
  rule_id: string;
  timestamp: number;
  suppressed_by: string;
}
 
// Evaluation context
interface RecallContext {
  beliefs: string[];
  emotional_state: Emotion;
  current_time: number;
}
 
type Emotion = 'reverence' | 'grief' | 'shame' | 'clarity' | 'wonder' | 'dread' | 'terror';

This engine governs the application of transformation rules when Eidos fragments are recalled. It ensures memory reinterpretation is context-sensitive, deterministic, and thematically coherent.

Stage 1: Fragment Recall Trigger

Invoked when symbolic cues (e.g. shrine revisit, dialog phrase, vision) trigger a memory:

function onRecall(fragment: Eidos): TransformedEidos

Stage 2: Candidate Rule Retrieval

Filters rules by matching:

  • trigger_motifs
  • required_beliefs
  • emotional_state_shift
function filterApplicableRules(fragment: Eidos, beliefs: BeliefStore): Rule[]
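
Under assumed data shapes, the Stage 2 filter might look like the sketch below. The reading of emotional_state_shift (first element matches the fragment's written valence, second the agent's current emotion) follows the field's description later in this document, but is still an interpretation.

```typescript
// Reduced rule shape for the filter; mirrors only the Rule fields
// this stage actually consults.
interface RuleLite {
  rule_id: string;
  trigger_motifs: string[];
  required_beliefs: string[];
  emotional_state_shift: [string, string]; // [at writing, at recall]
}

// A rule is applicable when it shares a motif with the fragment,
// all its required beliefs are held, and the emotional shift matches.
function filterApplicableRules(
  fragmentMotifs: string[],
  fragmentValence: string,
  beliefs: string[],
  currentEmotion: string,
  rules: RuleLite[],
): RuleLite[] {
  return rules.filter(r =>
    r.trigger_motifs.some(m => fragmentMotifs.includes(m)) &&
    r.required_beliefs.every(b => beliefs.includes(b)) &&
    r.emotional_state_shift[0] === fragmentValence &&
    r.emotional_state_shift[1] === currentEmotion);
}
```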

Stage 3: Scoring and Ranking

Assigns dynamic priority scores:

function scoreRules(rules: Rule[], context: RecallContext): ScoredRule[]

Stage 4: Conflict Resolution

  • Sorts rules
  • Filters intent domains
  • Logs suppressed rules
function resolveConflicts(scored: ScoredRule[]): Rule[]

Stage 5: Mutation Application

Applies mutation operators sequentially:

function applyMutations(fragment: Eidos, rules: Rule[]): Eidos

Stage 6: Logging and Output

Finalizes memory changes, records applied and latent rules:

function finalize(fragment: Eidos): TransformedEidos

Optional: Latent Mutation Layer

Stores suppressed but contextually relevant rules for potential future reactivation.

Hierarchical Resolution Strategy

When multiple transformation rules match a recalled Eidos fragment, the system evaluates them in a prioritized sequence to determine which rule(s) to apply. This strategy ensures determinism, thematic coherence, and conflict-aware memory evolution.

Stepwise Resolution Algorithm

  1. Priority Score Computation
    Each rule is assigned a dynamic priority score at runtime:

    score = (match_weight × motif_match_count)
          + (belief_match_weight × required_beliefs_matched)
          + (valence_shift_weight × emotional_shift_alignment)
          + (bias_weight × conflict_threshold)
    

    Suggested default weights:

    • match_weight = 2.0
    • belief_match_weight = 1.5
    • valence_shift_weight = 1.2
    • bias_weight = 1.0
  2. Rule Sorting
    Rules are sorted by descending score. If scores are equal, resolve by:

    • Recency of rule application (less recently used is preferred)
    • Specificity (more unique motifs preferred)
    • Lexical order of rule ID (as fallback)
  3. Rule Category Filtering
    Each rule is tagged by its intent domain:

    • affirmation: reinforce existing memory interpretation
    • negation: contradict or nullify past memory
    • reinterpretation: shift meaning within a compatible framework
    • absorption: integrate conflicting or hybrid narratives

    If two rules are in conflict (e.g., affirmation vs. negation), the higher-priority rule is applied, and the other is logged.

  4. Conflict Logging and Residuals
    Suppressed rules are recorded in the fragment’s conflict_overlay field:

    "conflict_overlay": [
      {
        "rule_id": "CaretakerVoidRejection",
        "timestamp": 432004.12,
        "suppressed_by": "MoonGriefMutation"
      }
    ]
  5. Composite Application (Optional)
    If allow_composition = true, compatible rules may be applied sequentially when:

    • Their mutation targets do not conflict
    • Their intent domains are complementary (e.g., affirmation + reinterpretation)
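
The priority formula in step 1 transcribes directly to code; the input field names are assumptions, and the weights are the suggested defaults listed above.

```typescript
// Suggested default weights from the resolution strategy.
const W = { match: 2.0, belief: 1.5, valence: 1.2, bias: 1.0 };

// Assumed per-rule match statistics computed earlier in the pipeline.
interface ScoreInput {
  motifMatchCount: number;   // motifs shared with the fragment
  beliefsMatched: number;    // required beliefs the agent holds
  shiftAlignment: number;    // emotional shift fit, 0..1
  conflictThreshold: number; // taken from the rule itself
}

// score = match_weight × motifs + belief_weight × beliefs
//       + valence_weight × alignment + bias_weight × threshold
function priorityScore(r: ScoreInput): number {
  return W.match * r.motifMatchCount
       + W.belief * r.beliefsMatched
       + W.valence * r.shiftAlignment
       + W.bias * r.conflictThreshold;
}
```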

Transformation Rule Format

Mutation Operator Functions

Each entry in the mutation object refers to a symbolic transformation applied to an Eidos fragment field at recall time. The supported operators include:

  • Replace(target, substitute): Replaces a specific symbol, phrase, or token in a string field (e.g., subjective_overlay).
  • override(value): Fully replaces the target field with the specified value. Used for categorical fields such as valence or event.
  • append(tag): Adds a new element to a list-type field such as motif, belief_influence, or thread_link, ensuring no duplication.
  • remove(tag): Removes the specified element from a list field if present.
  • recontextualize(original, replacement): Modifies a symbolic reference or thread ID to reinterpret its thematic role (e.g., transforming a hopeful thread into a tragic variant).
  • bias_weight(factor): Alters the internal weighting of the memory’s recall likelihood or intensity. Used for probabilistic memory surfacing.

These operators are applied in order and are evaluated only if the transformation rule’s preconditions are met.
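
Three of these operators, applied in order, might be implemented as below. The rule examples in this document store operators as strings (e.g. "override('grief')"); the typed Mutation representation here is an assumption about how those strings would be parsed internally.

```typescript
// Reduced fragment shape covering the fields these operators touch.
interface Fragment {
  subjective_overlay: string;
  valence: string;
  motif: string[];
}

// Assumed parsed form of the string-encoded mutation operators.
type Mutation =
  | { op: "replace"; field: "subjective_overlay"; target: string; substitute: string }
  | { op: "override"; field: "valence"; value: string }
  | { op: "append"; field: "motif"; tag: string };

function applyMutations(fragment: Fragment, mutations: Mutation[]): Fragment {
  // Copy first: the stored fragment stays untouched until the
  // transformed version is committed.
  const out = { ...fragment, motif: [...fragment.motif] };
  for (const m of mutations) {
    if (m.op === "replace") {
      // Replace(target, substitute) on a string field.
      out.subjective_overlay = out.subjective_overlay.split(m.target).join(m.substitute);
    } else if (m.op === "override") {
      // override(value) fully replaces a categorical field.
      out.valence = m.value;
    } else if (!out.motif.includes(m.tag)) {
      // append(tag) adds to a list field, ensuring no duplication.
      out.motif.push(m.tag);
    }
  }
  return out;
}
```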

{
  "trigger_motifs": ["lunar_fall"],
  "required_beliefs": ["Caretaker.Faith.MoonDeath"],
  "emotional_state_shift": ["reverence", "grief"],
  "conflict_threshold": 0.7,
  "mutation": {
    "subjective_overlay": "Replace('warmth', 'chill')",
    "valence": "override('grief')",
    "motif": "append('abandonment')",
    "thread_link": "recontextualize('first_glimpse_of_truth', 'burial_of_light')"
  }
}
{
  "trigger_motifs": ["cleansing", "flame"],
  "required_beliefs": ["Incarnation.Faith.InnerPurity"],
  "emotional_state_shift": ["shame", "clarity"],
  "conflict_threshold": 0.5,
  "mutation": {
    "subjective_overlay": "Replace('smoke', 'light')",
    "valence": "override('clarity')",
    "motif": "append('illumination')",
    "belief_influence": "append('Self.Forgiveness')"
  }
}
{
  "trigger_motifs": ["despair"],
  "required_beliefs": ["Caretaker.Faith.VoidReturn"],
  "emotional_state_shift": ["wonder", "dread"],
  "conflict_threshold": 0.9,
  "mutation": {
    "subjective_overlay": "Replace('resonant silence', 'howling emptiness')",
    "valence": "override('terror')",
    "motif": "append('oblivion')",
    "bias_weight": "bias_weight(0.95)"
  }
}

Fields Defined

  • trigger_motifs: Required motif tags for the rule to match.
  • required_beliefs: List of belief identifiers that must be present.
  • emotional_state_shift: Expected change in emotional context between writing and recall.
  • conflict_threshold: Degree of dissonance (0.0–1.0) that activates memory mutation.
  • mutation: Symbolic transformations applied to the fragment content.
{
  "location": "Shrine_Clearing",
  "event": "Performed Ritual",
  "subjective_overlay": "Soft moonlight, warmth, voice of ancestor",
  "valence": "reverence",
  "belief_influence": ["Caretaker.Faith.MoonDeath"],
  "thread_link": "first_glimpse_of_truth",
  "motif": "lunar_fall"
}

Note: Fragments are composable and recursive. Each recollection may result in narrative interpolation, mutation, or contradiction, driven by a transformation rule evaluated at recall-time. For instance, a fragment initially recorded as: “Felt warmth during ritual; remembered mother’s voice under moonlight” may be recalled after a traumatic event as: “The warmth was false; moonlight masked abandonment”. This transition is governed by a state-check on emotional valence, active beliefs, and whether conflicting fragments have been reinforced. The system compares fragment motifs and thread links to establish relevance, applies bias weighting, and mutates the content accordingly, allowing for recursive recontextualization of past meaning.

Mythic Sediment and Procedural Recurrence

The shrine’s symbolic echo may manifest across timelines in the following forms:

  • Cultural Artifact: A new faith or ritual grows around the remembered act
  • Psychic Residue: Recurring dreams, compulsive behaviors, or belief ghosts
  • Material Reinstantiation: The shrine reappears with altered architecture, inscriptions, or effects
  • Narrative Inversion: A later Incarnation seeks to reverse or undo what was done

Examples of Emergent Mythopoeia

  • “The bloodless moon was awakened by a nameless child.”
  • “In the clearing where silence broke, a voice called down fire.”

Evaluation Matrix

  • Compact yet Symbolically Dense: Few entities, maximal interpretive affordances
  • Philosophically Instructive: Directly engages with game’s metaphysical thesis
  • Technically Diagnostic: Stresses systems for perception, belief dynamics, and narrative persistence
  • Replayable in Intentional Mode: Each run tests divergent interpretive configurations

Summary of Technical Objectives

This prototype serves as a focused diagnostic scaffold for evaluating key subsystems of ATET. By isolating symbolic interaction, subjective perception, and recursive memory transformation within a controlled ritual encounter, the scenario validates:

  • The expressiveness and reliability of subjective rendering logic based on player-state-driven filters.
  • The correctness and flexibility of belief ingestion and propagation mechanisms, including layered ideological structures and emotional modulation.
  • The robustness of Eidos memory schema, supporting mutable recall, state-based transformation, and inter-fragment linkage.
  • The conflict-resolution capabilities of the rule evaluation engine, enabling context-sensitive reinterpretation and memory narrative drift.
  • The system’s ability to support symbolic recurrence across cycles, forming the basis for emergent cultural sediment and personalized mythopoeia.

These capabilities form the semantic backbone of the simulation layer, and the technical insights gathered here will guide the expansion into more complex multi-agent, multi-threaded Tapestries.


Future Considerations

The architecture outlined in this document provides a foundational framework for a simple, learning agent. It is designed to be extensible. As development progresses, numerous areas can be expanded to create more sophisticated, nuanced, and diverse agent behaviors, aligning with the rich narrative and philosophical goals of ATET. This section briefly highlights some key areas for such future considerations.

More Complex Needs & Goals

  • Social Needs: Beyond basic survival, agents could develop needs for belonging, social interaction, status, or validation, leading to goals like forming relationships, joining factions, or seeking approval.
  • Intellectual/Creative Needs: Agents might develop needs for knowledge, understanding, exploration of the unknown, or even artistic expression, leading to goals related to research, discovery, or creation.
  • Hierarchical & Dynamic Goals: Goals could become more complex, with sub-goals and dependencies, requiring more advanced planning capabilities than simple action derivation. Goal priorities could shift more dynamically based on intricate emotional states or long-term ambitions.

Advanced Memory Systems

  • Episodic vs. Semantic Memory: Distinguishing between specific event memories (episodic) and generalized factual knowledge (semantic, which is somewhat captured by beliefs but could be expanded).
  • Abstract Concept Formation: Moving beyond simple pattern detection to form memories and understanding of abstract concepts (e.g., “justice,” “betrayal,” “hope”) derived from multiple, varied experiences.
  • Memory Association & Recall: More sophisticated mechanisms for associative recall, where one memory or percept triggers related but not obviously connected memories, influencing thought and decision-making.
  • Narrative Memory: Agents developing a “story of self” by linking key memories into a coherent personal narrative, which in turn shapes their identity and future choices – a core ATET theme.

Advanced Belief Systems

  • Complex Belief Structures: Beliefs about beliefs (meta-cognition), conditional beliefs (“if X happens, then Y is likely true”), and belief networks where beliefs have intricate interdependencies.
  • Reasoning and Deduction: Implementing more sophisticated inference mechanisms, allowing agents to deduce new beliefs from existing ones through logical rules, even if kept relatively simple to avoid full symbolic AI complexity.
  • Managing Conflicting Beliefs: Developing more nuanced ways for agents to handle cognitive dissonance, such as seeking information to resolve conflict, maintaining paradoxical beliefs with varying degrees of behavioral influence, or undergoing significant belief shifts (paradigm changes).
  • Faith and Fiction Integration: Explicitly modeling how widely adopted “Fictions” can crystallize into shared “Faiths” within agent groups, influencing collective behavior and interpretation of “Facts,” directly tying into ATET’s core terminology.

Personality Traits and Emotional Models

  • Explicit Personality Components: Adding components that define an agent’s core personality traits (e.g., cautious/bold, optimistic/pessimistic, empathetic/selfish, curious/apathetic).
  • Influence on Cognition: These traits would systematically influence various parts of the cognitive loop:
    • Need decay rates or thresholds.
    • Significance/emotional valence assigned to memories.
    • Initial strength/confidence of certain belief types.
    • Weightings used in GoalPrioritizationSystem and ActionAppraisalSystem (e.g., a cautious agent weighs risk evaluators more heavily).
  • Dynamic Emotional State: A more detailed emotional model beyond simple emotional_valence in memories, where current mood affects perception, belief accessibility, and decision-making.

Communication and Social Interaction

  • Complex Dialogue Systems: Moving beyond simple information exchange to nuanced conversations involving persuasion, deception, negotiation, and emotional expression, influenced by beliefs about the interlocutor and social context.
  • Social Relationship Modeling: Explicit components and systems to track relationships between agents (e.g., trust, liking, enmity, obligation), which are formed and modified through interactions and observations.
  • Group Dynamics & Faction Behavior: Simulating how agents form groups, develop shared goals and beliefs (factions), establish hierarchies, and engage in inter-group cooperation or conflict.

These future considerations represent avenues for deepening the simulation’s complexity and fidelity, allowing agents to more fully embody the introspective and narrative-rich experiences central to ATET. The foundational systems described in this document are intended to provide the necessary hooks and underlying structure to support such growth.