AuraCoreCF: a local‑first cognitive runtime (not another chatbot wrapper)

Source: DEV Community
Most “AI agents” today are just chatbots with longer prompts and a vector DB bolted on the side. They feel smart for a few turns, then forget you, lose the plot, or hallucinate their own state. Over the last few months I’ve been building something different: AuraCoreCF, a local‑first cognitive runtime that treats the language model as the voice, not the mind. The “mind” is an explicit internal state engine that lives outside the model and persists over time.

What Aura actually does

Aura runs alongside your local LLM (e.g., Ollama) and keeps a continuous internal state across sessions, instead of stuffing more tokens into a prompt and hoping. Under the hood it maintains seven activation fields (attention, meaning, goal, trust, skill, context, identity), each a 64‑dimensional vector that evolves over time. On every cycle, a small salience resolver decides what actually matters right now based on recency, momentum, and relevance, then builds a field‑weighted system prompt for the model.
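To make the cycle concrete, here is a minimal sketch of what a salience resolver like this could look like. This is my own illustrative reconstruction, not AuraCoreCF's actual code: the names (`FieldState`, `resolve_salience`, `build_system_prompt`), the 0.4/0.2/0.4 weighting, and the exponential recency decay are all assumptions layered on the description above (seven fields, 64‑dim vectors, recency + momentum + relevance scoring).

```python
import math
import time
from dataclasses import dataclass

# The seven activation fields named in the article; 64 dims each.
FIELDS = ["attention", "meaning", "goal", "trust", "skill", "context", "identity"]
DIM = 64

@dataclass
class FieldState:
    vector: list          # 64-dim activation vector for this field
    last_update: float    # timestamp of last change, used for recency
    momentum: float       # smoothed rate of recent change, in [0, 1]

def cosine(a, b):
    """Cosine similarity; 0.0 when either vector is all zeros."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def resolve_salience(states, query_vec, now=None, half_life=300.0):
    """Score each field by recency, momentum, and relevance to the query.
    The 0.4 / 0.2 / 0.4 mix is an illustrative choice, not Aura's."""
    now = time.time() if now is None else now
    scores = {}
    for name, st in states.items():
        recency = math.exp(-(now - st.last_update) / half_life)  # decays over time
        relevance = max(0.0, cosine(st.vector, query_vec))        # match to input
        scores[name] = 0.4 * recency + 0.2 * st.momentum + 0.4 * relevance
    total = sum(scores.values()) or 1.0
    return {k: v / total for k, v in scores.items()}  # normalized field weights

def build_system_prompt(weights, snippets):
    """Weave per-field text snippets into a prompt, highest weight first,
    dropping fields that barely matter this cycle."""
    ordered = sorted(weights, key=weights.get, reverse=True)
    return "\n".join(
        f"[{f} w={weights[f]:.2f}] {snippets[f]}"
        for f in ordered if weights[f] > 0.1
    )
```

The point of normalizing the scores is that the prompt budget stays fixed: when one field spikes (say, `goal` after a new task arrives), the others shrink proportionally instead of the prompt growing without bound.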