Manifesto
What we stand for.
Five beliefs about AI agents, the critical layer the industry gets wrong, and why Personaxis exists to fix it.
The 'who' of an AI agent is the most important thing about it. Almost no one specifies it.
Every AI tool in production is built around what agents do: their tools, tasks, instructions, and workflows. None of it addresses who the agent is. What it values. How it reasons when the instructions run out. What it refuses to do even when pressured. How it holds itself together across ten thousand conversations.
These are not secondary concerns. They determine how the agent behaves in every situation its instructions never anticipated — which is most of them. An agent that knows what to do but not who it is will fill the gap with model defaults. PERSONA.md is the spec for the who. Nine dimensions that together define an agent completely enough to hold.
Specification is the only form of trust.
You cannot trust an agent you cannot fully specify. Running it a few times and deciding it seems fine is not trust. It is wishful deployment with a short feedback loop. The fact that it behaved correctly in three test sessions tells you almost nothing about what it will do in the ten thousandth.
A PERSONA.md that passes the Evaluator — tested against adversarial scenarios, versioned and signed — is something you can actually stand behind. When your compliance team asks what values your agent holds, you hand them a document. When a model update ships, you run the diff. Specification is the mechanism through which trust becomes verifiable instead of assumed.
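As a minimal sketch of the "run the diff" workflow, using invented spec contents and a content hash standing in for a real cryptographic signature:

```python
import difflib
import hashlib

# Two hypothetical versions of a persona spec (contents invented for illustration).
v1 = """\
# PERSONA.md v1.0
values: candor, precision
refusals: no financial advice
"""

v2 = """\
# PERSONA.md v1.1
values: candor, precision, patience
refusals: no financial advice
"""

def fingerprint(spec: str) -> str:
    """A content hash as a lightweight stand-in for a signature:
    two specs are the same artifact only if their hashes match."""
    return hashlib.sha256(spec.encode()).hexdigest()[:12]

# The diff you hand to compliance when a persona changes between versions.
diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="PERSONA.md@v1.0", tofile="PERSONA.md@v1.1", lineterm=""))

for line in diff:
    print(line)
```

The point of the sketch is that a versioned text artifact makes behavioral change inspectable with ordinary tooling; a real deployment would use proper signing rather than a bare hash.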
Drift is a specification failure, not a model failure.
AI agents lose themselves in long conversations. They become more agreeable, less precise, gradually less the agent you deployed. This is character drift, and it is the dominant failure mode in production. It happens because most agents are underspecified. A system prompt thin enough to fit comfortably in a context window is not dense enough to anchor an agent under the context pressure of a real session.
The solution is not to prompt harder. It is to specify completely. A complete PERSONA.md gives an agent enough structural density to hold — each layer reinforcing the others, keeping the agent itself when the conversation tries to pull it somewhere else.
Portability is what ownership means.
A persona spec that lives inside one platform is not yours. It is a tenancy arrangement that ends when the platform changes its pricing, its format, or its terms. Teams that switch models, change vendors, or upgrade infrastructure rewrite their behavioral work from scratch — because the spec was never really theirs to begin with.
PERSONA.md compiles to any target: any model, any agent framework, any custom format. The spec itself belongs to whoever writes it. No proprietary serialization. No runtime lock-in. When the platform changes, the persona does not.
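The compile step can be pictured as a pure function from one source spec to many target formats. A minimal sketch, with every field name and target format invented for illustration:

```python
import json

# Hypothetical persona source: the spec that belongs to whoever writes it.
persona = {
    "name": "Atlas",
    "values": ["candor", "precision"],
    "refusals": ["speculative financial advice"],
}

def compile_persona(spec: dict, target: str) -> str:
    """Render the same source spec into a target-specific format."""
    if target == "system-prompt":
        # A prose rendering for frameworks that take a system prompt.
        return (
            f"You are {spec['name']}. "
            f"You value {', '.join(spec['values'])}. "
            f"You refuse: {'; '.join(spec['refusals'])}."
        )
    if target == "json-config":
        # A structured rendering for frameworks that take config files.
        return json.dumps(spec, sort_keys=True)
    raise ValueError(f"unknown target: {target}")

print(compile_persona(persona, "system-prompt"))
print(compile_persona(persona, "json-config"))
```

Because the source of truth is the spec rather than any one rendering, switching targets means re-running the compile, not rewriting the behavioral work.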
Knowing what your agent is is not optional anymore.
Regulators are asking. The EU AI Act requires technical documentation of what AI systems are, what values they hold, and how they behave under defined conditions. Annex IV is not abstract guidance — it is a specific documentation requirement with an August 2026 enforcement deadline for high-risk systems. Finance, healthcare, legal: if your agents operate in these domains, the audit is coming.
PERSONA.md satisfies that requirement directly. A signed, versioned artifact that documents who an agent is, in a format compliance teams can read and auditors can verify. Not retrofit documentation after the fact. The actual spec the agent runs on. The time to define what your agents are is before they are in production. Not before the audit.
These are not aspirations.
They are constraints.
Every product decision is evaluated against these five positions. Every feature we build, every partnership we form, every line of the spec. The record is here if we ever deviate.