
The Identity Model

Deep dive into IAM design decisions and their implications.

Why Identity Is Hard in AI Systems

Traditional access control assumes a simple model: a human authenticates, receives a role, and performs actions. The system checks permissions against the role. Done.

AI operating systems break this model. In Poliglot, a human starts a conversation, but RARS executes the work. RARS activates matrices from different vendors. Each matrix has its own agent identity. Actions invoke other actions across matrix boundaries. A single user utterance can trigger a chain of operations spanning multiple domains, multiple agents, and multiple systems of record. Who is "the caller" at any given point? Whose permissions apply?

The identity model is designed to answer these questions clearly and predictably.

Process Identities, Not Autonomous Actors

Every matrix declares an agent: the process identity under which its operations execute. When RARS runs an action from your matrix, the resulting process runs under your agent's identity. This is analogous to a service account in cloud infrastructure: a named identity that code runs as, not a person making decisions.

The agent doesn't choose what to do. RARS does the reasoning. The agent is the security principal that RARS assumes when executing a specific matrix's operations. Different matrices have different agents, so operations from the HR matrix run under a different identity than operations from the finance matrix, even though the same RARS instance does the reasoning in both cases.
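A minimal sketch of this resolution rule, using hypothetical names (Agent, Matrix, resolve_principal are illustrative, not Poliglot's actual API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Agent:
    """A process identity: the principal a matrix's code runs as."""
    id: str


@dataclass(frozen=True)
class Matrix:
    name: str
    agent: Agent  # every matrix declares exactly one agent


def resolve_principal(matrix: Matrix) -> Agent:
    """RARS does the reasoning; the executing principal is always
    the matrix's declared agent, never RARS or the human user."""
    return matrix.agent


hr = Matrix("hr", Agent("agent:hr"))
finance = Matrix("finance", Agent("agent:finance"))

# Operations from different matrices run under different identities,
# even though the same RARS instance does the reasoning for both.
assert resolve_principal(hr) != resolve_principal(finance)
```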

Requested vs Granted

The roles and IAM policies defined in a matrix spec are requested permissions. They declare what the matrix needs to operate. But the workspace administrator has final authority over what's actually granted.

When a matrix is installed, the admin reviews the requested permissions and decides what to allow. They can modify, restrict, or revoke permissions at any time after installation. A matrix can't grant itself unchecked access by declaring broad policies in its spec.

This is the same model as mobile app permissions. The app declares what it needs. You decide what to allow. The difference is that the workspace admin can change their mind after installation, tightening or loosening permissions as the operational relationship evolves.
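The requested-vs-granted relationship can be sketched as a set intersection. The helper and permission strings below are illustrative, not Poliglot's actual schema:

```python
def effective_permissions(requested: set[str], granted: set[str]) -> set[str]:
    """A matrix only ever operates with the intersection of what its
    spec requested and what the workspace admin currently grants."""
    return requested & granted


requested = {"tasks:read", "tasks:write", "users:read"}
granted = {"tasks:read", "tasks:write"}  # admin declined users:read

assert effective_permissions(requested, granted) == {"tasks:read", "tasks:write"}

# The admin can tighten the grant later; the matrix cannot widen
# its own access by declaring more in its spec.
granted.discard("tasks:write")
assert "tasks:write" not in effective_permissions(requested, granted)
```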

The Dual-Policy Model

Access requires two separate checks to pass:

Identity policies (on roles): define what holders of a role can do. "Principals with the TaskManager role can invoke these specific actions."

Resource policies (on resources): define who can access a specific resource. "Only principals with the TaskManager role can invoke this action."

Both must allow for an operation to succeed. An identity policy granting broad action invocation isn't enough if the action's resource policy doesn't also permit the caller's role. This two-sided model lets you control permissions from the role side (what can this identity do?) and the resource side (who can access this resource?) independently.
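The two-sided check above can be sketched as follows. Function and role names are hypothetical; the point is that an operation succeeds only when both checks pass:

```python
def identity_allows(role_policies: dict[str, set[str]], role: str, action: str) -> bool:
    """Identity policy: what holders of a role can do."""
    return action in role_policies.get(role, set())


def resource_allows(resource_policy: set[str], role: str) -> bool:
    """Resource policy: which roles may access this specific resource."""
    return role in resource_policy


def authorize(role_policies: dict[str, set[str]], resource_policy: set[str],
              role: str, action: str) -> bool:
    """Both sides must allow; either side alone is insufficient."""
    return (identity_allows(role_policies, role, action)
            and resource_allows(resource_policy, role))


role_policies = {"TaskManager": {"tasks:create", "tasks:close"}}
resource_policy = {"TaskManager"}  # roles permitted on this action

assert authorize(role_policies, resource_policy, "TaskManager", "tasks:create")
# A broad identity grant is not enough if the resource policy excludes the role:
assert not authorize(role_policies, set(), "TaskManager", "tasks:create")
```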

The design decision here is deliberate. In most RBAC systems, permissions are one-sided: the role grants access, and that's it. The dual-policy model means a matrix author can restrict who calls their actions (via resource policies) even if the workspace admin has granted broad roles to callers. And the workspace admin can restrict what a matrix's agent can do (via identity policy modifications) even if the matrix spec requests broad access.

Why No Permission Inheritance

When Matrix A's agent invokes an action in Matrix B, the operation transitions to Matrix B's agent identity. Matrix A's permissions don't flow through. Matrix B's policies are evaluated independently against Matrix B's agent.

This is a deliberate choice. Permission inheritance across matrix boundaries would mean that installing a high-privilege matrix could escalate the privileges of every matrix it calls. Instead, each matrix is a security boundary. Cross-matrix operations require explicit permission grants on both sides.

The delegation chain is tracked for audit purposes (see Provenance and Observability), but it never influences authorization decisions. You can always see who called whom. But permissions are evaluated fresh at each boundary.
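A sketch of what "no inheritance" means in evaluation terms, with illustrative names: the call chain is carried as audit metadata, but authorization looks only at the current agent.

```python
def authorize_cross_matrix(call_chain: list[str],
                           agent_perms: dict[str, set[str]],
                           action: str) -> bool:
    """call_chain is audit metadata only; authorization is evaluated
    fresh against the *current* (last) agent in the chain."""
    current_agent = call_chain[-1]
    return action in agent_perms.get(current_agent, set())


agent_perms = {
    "agent:a": {"db:write", "b:report"},  # Matrix A's agent is broadly privileged
    "agent:b": set(),                     # Matrix B's agent has no downstream grants
}

# A invokes an action in B; B's agent then attempts a downstream write.
# A's permissions are irrelevant: the check runs against agent:b alone.
assert not authorize_cross_matrix(["agent:a", "agent:b"], agent_perms, "db:write")
```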

Escalation as Human-in-the-Loop

When any agent (RARS itself, a matrix agent, a sub-agent) lacks permission for an operation but the origin user (who started the conversation) has the required permission, the system can escalate. The operation is presented to the user with full context (the action, the subject, the payload) and the user decides whether to approve.

This is a global mechanism, not specific to RARS. RARS is the system agent, but it is still just a security principal. Every agent in the system goes through the same authorization and escalation path. If a matrix agent executing a workflow hits a permission boundary, the same escalation logic applies: check if the origin user has the permission, and if so, ask them.

This is a native authorization mechanism, not a workaround. It bridges the gap between what an agent can do automatically and what requires human judgment. The design intent is that most operations execute without escalation (the agent has the permissions it needs), but sensitive or unusual operations surface for human review regardless of which agent encounters the boundary.

Every escalation decision is recorded as a permanent audit record. Explicit Deny policies block escalation (deny always wins). Escalation doesn't increase anyone's permissions. It lets a human exercise permissions they already have in a specific situation.
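The decision order described above can be sketched as a small state machine. All names are hypothetical; note that an explicit Deny short-circuits before escalation is even considered:

```python
def attempt(agent_perms: set[str], user_perms: set[str],
            denied: set[str], action: str, user_approves: bool) -> str:
    """Resolve one operation: deny wins, then direct execution,
    then human-in-the-loop escalation against the origin user."""
    if action in denied:
        return "denied"        # explicit Deny blocks even escalation
    if action in agent_perms:
        return "executed"      # the common case: no human involved
    if action in user_perms:
        # Escalation: present full context, record the user's decision.
        return "executed-via-escalation" if user_approves else "declined"
    return "denied"


# Agent has the permission: runs without escalation.
assert attempt({"tasks:read"}, set(), set(), "tasks:read", False) == "executed"
# Agent lacks it, origin user has it and approves: escalation succeeds.
assert attempt(set(), {"tasks:delete"}, set(), "tasks:delete", True) == "executed-via-escalation"
# Explicit Deny blocks the operation even if the user would approve.
assert attempt(set(), {"tasks:delete"}, {"tasks:delete"}, "tasks:delete", True) == "denied"
```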

Trust Policies

By default, only the matrix that defines an agent can operate under that agent's identity. If another matrix needs to assume your agent (for example, a reporting matrix that needs to read your data under your agent's permissions), it must be explicitly trusted through a trust policy.

This prevents a malicious matrix from claiming arbitrary agent identities. Trust is explicit, scoped, and revocable.
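A sketch of the trust check, using hypothetical names: a matrix may assume an agent only if it defines that agent or appears in the owner's trust policy.

```python
def may_assume(caller_matrix: str, agent_owner: str,
               trust_policy: set[str]) -> bool:
    """trust_policy: the set of matrices the agent's owning matrix
    has explicitly trusted to assume its agent identity."""
    return caller_matrix == agent_owner or caller_matrix in trust_policy


# By default, only the defining matrix can use its agent's identity:
assert may_assume("hr", agent_owner="hr", trust_policy=set())
assert not may_assume("reporting", agent_owner="hr", trust_policy=set())
# Trust is explicit, scoped to named matrices, and revocable:
assert may_assume("reporting", agent_owner="hr", trust_policy={"reporting"})
```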

Summary

  • Process identities: agents are security principals that code runs as, not autonomous decision-makers
  • Requested vs granted: matrices declare needs, workspace admins control what's allowed
  • Dual-policy model: both identity policies and resource policies must allow for access to be granted
  • No permission inheritance: each matrix is a security boundary, permissions don't flow across
  • Escalation: native human-in-the-loop for operations that require human judgment
  • Explicit trust: matrices must be explicitly trusted to assume another matrix's agent
