Agentic Actions

Building actions that delegate to AI reasoning, human-in-the-loop workflows, and escalation patterns.

Agentic actions delegate work to RARS for AI-powered reasoning. Instead of following a fixed execution path, an agentic handler spawns a sub-agent that reasons iteratively: it discovers available tools, queries the graph, calls other actions, and works toward the result across multiple steps. The agent's output is still validated against the same I/O contract as any deterministic action. This guide covers when to use agentic handlers, how to configure them, how to design effective result schemas, and how to incorporate human judgment through human-in-the-loop patterns.

When to Use Agentic Handlers

Use an agentic handler when the path to the result requires reasoning, not just execution. The key question is: "Do I know the steps in advance?"

  • Steps known in advance: use a script handler. Deterministic, auditable, composable.
  • Steps depend on the situation: use an agentic handler. The agent figures out what to do based on context.

Common scenarios for agentic handlers:

  • Information synthesis: summarizing data from multiple sources where the right sources depend on what's found
  • Anomaly investigation: diagnosing an alert where the investigation path depends on findings
  • Content generation: producing structured content where quality depends on iterative refinement
  • Resource discovery: finding resources that match complex, fuzzy criteria across the graph
  • Risk assessment: evaluating risk where the relevant factors aren't known ahead of time

The rule of thumb: if you could write the logic as a fixed SPARQL script, do that. If you can't because the logic depends on intermediate results in ways you can't enumerate, use an agentic handler.

Configuring an Agentic Handler

An agentic handler tells RARS which model to use, how many reasoning iterations to allow, and optionally what initial context to provide.

Basic Configuration

tasks:TriageTaskHandler
    a rars-ai:AgenticHandler ;
    rars-ai:agentModel claude:Haiku ;
    rars-ai:maxIterations 15 ;
    rars-ai:taskPrompt tasks:TriageTaskPrompt .

tasks:TriageTask
    a rars-act:Action ;
    rars-act:isClassifiedAs rars-act:Mutation ;
    rdfs:label "triage task" ;
    dct:description "Analyzes a task's description, related tasks, and project context to assign priority, labels, and an initial assignee." ;
    rars-act:subjectScheme tasks:Task ;
    rars-act:subjectRequired true ;
    rars-act:resultScheme tasks:TriageResult ;
    rars-act:handler tasks:TriageTaskHandler .

Model Selection

Choose the model based on the complexity of reasoning required:

Haiku (claude:Haiku): fast, cost-effective. Use for straightforward tasks with clear criteria: resource lookup, simple classification, data extraction from structured sources. The platform's rars-os:Find action uses Haiku because search is largely pattern matching, not deep reasoning.

Sonnet (claude:Sonnet): balanced reasoning and cost. Use for tasks that require synthesis across multiple sources, multi-step planning, or nuanced judgment. The platform's rars-os:DoWork action uses Sonnet because general-purpose work requires deeper reasoning.

Opus (claude:Opus): deepest reasoning capability. Reserve for tasks where the quality of reasoning directly determines the value: risk assessment, complex content generation, nuanced analysis of ambiguous situations.

Don't default to the most capable model. When the task doesn't require deep reasoning, a Haiku agent that finishes in 2 seconds is better than an Opus agent that takes 15.

Iteration Limits

rars-ai:maxIterations controls how many reasoning loops the agent can perform. Each iteration is one LLM call that may invoke tools, query the graph, or call actions.

  • 5-10 iterations: simple lookup or classification tasks
  • 10-20 iterations: multi-step synthesis, investigation with branching paths
  • 20-30 iterations: complex workflows that involve multiple tool calls and intermediate reasoning

Set the limit based on what you've observed the task actually needs. Too low and the agent runs out of steps before completing. Too high and a confused agent wastes resources going in circles. Start with 15, observe, adjust.

Task Prompts

The rars-ai:taskPrompt points to an artifact resource containing the agent's system prompt. This prompt shapes the agent's behavior: what it should focus on, what tools are relevant, and what the expected output looks like.

tasks:TriageTaskPrompt
    a rars-build:Artifact ;
    rars-build:content """
    You are triaging a task. Your goal is to analyze the task's description,
    check for related tasks, review the project context, and produce a triage
    result with:
    - Priority (critical, high, medium, low)
    - Labels (up to 3, based on content)
    - Suggested assignee (based on team expertise and workload)

    Use the available actions to query task history and team information.
    """ .

Write prompts that are specific about the expected output structure. The agent needs to know what the result schema looks like so it can produce conforming output.
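A more explicit variant names the result type's properties directly. This is a sketch: it assumes the tasks:TriageResult properties used in the Result Schema Design examples (tasks:priority, tasks:suggestedAssignee); adjust the names to your actual schema. Spelling them out in the prompt helps the agent emit conforming output on the first attempt:

tasks:TriageTaskPrompt
    a rars-build:Artifact ;
    rars-build:content """
    Triage the subject task and produce a tasks:TriageResult with:
    - tasks:priority: exactly one of tasks:Critical, tasks:High,
      tasks:Medium, or tasks:Low
    - up to 3 labels based on the task's content
    - tasks:suggestedAssignee: at most one common:Employee, chosen
      by team expertise and current workload
    """ .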

Initial Schemas

rars-ai:initialSchemas pre-loads class and property definitions into the agent's context, so it understands the domain types without needing to discover them:

tasks:TriageTaskHandler
    a rars-ai:AgenticHandler ;
    rars-ai:agentModel claude:Haiku ;
    rars-ai:maxIterations 15 ;
    rars-ai:taskPrompt tasks:TriageTaskPrompt ;
    rars-ai:initialSchemas (
        tasks:Task
        tasks:TriageResult
        tasks:TaskStatus
        tasks:Project
    ) .

This front-loads the agent with the ontology definitions for these classes (labels, descriptions, properties, constraints). Without initial schemas, the agent spends its first few iterations discovering what types exist. With them, it starts reasoning immediately.

Use initial schemas when the relevant types are known and stable. Skip them for exploratory tasks where the agent needs to discover what's relevant.

Initial Memory

rars-ai:initialMemory pre-loads data into the agent's working memory using a CONSTRUCT query:

tasks:InvestigateAnomalyHandler
    a rars-ai:AgenticHandler ;
    rars-ai:agentModel claude:Sonnet ;
    rars-ai:maxIterations 20 ;
    rars-ai:taskPrompt tasks:InvestigatePrompt ;
    rars-ai:initialMemory [
        rars-os:construct """
        PREFIX tasks: <https://example.org/spec/tasks#>
        PREFIX rars-act:   <https://poliglot.io/rars/spec/actions#>
        PREFIX rars-os:    <https://poliglot.io/rars/spec/os#>
        CONSTRUCT {
            ?task tasks:title ?title ;
                  tasks:status ?status ;
                  tasks:priority ?priority ;
                  tasks:description ?desc .
        } WHERE {
            ?_process rars-os:parent ?invocation .
            ?invocation rars-act:subject ?task .
            ?task tasks:title ?title ;
                  tasks:status ?status ;
                  tasks:priority ?priority .
            OPTIONAL { ?task tasks:description ?desc }
        }
        """
    ] .

This query runs before the agent starts reasoning and populates its context with the subject's key properties. The agent doesn't need to query for basic subject data; it's already in memory.

Result Schema Design

The result schema is what makes agentic actions trustworthy. No matter how the agent reasons, its output must conform to the result type's SHACL shape. This is the same contract enforcement that applies to deterministic actions.

Design for Validation

The tighter your result schema, the more RARS can validate. A result type of xsd:string tells RARS almost nothing. A typed result with specific required properties enables meaningful validation:

# Weak: RARS can only check that a string was returned
tasks:SummarizeTask
    rars-act:resultScheme xsd:string .

# Strong: RARS validates structure, types, and cardinality
tasks:TriageTask
    rars-act:resultScheme tasks:TriageResult .

tasks:TriageResultShape
    a sh:NodeShape ;
    sh:targetClass tasks:TriageResult ;
    sh:property [
        sh:path tasks:priority ;
        sh:in (tasks:Critical tasks:High tasks:Medium tasks:Low) ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
        sh:message "Triage must assign exactly one priority level."
    ] ;
    sh:property [
        sh:path tasks:suggestedAssignee ;
        sh:class common:Employee ;
        sh:maxCount 1 ;
        sh:message "Suggested assignee must be a known employee."
    ] .

When the agent produces a result that doesn't have a valid priority, the constraint catches it. This is what makes agentic actions composable: downstream consumers can trust the result structure.

Handler Swappability

Because the I/O contract is decoupled from the handler, you can start with an agentic handler and replace it with a deterministic one later (or vice versa) without changing anything for callers.

A TriageTask action might start as an agentic handler while you're learning what good triage looks like. Once the patterns stabilize, you could replace it with a service integration that calls a trained classifier. The result schema stays the same. Callers are unaffected.

Design your result schemas with this in mind. Don't make them dependent on the handler type.
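As a sketch, swapping the triage handler for a deterministic one touches only the rars-act:handler triple. The handler class name below is illustrative (this guide doesn't define the script handler vocabulary; use your platform's actual type):

tasks:TriageTask
    # I/O contract unchanged: same subjectScheme, same resultScheme.
    rars-act:handler tasks:TriageClassifierHandler .

# Hypothetical deterministic replacement; the class name is an
# assumption, not a platform-defined term.
tasks:TriageClassifierHandler
    a rars-act:ScriptHandler .

Callers invoking tasks:TriageTask see no difference: the subject, payload, and result validation are exactly as before.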

Human-in-the-Loop

Human-in-the-loop (HIL) actions pause execution and wait for a person to respond. They're used within script handlers to inject human judgment at specific points in a workflow.

Basic HIL Action

wo:RequestApproval
    a rars-act:Action ;
    rars-act:isClassifiedAs rars-act:Mutation ;
    rdfs:label "request approval" ;
    dct:description "Presents a work order and risk assessment to a manager for approval." ;
    rars-act:subjectScheme wo:WorkOrder ;
    rars-act:subjectRequired true ;
    rars-act:payloadScheme wo:ApprovalPayload ;
    rars-act:resultScheme wo:Approval ;
    rars-act:handler wo:RequestApprovalHandler .

wo:RequestApprovalHandler
    a rars-act:HumanInTheLoop .

When invoked from a script, execution suspends. The user sees the question (derived from rars-os:label on the payload) along with any choices. When they respond, execution resumes with the response bound to the result variable.

HIL Payload Configuration

The payload controls what the user sees:

# Free-text input
wo:ApprovalPayload
    a owl:Class ;
    rdfs:subClassOf rars-act:Payload .

wo:ApprovalPayloadShape
    a sh:NodeShape ;
    sh:targetClass wo:ApprovalPayload ;
    sh:property [
        sh:path rars-os:label ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:message "Approval request must have a label (the question to show the user)."
    ] ;
    sh:property [
        sh:path rars-os:choices ;
        sh:datatype xsd:string ;
        sh:message "Choices, when provided, must be strings."
    ] .

Three input modes:

  • Free-text: no rars-os:choices provided. The user types a response.
  • Single-select: rars-os:choices provided, rars-os:allowMultiple false (default). The user picks one.
  • Multi-select: rars-os:choices provided, rars-os:allowMultiple true. The user picks one or more.
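For instance, a single-select approval prompt might carry a payload instance like this (an illustrative sketch; the label and choice values are examples, not platform defaults):

[] a wo:ApprovalPayload ;
    rars-os:label "Approve work order WO-1042 (estimated cost $4,200)?" ;
    rars-os:choices "Approve", "Reject", "Request changes" .

Because rars-os:allowMultiple defaults to false, the user picks exactly one of the three choices, and that choice becomes the action's result.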

When to Use HIL

  • Approval gates: a workflow step that requires sign-off before proceeding
  • Disambiguation: RARS encounters ambiguity it can't resolve (multiple matching customers, unclear intent)
  • Sensitive operations: destructive actions where you want a human checkpoint
  • Quality review: AI-generated content that should be reviewed before publication

HIL actions are almost always called from within script handlers. The script defines when in the workflow human judgment is needed and what to do with the response.

Escalation

Escalation is a platform-level mechanism, not something you configure per action. When RARS attempts to invoke an action and authorization is denied for the calling agent, the platform checks whether the origin user (the human who started the conversation) has permission. If they do, RARS surfaces an escalation request: "Task Agent wants to update customer records. Allow?"

If the user approves, execution continues. If they deny, the action fails with an authorization error.

This means you don't need to build approval flows for permission boundaries. The IAM model handles it. Focus your HIL actions on business-level approvals (manager sign-off, content review) rather than permission grants.

Design Principles

Match the Handler to the Uncertainty

Don't use agentic handlers for tasks that are deterministic just because they're complex. A 10-step workflow with known steps is a script handler, even if it's long. An agentic handler is for when the agent needs to decide what to do next based on what it finds.

Constrain the Agent's Output, Not Its Path

Let the agent reason freely about how to achieve the goal, but be precise about what the result must look like. Tight result schemas with SHACL constraints are how you get both flexibility and reliability.

Invest in Initial Context

The biggest performance improvement for agentic handlers is reducing discovery time. Use rars-ai:initialSchemas to front-load type definitions and rars-ai:initialMemory to pre-load relevant data. An agent that starts with the right context reaches a result faster and with fewer iterations.

Keep Prompts Focused

A task prompt that tries to cover every edge case confuses the agent. Write prompts that focus on the goal and the expected output structure. Let the agent's access to the graph and available actions handle the edge cases naturally.

Start Agentic, Stabilize Deterministic

It's often faster to start with an agentic handler while you're learning what the right approach is. Once the patterns stabilize and you understand the steps, replace it with a script handler or service integration for predictability and performance. The I/O contract makes this swap transparent.

See Also

  • Service Integrations: action fundamentals, I/O contracts, subject dispatch
  • Script Handlers: deterministic multi-step workflows that compose with agentic actions
  • Security: IAM policies that govern what agents can do, and escalation
  • Contractual AI: the architectural principle behind validated agentic output