The NeuroSymbolic Engine

A custom SPARQL engine for declarative, procedural workflows backed by a semantic knowledge graph.

Two Systems, One Runtime

RARS is built on the principle that AI reasoning and deterministic execution need to be the same system, not two systems glued together. The Semantic Operating System explains why everything is data. This page explains how that data is executed.

The engine has two integrated components:

The probabilistic layer (LLM reasoning): understands natural language, makes judgment calls, plans multi-step approaches, adapts when things don't go as expected. This is where intelligence lives.

The symbolic layer (SPARQL execution): runs declarative workflows against the knowledge graph, dispatches actions based on type hierarchies, enforces validation, and tracks provenance. This is where determinism lives.

These aren't separate services that communicate through APIs. They're interleaved within the same execution loop. RARS reasons, writes a SPARQL script, the engine executes it, the results feed back into RARS's context, and it reasons again. The knowledge graph is the shared state that both layers operate on.

Why NeuroSymbolic?

There's a debate in the AI world between three camps: neural architecture purists who believe scaling LLMs is the only path forward, NeuroSymbolic enthusiasts who want to augment neural systems with symbolic reasoning, and symbolic AI proponents who argue that neural approaches are fundamentally limited. Each camp has strong claims. Our position is pragmatic, and it doesn't fully align with any of them.

We don't believe symbolic systems make AI reasoning smarter. LLMs are remarkably good at understanding intent, making judgment calls, and handling ambiguity. The neural architecture handles the intelligence. That's not the problem we're solving.

The problem is that a purely neural system cannot, by itself, manipulate systems of record or the physical world. At some point it has to execute something: call an API, update a record, approve a request, orchestrate a multi-step workflow. Today's dominant approach is the ReAct pattern: reason about what to do, execute one step, observe the result, reason again, execute the next step. Every step is a fresh inference, and every step can drift from the original plan.

We're not trying to make the reasoning layer smarter with symbolic AI. We're making the execution plane smarter. The symbolic runtime provides capabilities that neural architectures can't offer on their own:

  • Deterministic plan execution: a SPARQL script runs against the graph with formal semantics. The plan executes as written, not re-inferred at each step. Multi-step workflows with interdependent actions execute predictably.
  • Type-based dispatch: class hierarchies drive action resolution automatically, the same way method dispatch works in OOP. The runtime routes to the right implementation for the right type without the AI needing to figure out which handler to call.
  • Continuous validation: SHACL shapes verify outputs against business rules at every step, without the AI needing to remember and check every constraint.
  • Inference: RDFS entailment derives facts from the ontology structure, reducing what the AI needs to explicitly reason about.
  • Full introspection: the plan, the state, and the execution trace are all readable data that the AI can inspect at any point.

This is what we call Object-Oriented AI: the AI operates through a concept-oriented programming runtime where domain objects have types, behaviors, and inheritance hierarchies. The result is a ProActive architecture that replaces the step-by-step ReAct loop: the AI plans a complete workflow, and the symbolic engine executes it with deterministic guarantees. The AI isn't reacting to each step. It's planning ahead and letting a smart execution layer carry out the plan.

The neural architecture handles the intelligence. The symbolic runtime handles the execution. Together, they're more capable than either could be alone.

The Execution Loop

Every interaction follows the same pattern:

  1. RARS reads the current state (active matrices, semantic memory, working memory) as structured documentation
  2. The LLM reasons about what to do and produces SPARQL
  3. The engine executes the SPARQL against the knowledge graph
  4. Results (execution traces, materialized data, validation findings) feed back into RARS's context
  5. RARS reasons about the results and decides what to do next
  6. This continues until the work is complete

Within a single SPARQL script, the engine can traverse the graph, invoke actions that call external services, delegate to sub-agents for non-deterministic reasoning, and write observations back to the graph. The script is a complete workflow definition that blends symbolic and neural operations.
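The loop above can be sketched in a few lines. This is a toy model, not the RARS API: the graph is a set of triples, the "LLM" is a scripted planner, and the engine is a dict-style executor with canned handlers. All names here are illustrative.

```python
# Toy model of the execution loop: reason -> execute -> observe -> reason again.
# None of these names come from the actual RARS runtime.

def run_loop(graph, plan, execute):
    """Drive the loop until the planner decides the work is complete."""
    trace = []
    for _ in range(10):                      # safety bound on iterations
        script = plan(graph, trace)          # step 2: "LLM" produces a script
        if script is None:                   # step 6: work is complete
            break
        results = execute(graph, script)     # step 3: engine runs the script
        trace.append((script, results))      # step 4: results feed back as context
    return trace                             # step 5 happens inside plan()

# Toy domain: approve a work order, then dispatch it.
def plan(graph, trace):
    if ("WO-1", "status", "dispatched") in graph:
        return None                          # nothing left to do
    if ("WO-1", "status", "approved") in graph:
        return "DISPATCH WO-1"
    return "APPROVE WO-1"

def execute(graph, script):
    action, wo = script.split()
    graph.discard((wo, "status", "approved"))
    status = "approved" if action == "APPROVE" else "dispatched"
    graph.add((wo, "status", status))
    return {"status": status}

graph = {("WO-1", "status", "open")}
trace = run_loop(graph, plan, execute)
print([s for s, _ in trace])   # ['APPROVE WO-1', 'DISPATCH WO-1']
```

The essential property is visible even in the toy: the planner never executes anything itself. It only reads state and emits scripts; the deterministic executor is the only thing that changes the graph.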

Beyond Standard SPARQL

Standard SPARQL is a query language. You ask questions about a graph and get answers. It doesn't do anything to the outside world.

RARS's SPARQL engine extends the language into a procedural, declarative DSL through property functions. These are operations embedded within SPARQL queries that execute as the query evaluates:

  • Action invocations: call external APIs through service integrations, with request construction and response mapping handled transparently
  • Sub-agent delegation: spawn a reasoning agent to handle a complex sub-task, with the result flowing back as a query binding
  • Human-in-the-loop: pause execution and wait for human input before continuing
  • Graph mutations: write observations back to the graph with full provenance tracking

Because these operations are embedded in SPARQL, they compose naturally. A single CONSTRUCT query can read current state, call an API, delegate reasoning to a sub-agent, and write the combined results to the graph. The execution is deterministic (the script runs as written), but individual steps within it can be non-deterministic (an AI sub-agent reasons to produce a result).

# A single SPARQL script that orchestrates a complete workflow
CONSTRUCT {
    ?workOrder  wo:status      ?status ;
                wo:priority    ?priority ;
                wo:approvedBy  ?approver .
}
WHERE {
    # Read current state from the graph
    ?workOrder wo:GetWorkOrder (
        wo:workOrderId "WO-2024-0891"
    ) .

    # Call an AI sub-agent to assess risk
    ?assessment wo:AssessRisk (?workOrder) .
    ?assessment wo:priority ?priority .

    # Pause for human approval
    ?approval wo:RequestApproval (
        ?workOrder
        wo:assessment ?assessment
    ) .
    ?approval wo:approvedBy ?approver .

    # Mutate the external system
    ?dispatch wo:DispatchWorkOrder (
        ?workOrder
        wo:approval ?approval
        wo:priority ?priority
    ) .
    ?workOrder wo:status ?status .
}

Type-Based Dispatch

The engine uses the ontology's class hierarchy for action dispatch. When RARS invokes an action on a subject, the engine examines the subject's type and walks the class hierarchy to find the most specific handler. This is Object-Oriented AI in action: the runtime selects the right implementation for the right type, just like method dispatch in OOP.

This means the same action name can produce different behavior depending on the subject. rars-ai:ChatComplete dispatches to the Claude handler when the subject is a Claude model and would dispatch to a different handler for a different provider. RARS doesn't need to know about these distinctions. It calls the generic action, and the engine dispatches based on the type.
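The resolution rule can be illustrated with a small sketch: walk the subject's class hierarchy upward until a registered handler is found. The class names, handler table, and `dispatch` function below are hypothetical stand-ins, not the real RARS ontology or engine internals.

```python
# Sketch of type-based dispatch: the most specific handler wins,
# falling back to superclasses, like method resolution in OOP.

subclass_of = {
    "ClaudeModel": "ChatModel",
    "GPTModel": "ChatModel",
    "ChatModel": "Resource",
}

handlers = {
    ("ChatComplete", "ClaudeModel"): lambda s: f"claude-handler({s})",
    ("ChatComplete", "ChatModel"):   lambda s: f"generic-chat-handler({s})",
}

def dispatch(action, subject, subject_type):
    """Walk up the class hierarchy to find the most specific handler."""
    t = subject_type
    while t is not None:
        handler = handlers.get((action, t))
        if handler:
            return handler(subject)
        t = subclass_of.get(t)               # move to the superclass
    raise LookupError(f"no handler for {action} on {subject_type}")

print(dispatch("ChatComplete", "claude-3", "ClaudeModel"))  # claude-handler(claude-3)
print(dispatch("ChatComplete", "gpt-4", "GPTModel"))        # generic-chat-handler(gpt-4)
```

A Claude subject hits its specific handler; a GPT subject finds no specific entry and falls back to the generic ChatModel handler. The caller invokes the same action name either way.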

Continuous Validation

Every mutation to the graph triggers SHACL validation. When RARS writes data (through action results, direct insertions, or sub-agent outputs), the engine validates the changes against the shapes from all activated matrices. Violations surface as errors, warnings, or info-level findings.

This is the "compiler diagnostics" model. RARS can inspect validation findings to understand what's wrong, trigger governance workflows, or self-correct. The validation doesn't just catch bugs. It provides a continuous audit of data quality against your defined business rules.
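The validate-on-mutation flow can be modeled in miniature. Real SHACL is far richer than this; the sketch below only illustrates the shape-as-diagnostic pattern, with a hypothetical `priority_shape` constraint standing in for shapes from activated matrices.

```python
# Toy model of continuous validation: every mutation runs the changed
# node through shape constraints and returns findings, like compiler
# diagnostics. Not real SHACL; just the control flow.

from dataclasses import dataclass

@dataclass
class Finding:
    severity: str   # "error" | "warning" | "info"
    message: str

def priority_shape(node):
    """Illustrative constraint: priority is required and enumerated."""
    p = node.get("priority")
    if p is None:
        return [Finding("error", "wo:priority is required")]
    if p not in {"low", "medium", "high"}:
        return [Finding("warning", f"unexpected priority {p!r}")]
    return []

SHAPES = [priority_shape]       # stand-in for shapes from activated matrices

def mutate(graph, node_id, changes):
    """Apply a change, then validate the updated node against all shapes."""
    node = graph.setdefault(node_id, {})
    node.update(changes)
    return [f for shape in SHAPES for f in shape(node)]

graph = {}
print(mutate(graph, "WO-1", {"status": "open"}))      # priority missing -> error finding
print(mutate(graph, "WO-1", {"priority": "high"}))    # constraint satisfied -> []
```

The findings are data, so the same mechanism serves both purposes the text describes: the AI can read them to self-correct, and governance workflows can trigger on their severity.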

Inference

The engine includes an RDFS reasoner that runs against the combined ontology of all activated matrices. This means facts you didn't explicitly state are derived automatically:

  • Every ElectricalWorkOrder is a WorkOrder (via rdfs:subClassOf)
  • Every property with a declared rdfs:domain tells the engine what type its subject is
  • Every statement made with a sub-property also entails the corresponding super-property (via rdfs:subPropertyOf)

This reduces what RARS needs to explicitly reason about and what you need to explicitly model. The ontology does work at the structural level so the AI can focus on higher-level reasoning.

Summary

  • Two layers, one runtime: LLM reasoning and symbolic execution interleave in the same execution loop
  • SPARQL as a procedural DSL: property functions extend the query language into a workflow execution engine
  • Type-based dispatch: the class hierarchy drives action resolution, just like method dispatch in OOP
  • Continuous validation: every mutation is checked against SHACL shapes from all activated matrices
  • Inference: RDFS reasoning derives facts automatically, reducing the burden on both the AI and your models
