Architecture — a supervised JVM-class runtime: OLTP on seven engines, OLAP on three. AI-native, MCP-native, observable as plain SQL.
Platform · AI & MCP

AI inside the perimeter, MCP as a first-class surface.

Multi-provider AI through a unified abstraction — swap OpenAI, Anthropic, Ollama, Google Vertex, IBM Watson or Cohere by configuration, not by code. Built-in RAG over the customer's choice of vector store — Qdrant, Milvus, PostgreSQL pgvector or Redis. Natural-language agents that generate XDBL (the platform's XSD-described query grammar), not raw SQL — the compiler emits the engine-native SQL with row and column security injected automatically. The agent cannot escape the user's permission perimeter — a structural property, not a policy. A native MCP server so external assistants can drive the platform without bypassing security. Cost-tracked, audit-logged, role-aware by construction.

Diagram: a four-station compilation flow — AI agent → XDBL → Compiler → Native SQL. The compiler accepts XDBL and emits engine-native SQL with row-and-column security injected from the user context (role, tenant, perimeter); the databases sit at the end of the flow. The agent cannot bypass the compiler — it is the platform's, not the agent's.

AI inside the permission perimeter — by construction

Agents generate XDBL (the platform's XSD-described query grammar), not raw SQL. The compiler emits the engine-native SQL with row and column security injected per the user's runtime context. The AI cannot escape the perimeter because the AI does not produce the executed query; the compiler does.

Multi-provider by construction

OpenAI, Anthropic, Ollama, Google Vertex AI, IBM Watson, Cohere — abstracted behind a provider-independent interface. Swap providers per configuration; production code does not change.
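As a sketch, such a provider-independent abstraction might look like the following. All names here (AiProvider, AiProviders, the "echo" provider) are invented for illustration, not the platform's actual API; the point is that callers depend on the interface plus a configuration key, never on a concrete vendor SDK.

```java
import java.util.Map;

// Illustrative provider-independent AI abstraction: callers never
// reference a vendor SDK, only this interface and a config key.
interface AiProvider {
    String complete(String prompt);   // single-turn completion
    float[] embed(String text);       // embedding vector for RAG
}

final class AiProviders {
    // Providers are registered under configuration keys; swapping
    // "openai" for "ollama" is a configuration change, not a code change.
    // A toy "echo" provider stands in for real vendors in this sketch.
    private static final Map<String, AiProvider> REGISTRY = Map.of(
        "echo", new AiProvider() {
            public String complete(String prompt) { return "echo:" + prompt; }
            public float[] embed(String text) { return new float[] { text.length() }; }
        }
    );

    static AiProvider fromConfig(String providerKey) {
        AiProvider p = REGISTRY.get(providerKey);
        if (p == null) throw new IllegalArgumentException("unknown provider: " + providerKey);
        return p;
    }
}
```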

MCP as a first-class surface

A native Model Context Protocol server. External AI assistants connect under user identity, see only tools the user can use, and operate under the same audit trail.

Cost-tracked and audit-logged

Every AI request is recorded — provider, model, tokens, cost, execution time, status. Per-tenant spend caps, per-user budgets, model allowlists. AI cost is governed, not opaque.

Multi-provider AI

One abstraction; six providers; the customer's choice of model per workload.

OpenAI

GPT-4o, GPT-4, GPT-3.5-turbo. The default for high-quality general-purpose generation when the operational context permits a public API call.

Anthropic

Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku. Strong on instruction-following, code review and document analysis.

Ollama (local)

Open-source models running on customer infrastructure. Zero marginal cost, full data residency, no public-API dependency. Good for classification, summarisation and routing tasks.

Google Vertex AI

Gemini family models, integrated for customers with Google Cloud workloads or Google-specific compliance requirements.

IBM Watson

watsonx API integration for enterprises with established IBM relationships or watsonx-specific data-residency requirements.

Cohere

Generate and Embed APIs for customers preferring Cohere's models — particularly for European data-residency configurations.
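What "swap providers per configuration" could look like is sketched below. The key names are hypothetical, invented for this illustration; the real configuration schema is the platform's own.

```yaml
# Illustrative only: key names invented for this sketch.
ai:
  provider: anthropic          # or: openai | ollama | vertex | watson | cohere
  model: claude-3-5-sonnet
  fallback:
    provider: ollama           # local model when a public API is unavailable
    model: llama3
```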

RAG, agents and operational AI

The patterns enterprise customers actually need — not toy demonstrations.

RAG over multiple vector stores

Vector embeddings stored in the customer's choice of vector store — Qdrant, Milvus, PostgreSQL pgvector or Redis. Maximum Marginal Relevance retrieval out of the box. Document ingestion, chunking, embedding, retrieval and citation all under the same security perimeter the customer's data lives in. The same multi-engine philosophy that applies to relational databases applies to vector stores.
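Maximum Marginal Relevance itself is a generic algorithm, independent of the vector store; a minimal self-contained sketch (plain float arrays standing in for stored embeddings, class names invented) looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Maximal Marginal Relevance (MMR) sketch — the generic
// algorithm, not the platform's implementation.
final class Mmr {
    static double cosine(float[] a, float[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-12);
    }

    // Select k candidates, balancing relevance to the query (weight lambda)
    // against redundancy with already-selected results (weight 1 - lambda).
    static List<Integer> select(float[] query, float[][] docs, int k, double lambda) {
        List<Integer> selected = new ArrayList<>();
        while (selected.size() < Math.min(k, docs.length)) {
            int best = -1;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int d = 0; d < docs.length; d++) {
                if (selected.contains(d)) continue;
                double redundancy = 0;
                for (int s : selected) redundancy = Math.max(redundancy, cosine(docs[d], docs[s]));
                double score = lambda * cosine(query, docs[d]) - (1 - lambda) * redundancy;
                if (score > bestScore) { bestScore = score; best = d; }
            }
            selected.add(best);
        }
        return selected;
    }
}
```

Each round picks the candidate that best trades relevance to the query against similarity to what has already been selected; lambda near 1 favours pure relevance, lambda near 0 favours diversity.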

Natural-language agents — XDBL, not raw SQL

Ask a business question in plain language. The agent generates XDBL — the platform's XSD-described XML query grammar — not raw SQL; the compiler emits the engine-native SQL with row-level security expressions and column-level visibility rules injected from the user's runtime context. The agent works one level above SQL; the platform handles the engine and the security. The agent cannot generate a query the user could not write themselves, and it cannot bypass that constraint because the constraint is enforced during compilation, not during agent execution.
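To make the idea concrete, an agent's output might look something like the fragment below. Every element name here is invented for illustration; the real grammar is defined by the platform's XSD. What matters is what the fragment does not contain: no engine dialect, no table-level SQL, no security predicates — those are the compiler's business.

```xml
<!-- Illustrative only: element names invented; the real XDBL schema is the platform's XSD. -->
<query entity="Order">
  <select>
    <field name="orderNumber"/>
    <field name="total"/>
  </select>
  <where>
    <gt field="total" value="1000"/>
  </where>
  <orderBy field="total" direction="desc"/>
</query>
```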

Operational explanations

AI agents that explain operational records — what changed, who changed it, why the rule fired. The audit trail and the AI explanation share the same data.

Document drafting from operational context

Draft customer communications, regulatory submissions and operational summaries from the same data the rest of the platform reads. Output stays inside the perimeter; review is human-in-the-loop by default.

The architectural reason AI cannot escape the perimeter

Most AI-on-data systems rely on the LLM behaving well. This one does not — it relies on a compilation step the LLM cannot bypass.

The agent generates XDBL, not SQL

The platform's XSD-described XML grammar (XDBL) is what the agent emits. The XSD is small, complete and well-documented; an LLM can target it with high accuracy. SQL — with its seven engine dialects, each with its own date functions, string handling and sequence semantics — is never the agent's output.

The compiler emits the native SQL

The query compiler translates XDBL to the engine's native SQL. Date arithmetic, string functions, isolation levels, sequence handling — all resolved automatically against the engine currently bound to the user's request context.

The compiler injects the security

At the same compilation step, the row-level security expressions for the current user's roles, departments and ownership rules are injected into the WHERE clause. Column-level visibility rules are applied to the SELECT list. The generated SQL is the SQL the user is entitled to run — no more.

The agent cannot bypass the compiler

The agent does not connect to the database. The agent emits XDBL; the platform's runtime calls the compiler; the compiler returns engine-native SQL with the security layer already applied; the runtime executes that SQL. The compilation step is between the agent and the data — and it is the platform's, not the agent's.
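A heavily simplified sketch of that compilation step follows. All names are invented for illustration, and a real compiler would emit parameterised SQL rather than concatenated strings and would resolve per-engine dialect differences; the sketch only shows where the security predicate comes from.

```java
// Illustrative only. The security predicate is derived from the user
// context, never from the agent's input.
final class UserContext {
    final String tenantId;
    final String role;
    UserContext(String tenantId, String role) { this.tenantId = tenantId; this.role = role; }
}

final class XdblCompiler {
    // xdblWhere stands in for the agent-supplied filter already parsed from XDBL.
    static String compile(String table, String xdblWhere, UserContext user) {
        // Row-level security is appended unconditionally; the agent cannot
        // omit it because the agent never sees this string.
        String rls = "tenant_id = '" + user.tenantId + "'";
        String where = (xdblWhere == null || xdblWhere.isBlank())
                ? rls
                : "(" + xdblWhere + ") AND " + rls;
        return "SELECT * FROM " + table + " WHERE " + where;
    }
}
```

Even in the sketch the essential property survives: the tenant predicate is appended by the compiler from the user context, so no agent-supplied filter can remove it.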

Model Context Protocol — native

MCP is an open standard that lets AI assistants drive external systems. We are MCP-native, not MCP-wrapped.

Native MCP server

A platform endpoint that speaks MCP Streamable HTTP. External AI assistants — Claude, GPT, custom — connect under the user's identity and operate inside that user's perimeter.

Personal tokens, explicit grants

MCP access requires a personal API token (the user generates one from their preferences) plus an explicit MCP access-type grant (an admin gate). There is no way to bypass either.

Role-filtered tool visibility

The set of tools the MCP server advertises is filtered by the user's role. Developers see the platform tools; standard users see the application tools; nobody sees what they cannot use.
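Role-filtered advertisement reduces to a filter over the tool catalogue. A minimal sketch, with Tool and the role names invented for illustration:

```java
import java.util.List;
import java.util.Set;

// Illustrative: a tool is advertised over MCP only if the user holds
// at least one of the roles the tool requires.
record Tool(String name, Set<String> requiredRoles) {}

final class ToolCatalog {
    static List<Tool> visibleTo(List<Tool> all, Set<String> userRoles) {
        return all.stream()
                  .filter(t -> t.requiredRoles().stream().anyMatch(userRoles::contains))
                  .toList();
    }
}
```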

Permission inheritance

An AI assistant can do exactly what the user can do — no more and no less. The audit log records the AI as the operator; the security model treats it the same as any other client.

Cost governance

AI cost is the new shadow IT line. The platform brings it under operational control.

Provider and model registry

Central registration of AI providers and models, with per-model pricing (request and response tokens). Adding a new model is a configuration step; the cost layer reflects it automatically.
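With per-model prices per thousand request and response tokens, the cost of a call is a small exact computation. A sketch, with invented names and BigDecimal to avoid floating-point drift:

```java
import java.math.BigDecimal;

// Illustrative per-model pricing: prices are per 1K tokens.
record ModelPrice(BigDecimal perRequestToken1k, BigDecimal perResponseToken1k) {}

final class AiCost {
    // cost = requestTokens/1000 * requestPrice + responseTokens/1000 * responsePrice
    static BigDecimal of(ModelPrice price, long requestTokens, long responseTokens) {
        BigDecimal req = price.perRequestToken1k()
                .multiply(BigDecimal.valueOf(requestTokens))
                .divide(BigDecimal.valueOf(1000));
        BigDecimal resp = price.perResponseToken1k()
                .multiply(BigDecimal.valueOf(responseTokens))
                .divide(BigDecimal.valueOf(1000));
        return req.add(resp);
    }
}
```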

Per-tenant and per-user defaults

Default models configured per company and per user. Standard users get the company default; admin roles can opt into more capable models when the workload warrants it.

Spend caps and quotas

Monthly spend caps per company, monthly request quotas per user, max tokens per request, allow-listed models. Requests that would exceed the cap are rejected, not just reported.
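Rejection rather than reporting implies a pre-flight check before the provider is ever called. Minimally, and with invented names:

```java
import java.math.BigDecimal;

// Illustrative pre-flight cap check: a request whose estimated cost
// would push the tenant past its monthly cap is refused up front.
final class SpendCap {
    static boolean admit(BigDecimal spentThisMonth, BigDecimal estimatedCost, BigDecimal monthlyCap) {
        return spentThisMonth.add(estimatedCost).compareTo(monthlyCap) <= 0;
    }
}
```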

Usage logging and reporting

Every AI request logged with provider, model, tokens consumed, cost, execution time, user and outcome. AI cost reporting is a query against the platform's standard data, not a separate vendor dashboard.

Why AI on the platform is different

Most enterprise AI-on-data systems sit on top of a SQL engine and rely on the LLM to write SQL that respects the user's permissions. The LLM is asked to be careful. The application layer is asked to filter what the LLM sees. Neither is a structural guarantee; both depend on getting every prompt and every filter right, every time. When the LLM hallucinates a column or routes around an ownership rule, there is nothing below it that catches the mistake.

On Airtool the AI does not write SQL. It writes XDBL, the platform's XSD-described XML query grammar. The platform's compiler translates XDBL to native SQL for the engine bound to the user's request context — and at the same step, injects the row-level security expressions and column-level visibility rules that apply to the user's roles, departments and ownership. The agent works at one level above SQL; the platform handles the engine and the security. The agent cannot escape the perimeter because the agent does not produce the executed query — the compiler does, and the compiler is the platform's.

This is not a policy that can be relaxed. It is the architecture. External assistants connecting through MCP inherit the user's permissions exactly because they cannot do otherwise. The AI surface and the data surface share one compilation step; the security model is enforced inside that step; there is nowhere else for the AI to put the query.

The structural advantage exists only because Airtool's applications are metadata, not files. Forms, screens, endpoints, roles, scheduled jobs, stored procedures and audit trails are database records in the Dictionary — not scattered across Vue components, config files and Java classes. The compiler owns every path to the data because there is no parallel path. Agents and developers write to the same Dictionary; the platform's governance applies equally to both. This is the difference between AI bolted onto a codebase and AI native to a runtime.

Talk to an AI architect.

A scoping conversation about providers, RAG, agents, MCP and cost governance. Discovery call within 48 hours.