What the engineering team is shipping and thinking about.
Explanatory writing from the team building Airtool — product releases, architecture notes, opinions.
Per-row CRC staleness detection — silent data drift ruled out across cursor types
Cursor subscriptions now compute a per-row checksum at selection time and validate it on every fetch. Form sub-cursors that fall out of sync trigger HTTP 409 Conflict with machine-readable error details, prompting the client to refresh rather than silently binding the wrong record. The integrity guarantee that previously covered TABLE cursors now extends to VTABLE and REPORT cursors — the read path is uniformly safe.
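The mechanism can be sketched in a few lines of Java — the class name, field layout and separator byte below are illustrative assumptions, not the platform's actual implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

/** Sketch of per-row staleness detection via CRC32 (names and layout are assumptions). */
public class RowChecksum {
    /** Checksum over a row's field values, taken at selection time. */
    static long checksum(String... fields) {
        CRC32 crc = new CRC32();
        for (String f : fields) {
            crc.update(f.getBytes(StandardCharsets.UTF_8));
            crc.update(0); // field separator, so ("ab","c") and ("a","bc") differ
        }
        return crc.getValue();
    }

    /** True when the row fetched now still matches the checksum taken at selection time. */
    static boolean isFresh(long selectedCrc, String... currentFields) {
        return selectedCrc == checksum(currentFields);
    }
}
```

In this sketch, a 409 Conflict would be raised wherever `isFresh` returns false on fetch, prompting the client refresh described above.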
See the feature →
Jetty acceptor and selector pools sized to the CPU — HTTP/2 latency tails eliminated
Acceptor and selector thread defaults moved from hardcoded 1/1 to CPU-aware auto-sizing via Jetty's standard heuristics. On a typical 8-vCPU production node this yields 1 acceptor plus 4 selectors, eliminating the HTTP/2 selector-saturation pattern that surfaced on high-concurrency installations. System property overrides remain available for installations that need explicit tuning.
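The sizing idea reduces to a small function of the core count. The ratios below are a toy heuristic chosen only to reproduce the 8-vCPU example above; Jetty's real formula differs in detail:

```java
/** Illustrative CPU-aware pool sizing — an assumption, not Jetty's exact heuristic. */
public class PoolSizing {
    static int acceptors(int cores) { return Math.max(1, cores / 8); } // 8 vCPU -> 1
    static int selectors(int cores) { return Math.max(1, cores / 2); } // 8 vCPU -> 4
}
```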
See the feature →
OpenAI 3072-dimension embeddings supported, with cross-engine HNSW benchmarks
The vector tier's maximum dimension is raised to support OpenAI text-embedding-3-large at 3072 dimensions. Cross-database benchmarks now ship comparing Informix and PostgreSQL HNSW performance at 1,000, 5,000 and 10,000 rows. Both engines pass the same correctness suite; the numbers inform an operational engine choice rather than a functional one.
See the feature →
HNSW vector indexing on Informix — semantic search at OLTP scale
The data tier gains a Hierarchical Navigable Small World (HNSW) access method on Informix, bringing approximate-nearest-neighbour vector search to one of the seven supported OLTP engines. Storage is dual-backend (file or database BLOB), and function names are pgvector-compatible. The platform's RAG and embedding-driven workloads now run on the same operational engine the rest of the application uses — one engine, both transactional and vector workloads.
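An ANN index such as HNSW is validated against exact brute-force search. A minimal cosine-similarity ground-truth baseline — a generic sketch, nothing Informix-specific — looks like this:

```java
/** Exact k=1 nearest-neighbour by cosine similarity: the ground truth an ANN index is measured against. */
public class ExactKnn {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /** Index of the stored vector most similar to the query. */
    static int nearest(double[][] rows, double[] query) {
        int best = 0;
        for (int i = 1; i < rows.length; i++)
            if (cosine(rows[i], query) > cosine(rows[best], query)) best = i;
        return best;
    }
}
```

Recall of an HNSW index is then the fraction of queries for which the index returns the same row this exhaustive scan does.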
See the feature →
Google Vertex AI embeddings join the multi-provider AI surface
Vertex AI embedding models are now addressable from the platform's AI configuration layer alongside OpenAI and Cohere. Global model configuration supports endpoint override for on-premises Vertex deployments. Organisations standardising on Google Cloud infrastructure can use the same client code as those running OpenAI or Cohere — provider choice is configuration, not a code change.
See the feature →
Python runs alongside JavaScript on the server-side scripting surface
Server-side scripts can now be authored in Python via GraalVM's polyglot Python engine, dispatched either by a script-type attribute or by a body annotation. The Python runtime shares the platform's standard library — Ax.db, Ax.http, the security perimeter — with JavaScript. The same business logic is now expressible in two languages; the team's hiring pool widens accordingly.
See the feature →
AI agents gain structured outputs and parallel tool execution
The AI agent layer now enforces response schemas via OpenAI Structured Outputs in strict mode, guaranteeing that LLM responses parse reliably into typed values and eliminating the malformed-JSON failure mode. The same release enables multi-tool parallel execution — agents that need several tool results at once invoke the tools concurrently on virtual threads, reducing round-trip latency in multi-step agentic workflows.
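The parallel-execution side is ordinary fan-out over an executor. This sketch uses a fixed thread pool so it runs on any modern JDK — the platform itself dispatches on virtual threads — and the method names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Sketch: run independent tool calls concurrently and collect results in call order. */
public class ParallelTools {
    static List<String> runAll(List<Callable<String>> tools) {
        ExecutorService exec = Executors.newFixedThreadPool(Math.max(2, tools.size()));
        try {
            List<String> results = new ArrayList<>();
            // invokeAll blocks until every tool finishes; futures come back in submission order
            for (Future<String> f : exec.invokeAll(tools)) results.add(f.get());
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            exec.shutdownNow();
        }
    }
}
```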
See the feature →
S/MIME decryption supports full MIME message format
The cryptography layer's S/MIME decryption now parses full MIME messages and extracts inner body content from decrypted payloads. Interoperability with external S/MIME senders improves; multi-part signed-and-encrypted email workflows round-trip cleanly. The credential-handling and certificate-chain surfaces remain unchanged.
See the feature →
Mermaid microservice — UML, flowcharts and sequence diagrams as a platform primitive
A new microservice in the eighteen-service mesh converts Mermaid DSL to production-grade SVG over HTTP and gRPC — flowcharts, sequence diagrams, state machines and ERDs without a third-party plugin. Server-side rendering reduces client-side overhead and enables platform-managed diagram caching. UML collaboration diagrams now ship natively on the platform.
See the feature →
Streaming XSLT — heap footprint cut for large document workflows
The XSLT processor now operates in streaming mode, eliminating the need to buffer entire document trees in memory. XML-to-JSON conversion jobs on documents in the tens or hundreds of megabytes complete inside a small heap rather than scaling with document size. Operational teams can right-size container memory limits on high-volume document platforms without inflating headroom for the worst case.
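XSLT streaming itself is engine-specific, but the underlying principle — a forward-only pass that never materialises the tree — can be shown with plain StAX, where heap usage tracks element depth rather than document size:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

/** Streaming pass over XML: counts elements without building a tree. */
public class StreamingCount {
    static int countElements(String xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newFactory()
                    .createXMLStreamReader(new StringReader(xml));
            int n = 0;
            // Only one event is held in memory at a time, however large the document
            while (r.hasNext()) if (r.next() == XMLStreamConstants.START_ELEMENT) n++;
            return n;
        } catch (XMLStreamException e) {
            return -1;
        }
    }
}
```

The same forward-only discipline is what lets a streaming transform process a 100 MB document in a small, fixed heap.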
See the feature →
OCR microservice ships with dual engines and automatic fallback
The OCR microservice now supports both Tesseract 5.8 LSTM and an alternate engine, with automatic fallback when the primary path returns an unsatisfactory result. Integration tests confirm output parity. Enterprises with legacy OCR pipelines can re-point production traffic without service interruption, and accuracy is the property of the service, not of any single engine.
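The fallback mechanism is a generic primary-then-secondary dispatch keyed on a quality score. The `Result` shape, threshold and engine stand-ins below are illustrative assumptions:

```java
import java.util.function.Function;

/** Generic primary/fallback dispatch: use the alternate engine when the primary's confidence is low. */
public class OcrFallback {
    record Result(String text, double confidence) {}

    static Result recognise(Function<byte[], Result> primary,
                            Function<byte[], Result> fallback,
                            byte[] image, double threshold) {
        Result first = primary.apply(image);
        // Accept the primary result only if it clears the quality bar
        return first.confidence() >= threshold ? first : fallback.apply(image);
    }
}
```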
See the feature →
Agent API exposes explicit cancel — governance controls for autonomous workflows
AI agents now support explicit cancel() at the API level, enabling applications to enforce timeout policies and resource budgets on long-running operations. Combined with the MCP server's role-aware tool access, architects gain fine-grained control over autonomous agent behaviour in production — the agent can be paused, cancelled or rate-limited at any boundary.
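A budget-enforcing wrapper around a cancellable operation can be sketched with standard java.util.concurrent primitives — a generic illustration, not the agent API itself:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Sketch: run a task under a time budget and cancel it when the budget is exhausted. */
public class CancellableRun {
    static String runWithBudget(Callable<String> task, long budgetMillis) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> f = exec.submit(task);
        try {
            return f.get(budgetMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the worker thread mid-operation
            return "cancelled";
        } catch (Exception e) {
            return "failed";
        } finally {
            exec.shutdownNow();
        }
    }
}
```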
See the feature →
PDF reports honour display-order column rearrangement without re-querying
The PDF rendering pipeline now respects user-defined column display order — column widths, header names and body rows all reorder visually without materialising a new ResultSet. Physical column order remains the fallback when no reordering is set. Analysts customise the pixel-perfect output without forcing a re-query of the underlying data.
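Display-order reordering is a permutation applied at render time. This sketch applies one to an in-memory grid; the real pipeline would do the equivalent while emitting PDF cells:

```java
/** Applies a display-order permutation to header and body rows without touching the source data. */
public class ColumnReorder {
    static String[][] reorder(String[][] rows, int[] displayOrder) {
        String[][] out = new String[rows.length][displayOrder.length];
        for (int r = 0; r < rows.length; r++)
            for (int c = 0; c < displayOrder.length; c++)
                out[r][c] = rows[r][displayOrder[c]]; // column c displays physical column displayOrder[c]
        return out;
    }
}
```

When no `displayOrder` is configured, the identity permutation falls back to physical column order, matching the behaviour described above.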
See the feature →
JDK 25 adoption and Argon2id password hashing strengthen the security baseline
The runtime is now on Java 25 — the latest LTS — with native Argon2id password hashing in the cryptography layer, meeting OWASP 2023 password-storage requirements. ProGuard configuration is updated so bytecode optimisation remains compatible with JDK 25. The credential-compromise surface across authentication subsystems is reduced; the platform is on a foundation supported through the next decade.
See the feature →
Microservices authentication framework — pluggable enforcement per service
The gRPC microservice mesh now supports pluggable authentication with optional per-service enforcement, enabling staged rollouts and mixed-mode deployments. Organisations enforce service-level authentication policies without redeploying the runtime, addressing compliance requirements in federated architectures where some services must be locked down before others.
See the feature →
Database Workbench gains SSH tunnel parameters for cloud database connectivity
Database connections from Database Workbench now accept SSH tunnel configuration parameters, enabling connectivity through jump hosts in cloud and network-restricted environments. Tunnel metadata is captured against the server node in the browser IDE; the connection lifecycle handles the tunnel transparently. Cloud-database access from the browser IDE no longer requires a parallel terminal session for the operator.
See the feature →
Monaco editor gains TypeScript-aware Ax API autocompletion
The Monaco code editor inside the platform's form surface now ships with type definitions for the Ax standard library, enabling IntelliSense, inline parameter hints and refactoring support across embedded JavaScript snippets. Configurable TypeScript diagnostics work inside script tags without blocking the form-authoring flow. Client-side script authoring and backend data binding meet under the same editor experience.
See the feature →
Real-time AI chat over Server-Sent Events — streaming, cancellable, connection-leased
Streaming chat now delivers tokens progressively over SSE rather than waiting for the full response. Connection-lease management prevents exhaustion under concurrency, and long-running operations can be cancelled mid-stream. The knowledge-aware assistant mode integrates document context into multi-turn conversations, making context-grounded reasoning a first-class chat interaction.
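On the consuming side, an SSE token stream is just `data:` lines separated by blank lines. A minimal accumulator — with an OpenAI-style `[DONE]` sentinel assumed purely for illustration — looks like this:

```java
/** Sketch: accumulate token payloads from an SSE event stream. */
public class SseTokens {
    static String joinTokens(String stream) {
        StringBuilder sb = new StringBuilder();
        for (String line : stream.split("\n")) {
            if (line.startsWith("data: ")) {
                String payload = line.substring(6);
                // Skip the terminal sentinel; everything else is a token fragment
                if (!payload.equals("[DONE]")) sb.append(payload);
            }
        }
        return sb.toString();
    }
}
```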
See the feature →
Unified tool-result contract for AI agent outputs
The AI agent's tool-result handling consolidates table results, chart visualisations and file attachments under a single contract — replacing the previous fragmentation between SQL result-set and chart-specific handlers. Agent responses now render through one canonical pipeline; the UI component library treats every tool output the same way regardless of source.
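One way to model such a contract is a sealed hierarchy with a single renderer, so every tool output flows through one exhaustive dispatch. The type and method names here are assumptions, not the platform's actual classes:

```java
/** Sketch of a unified tool-result contract: one sealed type, one rendering path. */
public class ToolResults {
    sealed interface ToolResult permits Table, Chart, Attachment {}
    record Table(String[][] rows) implements ToolResult {}
    record Chart(String spec) implements ToolResult {}
    record Attachment(String filename) implements ToolResult {}

    /** Every tool output, whatever its source, renders through this one method. */
    static String render(ToolResult r) {
        if (r instanceof Table t) return "table:" + t.rows().length + " rows";
        if (r instanceof Chart c) return "chart:" + c.spec();
        if (r instanceof Attachment a) return "file:" + a.filename();
        throw new IllegalStateException("unreachable: sealed hierarchy is exhaustive");
    }
}
```

Sealing the interface is what replaces the old fragmentation: adding a new result kind forces every renderer to handle it at compile time.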
See the feature →
JMX-based observability for the gRPC microservice fleet
The microservice runtime now exposes interceptor metrics and manifest metadata through JMX, surfacing operational state to Prometheus, Grafana and the customer's existing JVM monitoring stack. Server-status endpoints become Kubernetes-ready health checks. Performance instrumentation is non-intrusive and routes through the standard surfaces enterprise operators already monitor.
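Exposing a metric over JMX takes only the standard MBean convention — an interface named `XxxMBean` plus an implementation registered on the platform MBeanServer. The object name and attribute below are illustrative, not the platform's actual names:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/** Sketch: register a metrics MBean and read it back through the platform MBeanServer. */
public class JmxMetrics {
    // Standard MBean convention: implementation class name + "MBean" suffix on the interface
    public interface RequestStatsMBean { long getRequestCount(); }

    public static class RequestStats implements RequestStatsMBean {
        private final long count;
        public RequestStats(long count) { this.count = count; }
        public long getRequestCount() { return count; }
    }

    static long registerAndRead(long count) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("example.grpc:type=RequestStats");
            server.registerMBean(new RequestStats(count), name);
            long value = (Long) server.getAttribute(name, "RequestCount");
            server.unregisterMBean(name);
            return value;
        } catch (Exception e) {
            return -1;
        }
    }
}
```

Anything registered this way is visible to the Prometheus JMX exporter, Grafana and any existing JVM monitoring stack without bespoke wiring.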
See the feature →
Microsoft Outlook integration via the modern Graph SDK
The microservice mesh adds first-class Outlook support — email send, thread-based message retrieval, attachment handling — through Microsoft Graph with OAuth 2.0 and MSAL bearer-token flows. Outlook joins Google Gmail and Google Calendar as a peer in the unified cloud-connector protocol. The legacy authentication path is retired.
See the feature →
Google Calendar event creation with Hangouts Meet conferencing
Calendar event creation through the platform's cloud-connector mesh now supports conference URIs and Hangouts Meet integration, with attendee management and pagination tokens. The refactored event model separates field mapping from JSON serialisation, decoupling the integration from API-version drift. Calendar-driven scheduling joins email and document workflows as native platform capabilities.
See the feature →
Anthropic Claude 3.7 Sonnet joins the multi-provider AI surface
Anthropic's Claude 3.7 Sonnet — including extended-thinking and reasoning-budget support — is now addressable through the platform's unified LLM interface, alongside OpenAI and Google Vertex. Message-content handling is refactored to be model-agnostic, supporting per-provider features without leaking them into application code. Provider choice is configuration; capability remains consistent.
See the feature →
gRPC consolidation of the document-processing services
PDF, XSLT and HTML rendering services are unified under a single gRPC architecture with shared error-handling semantics. IP location service and self-signed-certificate support join the cluster's secure inter-service communication path. This consolidation is the foundation later 2025 and 2026 microservice additions build on — one mesh, one wire protocol, one operational surface.
See the feature →
Studio UI ships Chinese and Urdu language packs — nine languages plus RTL
Chinese and Urdu translations join English, Spanish, Catalan, French, German, Italian and Portuguese in the platform's UI surface, bringing the total to nine languages. RTL framework support remains in place, ready for Arabic and Hebrew when those packs ship. Multinational deployments across Asia and South Asia now stand up on a single platform without a separate localisation track.
See the feature →
Multi-provider credential management for cloud integrations
A new credentials-manager SPI abstracts cloud credential storage across S3, Azure Blob, IBM Cloud and Google Cloud Storage. Enterprise deployments integrate the corporate secret-management policy of choice without rewriting application code, and credential rotation moves out of the application and into the platform — where audit and rotation cadence belong.
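A credentials SPI of this shape is a small interface with pluggable implementations. The names here are assumptions, with an in-memory provider standing in for a real secret store:

```java
import java.util.Map;
import java.util.Optional;

/** Sketch of a credentials-provider SPI; names are illustrative, not the platform's actual interface. */
public class CredentialsSpi {
    interface CredentialsProvider {
        Optional<String> secret(String key);
    }

    /** Trivial implementation — a Vault- or KMS-backed provider would plug in identically. */
    static class InMemoryProvider implements CredentialsProvider {
        private final Map<String, String> store;
        InMemoryProvider(Map<String, String> store) { this.store = store; }
        public Optional<String> secret(String key) { return Optional.ofNullable(store.get(key)); }
    }
}
```

Application code depends only on the interface; swapping the corporate secret manager means swapping the implementation, which is the rotation-and-audit decoupling described above.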
See the feature →
Analytical Card v2 — horizontal-layout dashboard primitive
A new horizontal-layout analytical card joins the form-editor component palette, with flexible node positioning, focus layering and configurable legends. Composable, data-responsive designs replace static card layouts. Architects building executive dashboards compose narrative-driven insights without per-screen custom styling.
See the feature →