
Luis Majano

April 02, 2026


It's been a while since we've shipped something this big. BoxLang AI 3.0 is a ground-up rethink of how AI agents, models, and tools work in the BoxLang ecosystem — and it lands with ten major features at once.

The headline is the AI Skills system: a first-class implementation of Anthropic's Agent Skills open standard that lets you define reusable knowledge blocks (coding styles, domain rules, tone policies, API guidelines) once in a SKILL.md file and inject them into any number of agents and models at runtime. No more copy-pasting the same system-prompt boilerplate everywhere. Skills are versioned, composable, and come in two modes: always-on (full content in every call) and lazy (only a name + description until the LLM asks for more).

But that's just the start. Here's everything landing in 3.0:

  • 🎯 AI Skills System — composable, file-based knowledge blocks injected into any agent or model at runtime
  • 🔌 MCP Server Seeding — agents auto-discover and register tools from any MCP server
  • 🗄️ Global AI Tool Registry — register tools by name once, reference them as strings anywhere
  • 🔧 Tool System Overhaul — new BaseTool / ClosureTool architecture plus two built-in core tools
  • 🛡️ Provider Capability System — type-safe capability detection with clear UnsupportedCapability errors
  • 🌲 Parent-Child Agent Hierarchy — multi-agent orchestration trees with cycle detection and depth tracking
  • 🧵 Middleware Support — six built-in middleware classes for logging, retries, guardrails, and more
  • 🏢 Stateless Agents + Per-Call Identity Routing — safe multi-tenant memory across concurrent requests
  • 🤗 HuggingFace Embeddings — new provider for the HuggingFace Inference API
  • 🔀 Custom Service URLs — proxy and self-hosted endpoint support across all providers

Read the full changelog here: https://ai.ortusbooks.com/readme/release-history/3.0.0

If you've been building AI-powered apps with BoxLang, this release changes everything. If you haven't started yet, this is the release that makes it worth it.

Let's dig in. 🎉


🎯 The Headline: AI Skills System

The single biggest addition in 3.0 is the AI Skills system — a first-class implementation of Anthropic's Agent Skills open standard.

Think of a skill as a portable, reusable unit of expertise: a SQL coding style guide, a tone-of-voice policy, domain-specific rules, API cheat sheets — anything your AI should know before it starts answering. Define it once in a SKILL.md file. Inject it into any number of agents and models at runtime. No more copy-pasting the same system-prompt boilerplate across every agent you build.

// Load a skill from a file
apiSkill = aiSkill( ".ai/skills/api-guidelines/SKILL.md" )

// Load every skill in a directory tree
allSkills = aiSkill( ".ai/skills/", recurse: true )

// Inline skill for short, self-contained guidance
sqlStyle = aiSkill(
    name        : "sql-style",
    description : "SQL coding standards",
    content     : "Always use snake_case. Prefer CTEs. Never use SELECT *."
)

Skills come in two injection modes:

  • Always-on — full content in every LLM call. Zero latency. Best for short, universally relevant rules like tone and formatting.
  • Lazy — only a name + one-line description goes into the system message. The LLM calls a built-in loadSkill( name ) tool to fetch the full content on demand. Perfect for large skill libraries where most skills are irrelevant to most queries — keeps your token usage low.
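How a skill gets pinned to one mode or the other is worth sketching. Note this is a hypothetical sketch only: the `mode` argument and the `skills` option on aiAgent() are illustrative assumptions, not confirmed API.

```
// Hypothetical sketch: the `mode` and `skills` argument names are assumptions
toneSkill = aiSkill( ".ai/skills/company-tone/SKILL.md", mode: "always" )
sqlSkill  = aiSkill( ".ai/skills/sql-style/SKILL.md", mode: "lazy" )

agent = aiAgent(
    name   : "db-helper",
    skills : [ toneSkill, sqlSkill ]
)
```

Check the docs linked above for the actual argument names before relying on this shape.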

You can even promote a lazy skill to always-on mid-session:

// After the user mentions SQL work, pre-load it for the rest of the conversation
agent.activateSkill( "sql-style" )

And if you want certain skills available to every agent in your application without explicitly passing them:

// In Application.bx β€” every agent inherits these automatically
aiGlobalSkills().add( aiSkill( ".ai/skills/company-tone/SKILL.md" ) )
aiGlobalSkills().add( aiSkill( ".ai/skills/security-policy/SKILL.md" ) )

Skills live in plain Markdown files — which means your team can review them in pull requests, diff them, and keep them in sync with the rest of your codebase. This is the end of prompt drift.
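For reference, a minimal SKILL.md in the Agent Skills convention is just YAML frontmatter (a name and a description) above a Markdown body. The content below is an invented example, not a shipped file:

```
---
name: api-guidelines
description: REST API design rules for our public endpoints
---

# API Guidelines

- Version every endpoint under /v{n}/.
- Return problem+json ( RFC 7807 ) bodies on errors.
- Never expose internal database IDs; use opaque public identifiers.
```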


📚 Brand New Docs

The entire documentation set has been reorganized so you can go from zero to hero, with tons of new sections and more direct docs for your reading pleasure: https://ai.ortusbooks.com/

🔌 MCP Server Seeding

Agents can now be pointed directly at one or more MCP servers. All tools exposed by those servers are discovered automatically via listTools() and registered as MCPTool instances — no manual tool construction required.

agent = aiAgent(
    name       : "data-analyst",
    mcpServers : [
        { url: "http://localhost:3001", token: "secret" },
        "http://internal-tools-server:3002"
    ]
)

Or fluently:

agent = aiAgent( "analyst" )
    .withMCPServer( "http://localhost:3001", { token: "secret" } )
    .withMCPServer( mcpClientInstance )

The agent's system prompt is automatically updated so the LLM knows which tools came from which server. MCP servers are also surfaced in getConfig() output for full observability.


πŸ—„οΈ Global AI Tool Registry

New in 3.0: a module-scoped Global Tool Registry accessible via the aiToolRegistry() BIF. Register tools by name once — in Application.bx or ModuleConfig.bx — and reference them as plain strings anywhere in your codebase.

// Register once
aiToolRegistry().register( "searchProducts", productSearchTool )
aiToolRegistry().register( "getWeather@myapp", weatherTool )

// Reference by name anywhere — no live object references needed
result = aiChat(
    "Find wireless headphones under $50",
    { tools: [ "searchProducts", "getWeather@myapp" ] }
)

Module namespacing (e.g. now@bxai) keeps registrations collision-free across modules. Two new interception points — onAIToolRegistryRegister and onAIToolRegistryUnregister — give you hooks for auditing and lifecycle management.


🔧 Tool System Overhaul

The tool system has been significantly redesigned around a new BaseTool abstract base class. All tool implementations extend it, getting the shared invocation lifecycle, result serialization, and fluent describeArg() annotation syntax for free.

The old Tool.bx is replaced by ClosureTool — a BaseTool subclass backed by any closure or lambda that auto-introspects the callable's parameter metadata to generate an OpenAI-compatible function schema.

searchTool = aiTool(
    "searchKB",
    "Search the knowledge base",
    function( required string query, numeric maxResults = 5 ) {
        return knowledgeBase.search( query, maxResults )
    }
)

Two built-in core tools ship with the module:

  • now@bxai — auto-registered on module load, returns the current date/time in ISO 8601. Every agent gets temporal awareness for free, with no configuration.
  • httpGet — opt-in only (not auto-registered, for security reasons); fetches any URL via HTTP GET.

now@bxai being auto-registered is worth calling out: few AI frameworks ship built-in tools out of the box. It's a genuine differentiator — your agents just know what time it is, with no wiring on your part.
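Because now@bxai lives in the Global Tool Registry, it should be referenceable by plain string like any other registered tool. A sketch, reusing the string-reference style from the registry section above:

```
// now@bxai is auto-registered on module load, so a string reference suffices
result = aiChat(
    "What is today's date in ISO format?",
    { tools: [ "now@bxai" ] }
)
```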


πŸ›‘οΈ Provider Capability System

A new type-safe capability system prevents calling unsupported operations on providers and gives you clear, actionable errors instead of cryptic runtime crashes.

service = aiService( "voyage" )
println( service.getCapabilities() )          // [ "embeddings" ]
println( service.hasCapability( "chat" ) )    // false

aiChat(), aiChatStream(), and aiEmbed() now check provider capabilities before dispatching the call and throw a clean UnsupportedCapability exception when the operation isn't supported. No more debugging mysterious provider errors.
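In practice you can probe capabilities up front or catch the typed error. A sketch, assuming the exception can be caught by the UnsupportedCapability type name mentioned above:

```
service = aiService( "voyage" )

// Probe before use
if ( service.hasCapability( "embeddings" ) ) {
    vector = aiEmbed( "Hello world" )
}

// Or defensively catch the typed error
// (assumes the default provider here is an embeddings-only one like voyage)
try {
    aiChat( "Hello world" )
} catch ( UnsupportedCapability e ) {
    // Fall back, re-route to a chat-capable provider, or surface a clean error
}
```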


🌲 Parent-Child Agent Hierarchy

Multi-agent orchestration is now a first-class concept. AiAgent tracks its position in an agent tree with full introspection, cycle detection, and depth tracking.

coordinator = aiAgent( name: "coordinator" )
researcher  = aiAgent( name: "researcher" )
writer      = aiAgent( name: "writer" )

coordinator
    .addSubAgent( researcher )
    .addSubAgent( writer )

println( coordinator.isRootAgent() )    // true
println( researcher.getAgentDepth() )   // 1
println( writer.getAgentPath() )        // /coordinator/writer
println( researcher.getAncestors() )    // [ coordinator ]

addSubAgent() automatically wires the parent relationship. getConfig() exposes parentAgent, agentDepth, and agentPath for full observability.


🧵 Middleware Support

Both AiModel and AiAgent now support composable middleware for cross-cutting concerns — logging, retries, guardrails, human-in-the-loop approvals, and more. Agent middleware is prepended ahead of model middleware in the execution chain.

3.0 ships six middleware classes out of the box:

  • LoggingMiddleware — audit every LLM call and tool invocation
  • RetryMiddleware — exponential back-off for rate limits and transient errors
  • GuardrailMiddleware — block dangerous tools and validate arguments with regex
  • MaxToolCallsMiddleware — cap tool invocations per run to prevent runaway agents
  • HumanInTheLoopMiddleware — require explicit human approval before sensitive tools execute
  • FlightRecorderMiddleware — record real runs to JSON fixtures, replay offline in CI
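Wiring several of these onto one agent might look like the sketch below. Passing an array to the middleware argument and the constructor arguments shown are assumptions, extrapolated from the single-instance FlightRecorder example in this section:

```
// Hypothetical: array form and constructor argument names are assumptions
agent = aiAgent(
    name       : "support-bot",
    middleware : [
        new LoggingMiddleware(),
        new RetryMiddleware( maxAttempts: 3 ),
        new MaxToolCallsMiddleware( maxCalls: 10 )
    ]
)
```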

The FlightRecorderMiddleware deserves a special mention — it's a testing superpower. Record a live agent run once, commit the fixture, and replay it deterministically in CI with zero live provider calls.

// Record a real run
agent = aiAgent( "weather-bot", middleware: new FlightRecorderMiddleware( mode: "record" ) )
agent.run( "What is the weather in London?" )
// → Writes: .ai/flight-recorder/weather-bot-<timestamp>.json

// Replay in CI — no live calls, fully deterministic
agent = aiAgent(
    "weather-bot",
    middleware: new FlightRecorderMiddleware(
        mode        : "replay",
        fixturePath : "tests/fixtures/weather-bot.json"
    )
)

🏢 Stateless Agents + Per-Call Identity Routing

AiAgent is now fully stateless. userId and conversationId are resolved per-call from the options argument, eliminating shared-state concurrency bugs in multi-user deployments.

Every memory type (IAiMemory, IVectorMemory) now accepts optional userId and conversationId on add(), getAll(), clear(), and related methods — so a single memory instance can safely serve multiple tenants:

sharedMemory = aiMemory( "cache" )

sharedMemory.add( message, userId: "alice", conversationId: "conv-1" )
sharedMemory.add( message, userId: "bob",   conversationId: "conv-2" )
sharedMemory.getAll( userId: "alice", conversationId: "conv-1" )

What Else Is New

  • 🤗 HuggingFace Embeddings — new huggingface provider for the HuggingFace Inference API
  • 🔀 Custom Service URLs — all senders now accept a baseUrl override for proxies, self-hosted endpoints, and OpenAI-compatible APIs
  • 🏗️ BaseService → OpenAIService split — BaseService is now truly provider-agnostic, making custom provider implementations much cleaner
  • 🐛 Streaming event fixes — beforeAIModelInvoke/afterAIModelInvoke events were not firing for streaming; fixed
  • 🐛 MCP requestId null crash — fixed a crash on JSON-RPC notifications that intentionally omit id

No Breaking Changes

3.0 is a major release, but your existing code keeps working. aiChat(), aiEmbed(), and aiAgent() BIF signatures are unchanged. Upgrade, run your tests, and start exploring the new APIs.


Get Started

# Install or upgrade into your OS-level BoxLang installation
install-bx-module bx-ai

# Install or upgrade via CommandBox
install bx-ai

📖 Full Documentation · 🛠️ Changelog · 🐛 Report Issues · 💬 Community Slack
