
Luis Majano

April 20, 2026


BoxLang AI 3.1 is here, and it's a release that makes your agents smarter, faster, and more capable than ever. πŸŽ‰

While 3.0 rewrote the rules on multi-agent orchestration and skills, 3.1 fills in the gaps your production applications have been waiting for β€” full audio support, non-blocking async execution, concurrent multi-model pipelines, a secure filesystem tool suite, and the ElevenLabs voice provider. Plus 21 bug fixes that make the entire stack more reliable under production load.

Here's everything landing in 3.1:

  • 🎀 Audio Models β€” aiSpeak(), aiTranscribe(), and aiTranslate() for text-to-speech, speech-to-text, and audio translation
  • πŸ€– ElevenLabs Provider β€” high-quality multilingual neural voice synthesis
  • ⚑ Async Runnables β€” runAsync() on every IAiRunnable returning a BoxFuture
  • πŸ”€ aiParallel() BIF β€” fan-out a single input to multiple models concurrently, collect named results
  • πŸ—‚οΈ FileSystem Agent Tools β€” 19 opt-in, path-guarded tools for full filesystem operations
  • 🎀 Audio Agent Tools β€” speak@bxai, transcribe@bxai, translate@bxai auto-registered in the Global Tool Registry
  • 🧠 New Memory Events β€” onHybridMemoryAdd and onVectorSearch interception points
  • πŸ”Š Audio Module Configuration β€” centralized defaults for all audio operations
  • πŸ“‘ 6 New Audio Events β€” full before/after interception for speech, transcription, and translation

Let's dig in. πŸŽ‰


🎀 Audio BIFs β€” Your Agents Can Now Speak and Listen

3.1 brings full audio support to BoxLang AI through three new global Built-in Functions that follow the same familiar API you already know:

BIF                                        Description
aiSpeak( text, params, options )           Convert text to natural speech audio
aiTranscribe( audio, params, options )     Transcribe an audio file, URL, or binary to text
aiTranslate( audio, params, options )      Translate any-language audio to English text

// Text-to-speech β€” save to file
audio = aiSpeak( "Welcome to BoxLang AI!" )
	.saveToFile( "/audio/welcome.mp3" )

// Speech-to-text β€” get plain text
text = aiTranscribe( "/recordings/meeting.mp3" )

// Audio translation β€” any language β†’ English
english = aiTranslate( "/recordings/french-meeting.mp3" )

aiSpeak() returns an AiSpeechResponse with everything you need: saveToFile(), getBase64(), toDataURI(), getMimeType(), getAudioFormat(), and getSize(). aiTranscribe() and aiTranslate() both return an AiTranscriptionResponse with getText(), getWordCount(), getFormattedDuration(), getSegments(), and word-level timestamp data via getWords().

Both response types extend AiBaseResponse β€” giving you model, provider, metadata, timestamp, and chainable setMetadataValue() out of the box. πŸ“¦
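For example, a single aiSpeak() call gives you several consumption paths from the same response object (the text and file path below are illustrative):

```
// Generate speech once, consume it several ways
audio = aiSpeak( "Your order has shipped!" )

println( audio.getMimeType() )     // the audio MIME type, e.g. audio/mpeg
println( audio.getAudioFormat() )  // the audio format, e.g. mp3

// Embed directly in a web page without touching disk
dataURI = audio.toDataURI()

// Or persist it and check the payload size
audio.saveToFile( "/audio/shipped.mp3" )
println( audio.getSize() )
```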


πŸ€– ElevenLabs β€” Neural Voice Synthesis

The new elevenlabs provider brings high-quality multilingual neural voice synthesis to your BoxLang applications. πŸŽ™οΈ

Default model: eleven_multilingual_v2 β€” plug in any voice ID from your ElevenLabs voice library via params.

audio = aiSpeak(
    "Bonjour, bienvenue dans BoxLang AI.",
    { voice_id: "21m00Tcm4TlvDq8ikWAM" },
    { provider: "elevenlabs" }
)
audio.saveToFile( "/audio/bonjour.mp3" )

Configure it once in config/boxlang.json and forget about it:

{
    "modules": {
        "bxai": {
            "settings": {
                "providers": {
                    "elevenlabs": {
                        "params": {
                            "model": "eleven_multilingual_v2"
                        }
                    }
                }
            }
        }
    }
}

Set ELEVENLABS_API_KEY in your .env β€” that's all the wiring you need. βœ…


⚑ Async Runnables β€” Non-Blocking AI Execution

Every IAiRunnable object β€” AiModel, AiAgent, AiRunnableSequence, AiTransformRunnable, and AiRunnableParallel β€” now exposes runAsync(). It dispatches to the io-tasks virtual thread pool and returns a BoxFuture you can chain, combine, or block on when you're ready. πŸš€

model = aiModel( "openai" )

// Fire and forget until you need it
future = model.runAsync( "Summarize this document..." )

// Block when you need the result
result = future.get()

// Or go fully reactive with a callback
future.then( result -> {
    println( "Done: #result#" )
} )

Where runAsync() really shines is running independent tasks simultaneously:

tasks = [
    aiModel( "openai" ).runAsync( "Summarize topic A" ),
    aiModel( "openai" ).runAsync( "Summarize topic B" ),
    aiModel( "openai" ).runAsync( "Summarize topic C" )
]

// All three in-flight at once β€” collect when ready
summaries = tasks.map( f => f.get() )

Three LLM calls. One wall-clock wait. No threading code. ⏱️


πŸ”€ aiParallel() β€” Run Multiple Models Concurrently

The new aiParallel() BIF creates an AiRunnableParallel that fans out a single input to multiple named runnables at the same time and collects results into a named struct. Compare models, aggregate perspectives, and build truly parallel AI pipelines. 🧠

parallel = aiParallel({
    openai:  aiModel( "openai",  { params: { model: "gpt-4o-mini" } } ),
    claude:  aiModel( "claude",  { params: { model: "claude-3-haiku-20240307" } } ),
    mistral: aiModel( "mistral", { params: { model: "mistral-small-latest" } } )
})

results = parallel.run( "What is the capital of France?" )

println( results.openai  )   // "Paris"
println( results.claude  )   // "Paris"
println( results.mistral )   // "The capital of France is Paris."

Because AiRunnableParallel implements IAiRunnable, it composes naturally into larger pipelines:

pipeline = aiParallel({ fast: aiModel( "groq" ), smart: aiModel( "openai" ) })
    .transform( results => "Fast: #results.fast#\nSmart: #results.smart#" )

combined = pipeline.run( "Explain quantum entanglement in one sentence" )
println( combined )

Fan out, collect, transform. Zero boilerplate. 🎯
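And because runAsync() is available on every IAiRunnable, the fan-out itself can be non-blocking too. A minimal sketch:

```
// Dispatch the whole parallel pipeline off the request thread
future = aiParallel({ fast: aiModel( "groq" ), smart: aiModel( "openai" ) })
    .runAsync( "Explain quantum entanglement in one sentence" )

// Do other work here, then collect the named struct when ready
results = future.get()
println( results.fast )
println( results.smart )
```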


πŸ—‚οΈ FileSystem Agent Tools β€” Power With a Security-First Design

3.1 ships FileSystemTools, a class with 19 @mcpTool-annotated methods covering the full filesystem lifecycle. Unlike the audio tools, it is intentionally not auto-registered: you opt in explicitly, and you supply the paths you want to allow. πŸ”’

// Register with explicit path restrictions
aiToolRegistry().scanClass(
    new FileSystemTools( allowedPaths: [ "/app/data", "/app/uploads" ] )
)

// Then assign specific tools to your agent
agent = aiAgent(
    name:    "file-manager",
    aiModel: "openai",
    tools:   [ "readFile@bxai", "writeFile@bxai", "listDirectory@bxai" ]
)

Every path argument is canonicalized and validated against your allowedPaths list at invocation time β€” directory-traversal attacks are blocked before any file operation runs. The full 19-tool suite covers read, write, append, edit, move, copy, delete, search, zip, unzip, and more.
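Conceptually, the guard works on the canonical path, so relative segments can never escape the sandbox (the paths here are illustrative):

```
tools = new FileSystemTools( allowedPaths: [ "/app/data" ] )

// "/app/data/reports/q1.csv"    -> canonical "/app/data/reports/q1.csv" -> allowed
// "/app/data/../../etc/passwd"  -> canonical "/etc/passwd"              -> rejected
// The rejection happens at invocation time, before any read, write, or delete runs
```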


🎀 Audio Agent Tools β€” Voice-Enabled Agents in One Line

Three audio tools are auto-registered in the Global Tool Registry at module startup:

Tool Key           Description
speak@bxai         Convert text to speech; auto-generates a temp file if no outputFile is supplied
transcribe@bxai    Transcribe a local audio file or URL to plain text
translate@bxai     Translate any-language audio to English text

Opt-in by listing them in your agent definition:

agent = aiAgent(
    name:    "voice-assistant",
    aiModel: "openai",
    tools:   [ "speak@bxai", "transcribe@bxai", "translate@bxai" ]
)

Your agents can now receive voice input, reason over it, and speak back β€” all with the same tool system you already know. πŸ”Š


🧠 New Memory Events

Two new interception points give you visibility into what your memory subsystem is doing at runtime:

Event                When Fired
onHybridMemoryAdd    When a message is added to a HybridMemory instance
onVectorSearch       When a semantic search runs against a vector memory

BoxRegisterInterceptor( "onVectorSearch", event => {
    println( "Vector search: query='#event.query#' found #event.results.len()# results" )
} )

Pair these with the existing event system to build observability pipelines, debug RAG relevance, or feed search telemetry into your monitoring stack. πŸ“Š
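onHybridMemoryAdd registers the same way. A minimal sketch; the exact event keys are an assumption based on the event's description, so inspect the event struct in your version:

```
BoxRegisterInterceptor( "onHybridMemoryAdd", event => {
    // Log every message persisted into hybrid memory
    // NOTE: event.message is an assumed key, not confirmed by the docs
    println( "Hybrid memory add: #event.message#" )
} )
```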


πŸ”Š Audio Module Configuration

A new audio section in module settings gives you global defaults for all audio operations β€” no more repeating params across every call:

{
    "modules": {
        "bxai": {
            "settings": {
                "audio": {
                    "defaultVoice":              "alloy",
                    "defaultOutputFormat":       "mp3",
                    "defaultSpeechModel":        "tts-1",
                    "defaultTranscriptionModel": "whisper-1"
                }
            }
        }
    }
}

Six new audio interception events β€” beforeAISpeech, afterAISpeech, beforeAITranscription, afterAITranscription, beforeAITranslation, afterAITranslation β€” round out the observability story for every audio operation. πŸ“‘
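These register like any other interception point. A sketch of tracing a TTS call end-to-end; the event data keys shown are assumptions, so inspect the event struct in your environment:

```
// Fires before every aiSpeak() call
BoxRegisterInterceptor( "beforeAISpeech", event => {
    println( "TTS starting..." )
} )

// Fires after the speech response is built
BoxRegisterInterceptor( "afterAISpeech", event => {
    // NOTE: event.response is an assumed key, not confirmed by the docs
    println( "TTS complete: #event.response.getSize()# bytes" )
} )
```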


πŸ› 21 Bug Fixes Worth Calling Out

3.1 ships 21 bug fixes across the stack. The biggest ones:

  • Streaming onAITokenCount never fired β€” all streaming calls were invisible to usage tracking and billing interceptors. Fixed. πŸ’₯
  • AiModel.stream() missing middleware β€” streaming was skipping middleware injection. Fixed and now consistent with run().
  • Closure scoping in tool calls β€” ArgumentsScope resolution failures inside .each() / .map() closures across multiple providers. Fixed by capturing outer variables before closures.
  • onAITokenCount standardization β€” event data shape was inconsistent across providers; standardized across Bedrock, Claude, Cohere, and Gemini.
  • MCPServer.scan() / scanClass() β€” path-resolution edge cases across dot-notation, absolute, relative, and instance inputs. Now reliable.
  • aiAgent() single skill normalization β€” skills and availableSkills now accept a single instance or an array without wrapping in [].
  • WindowMemory.count() off-by-one β€” returned maxMessages + 1 instead of actual count. Fixed.
  • Gemini null chunk on final SSE β€” streaming responses occasionally emitted a null chunk on the final event. Now handled gracefully.
  • AWS Bedrock signature trailing slash β€” caused SignatureDoesNotMatch errors on certain endpoints. Fixed.
  • HybridMemory deduplication β€” combining window and vector search results could produce duplicate messages. Fixed.
  • aiTranscribe URL redirect β€” HTTP 301/302 from CDNs were not followed. Now follows up to 5 redirects.

See the full changelog for the complete list.


No Breaking Changes

3.0 code runs on 3.1 without modification. Upgrade, run your tests, and start exploring. βœ…


Get Started

# Install or upgrade via BoxLang
install-bx-module bx-ai

# Install or upgrade via CommandBox
install bx-ai

Professional Support

Remember that BoxLang and BoxLang AI are professionally supported software. We are here for you when you need it most: https://www.boxlang.io/plans

  • πŸ“– Full Documentation
  • πŸ› οΈ Changelog
  • πŸ› Report Issues
  • πŸ’¬ Community Slack
  • πŸ’Ό BoxLang+ Plans
