BoxLang AI v3.1 Released - Audio, Async, Parallel Pipelines, and More
BoxLang AI 3.1 is here, and it's a release that makes your agents smarter, faster, and more capable than ever.
We are thrilled to announce ColdBox 8.1.0, a targeted minor release packed with powerful new features, important improvements, and critical bug fixes across ColdBox, WireBox, and CacheBox. While minor in version number, this release delivers some truly exciting capabilities, especially for BoxLang developers building AI-powered applications.
We just shipped the BoxLang Google Cloud Functions Runtime, and it brings the same write-once-run-anywhere serverless experience you already know from our AWS Lambda runtime, now running natively on Google Cloud Functions Gen2.
We believe the best way to learn a programming language is by writing code: real code, with real feedback, and real tests. That's exactly why we built BoxLings.
BoxLang 1.12.0 marks a meaningful turning point. After establishing a rock-solid foundation across runtime, compiler, CFML compatibility, and the module ecosystem, BoxLang has entered its innovation cycle. The language is mature, battle-tested, and production-deployed across the industry.
AI agents are transforming how we build software. Unlike traditional chatbots that just answer questions, agents can reason about what tools they need, decide when to use them, chain multiple actions together, and remember what happened earlier in a conversation.
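The reason-act loop described above can be sketched in a few lines. This is a language-agnostic Python sketch with a stubbed model, not the BoxLang AI API; `run_agent`, `fake_llm`, and the tool registry are all hypothetical names used only for illustration.

```python
# Minimal agent loop sketch (hypothetical, not the BoxLang AI API):
# the model either requests a tool call or returns a final answer,
# and the loop feeds each tool result back until an answer arrives.

def run_agent(llm, tools, user_message, max_turns=5):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = llm(history)  # dict: either a tool request or a final answer
        if "tool" in reply:
            # The agent decided it needs a tool; invoke it and remember the result.
            result = tools[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "name": reply["tool"], "content": str(result)})
        else:
            return reply["content"]
    return "max turns reached"

# Stub model: asks for the weather tool once, then answers from its result.
def fake_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "get_weather", "args": {"city": "Houston"}}
    return {"content": f"It is {history[-1]['content']} in Houston."}

tools = {"get_weather": lambda city: "72F"}
print(run_agent(fake_llm, tools, "What's the weather in Houston?"))
# → It is 72F in Houston.
```

A real implementation swaps the stub for a live provider call; the loop itself stays the same.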
The AI ecosystem has a tool problem. Every framework has its own way of defining tools, every agent has its own way of calling them, and every integration requires custom code on both sides. An agent built in Python can't easily use tools built in Java. An MCP server written for Claude Desktop can't easily be consumed by a BoxLang agent without a custom adapter.
A chatbot with no memory isn't a conversation; it's a series of isolated queries. Every message starts from scratch. The user has to re-explain who they are, what they're working on, and what was just said. It's exhausting, and it signals that the AI isn't really listening.
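What conversation memory adds can be shown with a minimal sketch. This is Python with hypothetical names (`ChatMemory` is not a BoxLang AI class): each new prompt carries the prior exchanges instead of starting from scratch.

```python
# Sketch of rolling conversation memory (hypothetical, not the BoxLang AI API):
# every exchange is stored, so each provider call receives the prior context.

class ChatMemory:
    def __init__(self, max_messages=20):
        self.max_messages = max_messages
        self.messages = []

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        # Keep only the most recent messages so the context window stays bounded.
        self.messages = self.messages[-self.max_messages:]

    def context(self):
        return list(self.messages)

memory = ChatMemory()
memory.add("user", "My name is Ana.")
memory.add("assistant", "Nice to meet you, Ana!")
memory.add("user", "What's my name?")
# The provider would receive all three messages, not just the last one,
# so it can answer "Ana" instead of asking who is speaking.
print(len(memory.context()))  # → 3
```

Real systems layer summarization or vector recall on top, but the core idea is the same: the next call sees what came before.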
Vendor lock-in is the silent killer of AI projects. You pick OpenAI, build everything against the OpenAI API, and then GPT-5 launches at three times the price. Or a competitor launches a model that's faster for your use case. Or you need to self-host for compliance. Or your client is on AWS and wants Bedrock.
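The usual escape hatch is a provider abstraction: application code targets one interface, and vendors plug in behind it. Here is a Python sketch under assumed names (`Provider`, `OpenAIProvider`, and `BedrockProvider` are illustrative, not the BoxLang AI API), with stubbed responses standing in for real API calls.

```python
# Provider-agnostic chat sketch (hypothetical classes, not the BoxLang AI API):
# the app calls one chat() interface, and vendors are swappable behind it.

from abc import ABC, abstractmethod

class Provider(ABC):
    @abstractmethod
    def chat(self, messages):
        ...

class OpenAIProvider(Provider):
    def chat(self, messages):
        # A real implementation would call the OpenAI API here.
        return "openai: " + messages[-1]["content"]

class BedrockProvider(Provider):
    def chat(self, messages):
        # A real implementation would call AWS Bedrock here.
        return "bedrock: " + messages[-1]["content"]

def answer(provider: Provider, question: str) -> str:
    return provider.chat([{"role": "user", "content": question}])

# Switching vendors is one line; the calling code never changes.
print(answer(OpenAIProvider(), "hello"))   # → openai: hello
print(answer(BedrockProvider(), "hello"))  # → bedrock: hello
```

When GPT-5 triples in price or a client mandates Bedrock, the swap happens at the construction site, not across the codebase.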
Agents make live LLM calls. They invoke real tools. They have non-deterministic outputs. Standard unit testing approaches fall apart. You can't mock every provider. You can't replay a conversation from three weeks ago. You can't confidently tell stakeholders that the agent you deployed today behaves the same way it did when you signed off on it.
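One common answer is record/replay: capture live model responses once, then run tests deterministically against the captured fixture. The following Python sketch uses a hypothetical `ReplayLLM` class that is not part of BoxLang AI; it only illustrates the pattern.

```python
# Record/replay testing sketch (hypothetical design, not the BoxLang AI API):
# unseen prompts are recorded from a live model into a JSON fixture;
# known prompts replay from the fixture, so tests are deterministic.

import json
import os
import tempfile

class ReplayLLM:
    def __init__(self, fixture_path, live_llm=None):
        self.fixture_path = fixture_path
        self.live_llm = live_llm
        try:
            with open(fixture_path) as f:
                self.recorded = json.load(f)
        except FileNotFoundError:
            self.recorded = {}

    def __call__(self, prompt):
        if prompt in self.recorded:
            return self.recorded[prompt]       # deterministic replay
        response = self.live_llm(prompt)       # record once from the live model
        self.recorded[prompt] = response
        with open(self.fixture_path, "w") as f:
            json.dump(self.recorded, f)
        return response

# First run records against the "live" model; later runs need no model at all.
path = os.path.join(tempfile.mkdtemp(), "fixture.json")
llm = ReplayLLM(path, live_llm=lambda p: "recorded answer")
assert llm("hi") == "recorded answer"      # records to the fixture
offline = ReplayLLM(path)                  # no live model attached
assert offline("hi") == "recorded answer"  # replays deterministically
```

With the fixture checked into version control, the conversation from three weeks ago is replayable today, and a changed response becomes a visible diff instead of a silent regression.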