The official AI library for the BoxLang JVM dynamic language.
Unified, fluent APIs to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered apps.
Production-grade controls, reusable knowledge, and smarter agent orchestration.
Intercept, modify, and control every AI request with composable, production-grade hooks. Ships with six built-in middlewares.
Package and share reusable agent capabilities via aiSkill(). Always-on or lazy-loaded, with a global skill pool for cross-agent sharing.
Register tools once, resolve them anywhere by name or namespace. Supports module-scoped and lazy name resolution via aiToolRegistry().
Attach MCP servers directly to agents or models. Tools are auto-discovered and injected at runtime via withMCPServer().
Build parent-child agent trees with full path tracking, depth queries, and cycle detection guards. Structured multi-agent orchestration.
Agents can be fully stateless with per-call userId and conversationId. Safe for concurrent SaaS workloads without shared state conflicts.
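For instance, one shared agent instance can serve many tenants when identity travels with each call. A minimal sketch, assuming run() accepts per-call userId and conversationId options (the option names mirror the memory API shown below and may differ):
// One stateless agent instance shared across requests
agent = aiAgent(
    name: "SupportBot",
    instructions: "Help customers with their orders"
)
// Identity is passed per call (assumed option names), so no state is shared between tenants
response = agent.run(
    "Where is my order?",
    { userId: "user-789", conversationId: "conv-123" }
)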
Build powerful AI workflows with one API — no vendor lock-in, full RAG & multi-provider support.
One unified API for OpenAI, Claude, Gemini, Grok, Ollama, DeepSeek, Perplexity, Amazon Bedrock, HuggingFace, Docker AI Models, and more. Switch providers with a single line.
// Default Provider
aiChat( msg )
// Specific Provider
aiChat( msg, { provider: "claude" } )
// Chat Async Futures
aiChatAsync( msg, { provider: "grok" } )
    .then( r => println( r ) )
    .onError( e => println( e ) )
    .get()
Enterprise-grade memory isolation with userId and conversationId. 20+ memory types including vector search.
Provider-agnostic request tagging with tenantId and usageMetadata for per-tenant billing and custom tracking.
// Multi-tenant memory
aiMemory(
    type: "vector",
    key: createUUID(),
    userId: "123",
    conversationId: "abc"
)
// Usage tracking
aiChat( msg, {
    tenantId: "org-123",
    usageMetadata: {
        costCenter: "eng",
        projectId: "proj-456",
        userId: "user-789"
    }
} )
Build autonomous agents with memory, tools, sub-agents, and reasoning. Perfect for complex workflows and multi-step tasks.
aiAgent(
    name: "Research Assistant",
    instructions: "Help research AI trends",
    memory: [ window, summary, chroma ],
    subAgents: [ research, coder ]
)
.tools( [ searchTool, dbTool ] )
.run( "Search AI trends" )
Composable workflows with models, messages, transformers. Build reusable templates for any AI task.
aiMessage( "Explain AI in one sentence" )
    .system( "You are a helpful assistant." )
    .toDefaultModel()
    .transform( r => r.content.uCase() )
    .run()
Enable AI to call functions, access APIs, and interact with external systems in real-time with built-in tool support.
weatherTool = aiTool(
    "get_weather",
    "Get current weather for a location",
    location => {
        // Call your weather API
        return getWeatherData( location )
    }
)
Semantic search with 10+ vector databases. Build RAG systems with document loaders for 30+ file formats, with easy batching and auto-chunking.
aiDocuments( "/docs", {
    type: "directory",
    recursive: true,
    extensions: [ "md", "txt", "pdf" ]
} ).toMemory(
    memory = pinecone,
    options = { chunkSize: 1000, overlap: 200 }
);
Real-time streaming through pipelines for responsive applications. Perfect for live chat interfaces.
aiMessage( "Write about ${topic}" )
    .system( "You are ${style}" )
    .toDefaultModel()
    .stream(
        ( chunk ) => print( chunk.choices?.first()?.delta?.content ?: "" ),
        // input bindings
        { style: "poetic", topic: "nature" }
    )
Run models locally for privacy, offline use, and zero API costs. Full Ollama integration included.
// Start the Ollama server
docker compose -f docker-compose-ollama.yml up -d
// Configure BoxLang AI
settings: {
    provider: "ollama",
    model: "llama3"
}
// Chat away
aiChat( "Hello from local AI!" )
Load PDFs, Word docs, CSVs, JSON, XML, Markdown, Web Scrapers, and 30+ formats. Perfect for RAG and document processing.
// Load a text file
docs = aiDocuments( "/path/to/document.txt" ).load()
// Load a directory of files
docs = aiDocuments( "/path/to/folder" ).load()
// Load from URL
docs = aiDocuments( "https://example.com/page.html" ).load()
// Load with auto-chunking
docs = aiDocuments( "/path/to/file.md" )
    .chunkSize( 500 )
    .overlap( 50 )
    .load()
BoxLang AI exposes MCP Server capabilities so you can create AI-powered microservices over either HTTP or STDIO transports.
One easy endpoint by convention: http://app/~bxai/mcp.bxm
MCPServer( "myApp" )
    .setDescription( "My Application MCP Server" )
    .registerTool(
        aiTool( "search", "Search for documents", ( query ) => {
            return searchService.search( query )
        } )
    )
    .registerTool(
        aiTool( "calculate", "Perform calculations", ( expression ) => {
            return evaluate( expression )
        } )
    )
Call MCP Servers directly from BoxLang AI with built-in invokers. Simplify distributed AI workflows, create internal tools, and microservices.
// Create an MCP client
mcpClient = MCP( "http://localhost:3000" )
// Send a request to a tool
result = mcpClient.send( "searchDocs", {
    query: "BoxLang syntax",
    limit: 10
} )
// Check the response
if ( result.isSuccess() ) {
    println( result.getData() )
} else {
    println( "Error: " & result.getError() )
}
Extract type-safe, validated data from AI responses using classes, structs, or JSON schemas.
// With class
model = aiModel()
    .structuredOutput( new Product() )
// With struct
model = aiModel()
    .structuredOutput( {
        name: "",
        price: 0.0,
        inStock: false
    } )
// With array
model = aiModel()
    .structuredOutput( [ new Contact() ] )
Production-grade request/response hooks. Stack logging, retry logic, guardrails, and human-in-the-loop controls — all composable and ordered.
agent = aiAgent(
    name: "SupportBot",
    middleware: [
        aiMiddleware( "logging" ),
        aiMiddleware( "retry", {
            maxAttempts: 3
        } ),
        aiMiddleware( "guardrail", {
            prompt: "No profanity"
        } )
    ]
)
Package reusable agent capabilities as skills. Share instructions, tools, and memory across agents with always-on or lazy loading modes.
// Define a skill
// Define a skill
coding = aiSkill( "coding", {
    instructions: "You write clean code",
    tools: [ codeRunnerTool ]
} )
// Attach to agent
agent = aiAgent(
    name: "DevBot",
    skills: [ coding, research ]
)
Register tools once, resolve them by name anywhere in your application. Namespace tools per-module and use lazy resolution for dynamic wiring.
// Register globally
registry = aiToolRegistry()
registry.register( searchTool )
registry.register( calcTool )
// Resolve by name anywhere
agent = aiAgent(
    tools: [
        "search", // lazy resolve
        "calculator" // lazy resolve
    ]
)
Switch between providers or use multiple providers within the same AI agent with zero code changes. You can also create your own custom providers easily by implementing the provider interface. Never be locked in again. Be fluid!
Full Stack AI: Combine vector search, multiple AI providers, and specialized agents in one workflow
Docs → Chunk → Vector DB
User queries → Search → Router → Writer / Coder / Analyst → Result
Build intelligent agents that think, reason, and act.
More than simple chatbots—agents maintain memory, use tools, delegate to specialists, and orchestrate complex workflows autonomously.
One Agent, Unlimited Capabilities: Connect memories, tools, sub-agents, and AI providers in a single orchestration layer
Attach one or more memories to each agent. Mix conversation history with vector search for hybrid intelligence.
Agents automatically use tools to access APIs, databases, calculations, and external systems.
Delegate to specialized sub-agents for complex tasks. Parent agent automatically orchestrates delegation.
Agents work seamlessly in pipelines. Chain multiple agents with transformers for advanced workflows.
// Create tools
weatherTool = aiTool(
    "get_weather",
    "Get current weather",
    location => getWeatherData( location )
)
// Create agent with memory and tools
agent = aiAgent(
    name: "Assistant",
    instructions: "Help users with queries",
    memory: aiMemory( "simple" ),
    tools: [ weatherTool ]
)
// Run - agent uses tools automatically
response = agent.run(
    "What's the weather in Boston?"
)
println( response )
// Create specialized sub-agents
mathAgent = aiAgent(
    name: "MathAgent",
    instructions: "Expert in mathematics"
)
codeAgent = aiAgent(
    name: "CodeAgent",
    instructions: "Expert in programming"
)
// Parent agent delegates automatically
mainAgent = aiAgent(
    name: "Orchestrator",
    instructions: "Delegate to specialists",
    subAgents: [ mathAgent, codeAgent ]
)
// Parent decides which sub-agent to use
response = mainAgent.run(
    "Write code to calculate factorial"
)
// Create vector memory
vectorMemory = aiMemory( "chroma", {
    collection: "docs",
    embeddingProvider: "openai"
} )
// Load documents
aiDocuments( "/docs", {
    type: "directory"
} ).toMemory( vectorMemory )
// Agent with multiple memories
agent = aiAgent(
    name: "Knowledge Assistant",
    instructions: "Answer using docs",
    memory: [
        aiMemory( "simple" ), // Chat
        vectorMemory // RAG
    ]
)
// Searches docs + remembers conversation
response = agent.run(
    "Explain authentication"
)
// Create agents
researcher = aiAgent(
    name: "Researcher"
)
summarizer = aiAgent(
    name: "Summarizer"
)
editor = aiAgent(
    name: "Editor"
)
// Chain agents in pipeline
pipeline = aiMessage()
    .user( "Research: ${topic}" )
    .to( researcher )
    .transform( r => "Summarize: " & r )
    .to( summarizer )
    .transform( r => "Polish: " & r )
    .to( editor )
result = pipeline.run( {
    topic: "Quantum Computing"
} )
Agents automatically handle message history, context, and state across interactions.
Agents decide when and how to use tools, query memory, or delegate to sub-agents.
Store data in multiple memory systems simultaneously for hybrid intelligence.
Stream agent responses in real-time for responsive chat interfaces.
Intercept agent lifecycle events for logging, monitoring, and custom workflows.
Track usage per tenant with built-in tenantId and usageMetadata support.
Build parent-child agent trees with getAgentPath(), getAgentDepth(), and cycle detection guards.
Attach MCP servers directly to agents with withMCPServer(). Tools are auto-discovered and injected at runtime.
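A short sketch combining these two features; getAgentPath(), getAgentDepth(), and withMCPServer() are named above, but the exact call sites, fluent chaining, and return shapes shown here are assumptions:
// Build a two-level agent tree with an MCP server attached to the parent
coder = aiAgent( name: "Coder", instructions: "Write code" )
lead = aiAgent(
    name: "Lead",
    instructions: "Delegate coding tasks",
    subAgents: [ coder ]
).withMCPServer( "http://localhost:3000" ) // illustrative URL; tools auto-discovered
// Inspect the hierarchy (assumed return shapes)
println( coder.getAgentPath() )  // e.g. "Lead/Coder"
println( coder.getAgentDepth() ) // e.g. 1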
Reusable, composable capabilities you attach to agents. Package instructions, tools, memory, and context once — share them everywhere.
Skills loaded at agent startup — always available for every request, no activation required.
coding = aiSkill( "coding", {
    instructions: "You are a clean code expert",
    tools: [ codeRunnerTool, lintTool ],
    alwaysOn: true
} )
agent = aiAgent(
    name: "DevBot",
    skills: [ coding ]
)
// Skill is always active
agent.run( "Review this function" )
Skills loaded on-demand — the agent activates them only when the task requires them, saving tokens and context.
research = aiSkill( "research", {
    instructions: "Research and cite sources",
    tools: [ webSearchTool, pdfTool ],
    alwaysOn: false // lazy load
} )
writing = aiSkill( "writing", {
    instructions: "Write in markdown, be concise",
    alwaysOn: false
} )
agent = aiAgent(
    name: "Orchestrator",
    skills: [ research, writing ]
)
Register skills in a global pool and share them across all agents. Define once, use everywhere.
// Register globally (e.g. in Application.bx)
aiSkillPool().register( coding )
aiSkillPool().register( research )
// Any agent can resolve by name
agent = aiAgent(
    name: "FullStack",
    skills: [ "coding", "research" ]
)
Define skills in a portable SKILL.md file — a standard, shareable format for packaging AI capabilities.
---
name: coding
description: Clean code expert
alwaysOn: true
---
# Instructions
You are an expert software engineer. Write clean, tested, documented code.

## Constraints
- Use descriptive variable names
- Add comments for complex logic
- Always suggest edge case handling
Production-grade, composable controls that intercept every AI request and response. Stack them. Order them. Own your AI in production.
Automatically log every request and response for audit, debugging, and compliance.
Automatically retry failed requests with configurable backoff. Handles transient provider errors gracefully.
Enforce content policies and safety rules. Block inputs and sanitize outputs before they reach users.
Limit the number of tool invocations per request. Prevent runaway tool loops in autonomous agents.
Pause agent execution and request human approval before critical actions are taken.
Capture a complete trace of every interaction — inputs, outputs, tool calls, and timing for replay and debugging.
agent = aiAgent(
    name: "ProductionBot",
    instructions: "You are a customer support agent",
    memory: aiMemory( "cache" ),
    tools: [ searchTool, ticketTool ],
    middleware: [
        aiMiddleware( "logging", {
            logLevel: "info",
            logRequests: true,
            logResponses: true
        } ),
        aiMiddleware( "retry", {
            maxAttempts: 3,
            backoffMs: 1000
        } ),
        aiMiddleware( "guardrail", {
            systemPrompt: "Never discuss competitor pricing or internal processes"
        } ),
        aiMiddleware( "maxToolCalls", {
            max: 5
        } ),
        aiMiddleware( "flightRecorder" )
    ]
)
response = agent.run( "Help me track my order #12345" )
Powerful multi-memory architecture where each agent can have one or more memories attached to it.
Mix standard conversation memories with vector-based semantic search for hybrid intelligence.
Want to use another memory provider? No problem, build your own custom memory or custom vector memory provider easily!
Multi-Tenant Ready: Built-in isolation with userId and conversationId support across all memory types
Autonomous agent with instructions → one or more memory types per agent → recent context + semantic search
Load content from 30+ file formats, databases, APIs, and web sources into vector databases for RAG.
Need a custom loader? Build your own by extending BaseDocumentLoader.
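As a hypothetical sketch: BaseDocumentLoader is the named extension point, but the method to override, the file path, and the document shape returned here are all assumptions, so verify the real base class contract.
// Hypothetical custom loader extending BaseDocumentLoader
class extends="BaseDocumentLoader" {

    function load() {
        // Pull content from any source your app can reach (illustrative path)
        var text = fileRead( "/data/export.txt" )
        // Assumed document shape: content plus metadata
        return [ { content: text, metadata: { source: "custom" } } ]
    }

}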
Automatic Processing: Load, chunk, embed, and store documents with a single command
PDFs, docs, web pages, databases → split into optimally sized chunks → ready for semantic search
This is what makes BoxLang AI so powerful: you can easily listen to and interact with the entire AI workflow.
Hook into every step of the AI pipeline to add logging, monitoring, validation, or custom logic.
Complete Observability: Every interaction triggers events you can hook into
Your app makes AI requests → BoxLang AI intercepts & notifies → your listeners log, validate, analyze
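A sketch of listening to those events with BoxLang's interceptor registration; the registration signature and the event names used here (onAIRequest, onAIResponse) are assumptions, so check the module docs for the actual announced states.
// Register listeners for AI lifecycle events (event names assumed)
boxRegisterInterceptor(
    ( data ) => println( "AI request intercepted" ),
    "onAIRequest"
)
boxRegisterInterceptor(
    ( data ) => println( "AI response intercepted" ),
    "onAIResponse"
)
// Both listeners fire around this call
aiChat( "Hello" )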
Expose tools as MCP Servers or consume external MCP services as MCP Clients.
Build microservices for AI agents with multi-tenant support and HTTP/STDIO transports.
Distributed AI: Connect agents with external tools and microservices via standardized protocol
MCP Servers expose your tools & services → MCP Clients use tools from servers → AI-powered experiences
Get started in minutes with simple examples. Click on our full documentation to dive deeper.
// Install via the BoxLang binary (OS installation)
install-bx-module bx-ai
// For web runtimes, install via CommandBox
box install bx-ai
// boxlang.json
{
    "modules": {
        "bxai": {
            "provider": "openai",
            "apiKey": "sk-..."
        }
    }
}
// Basic chat
answer = aiChat( "Explain recursion" )
println( answer )
// With parameters
answer = aiChat(
    "Write a haiku about coding",
    { temperature: 0.9, model: "gpt-4" }
)
// Get JSON automatically
user = aiChat(
    "Create a user with name and email",
    { returnFormat: "json" }
)
println( user.name )
println( user.email )
// Real-time responses
aiChatStream(
    "Tell me a story",
    ( chunk ) => {
        content = chunk.choices
            ?.first()
            ?.delta
            ?.content ?: ""
        print( content )
    }
)
// Create callable functions
weather = aiTool(
    name: "get_weather",
    description: "Get weather",
    callback: ( args ) => {
        return { temp: 72 }
    }
)
aiChat( "Weather in SF?", { tools: [weather] } )
// Build reusable workflows
pipeline = aiMessage()
    .system( "You are helpful" )
    .user( "Explain ${topic}" )
    .toDefaultModel()
    .transform( r => r.content )
result = pipeline.run( { topic: "AI" } )
// Autonomous agent
agent = aiAgent()
    .name( "Assistant" )
    .instructions( "Help research" )
    .memory( aiMemory( type: "windowed" ) )
    .tools( [ searchTool ] )
agent.chat( "Research AI trends" )
// Load documents for RAG
docs = aiDocuments( source: "docs/*.pdf" )
memory = aiMemory( type: "vector" )
memory.addDocuments( docs )
aiChat( "Summarize docs", { memory: memory } )
// Non-blocking requests
future = aiChatAsync( "Question 1" )
future2 = aiChatAsync( "Question 2" )
// Process results
future.then( r => println( r ) )
future2.then( r => println( r ) )
From simple chatbots to complex AI pipelines
Build conversational interfaces with memory and context awareness. Perfect for customer support and virtual assistants.
Generate, review, and explain code. Build AI-powered IDEs and development tools with real-time assistance.
Build knowledge bases that answer questions from your documents. Support 30+ file formats with vector search.
Create articles, documentation, marketing copy, and social media content. Automate content workflows.
Extract insights from text and structured data. Build AI-powered analytics and reporting tools.
Create autonomous agents that can research, analyze, and execute complex multi-step tasks.
Ortus Solutions offers professional services for multi-tenant AI platforms, RAG systems, and AI agent architectures. We built BoxLang AI — now we can help you build with it.
Everything you need to succeed with BoxLang AI
Comprehensive guides, API reference, and tutorials
Source code, examples, and issue tracking
Join our Slack channel and forums
Learn about the BoxLang language
BoxLang AI+ includes additional providers, advanced memory systems, enhanced tooling, and priority support.
Get insider news and releases, personalized just for you