The official AI library for the BoxLang JVM dynamic language.
Unified, fluent APIs to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered apps.
Build powerful AI workflows with one API — no vendor lock-in, full RAG & multi-provider support.
One unified API for OpenAI, Claude, Gemini, Grok, Ollama, DeepSeek, Perplexity, and more. Switch providers with a single line.
// Default Provider
aiChat( msg )

// Specific Provider
aiChat( msg, { provider: "claude" } )

// Chat Async Futures
aiChatAsync( msg, { provider: "grok" } )
    .then( r => println( r ) )
    .onError( e => println( e ) )
    .get()
Enterprise-grade memory isolation with userId and conversationId. 20+ memory types including vector search.
aiMemory(
    type           : "vector",
    key            : createUUID(),
    userId         : "123",
    conversationId : "abc"
)
Build autonomous agents with memory, tools, sub-agents, and reasoning. Perfect for complex workflows and multi-step tasks.
aiAgent(
    name         : "Research Assistant",
    instructions : "Help research AI trends",
    memory       : [ window, summary, chroma ],
    subAgents    : [ research, coder ]
)
.tools( [ searchTool, dbTool ] )
.run( "Search AI trends" )
Composable workflows with models, messages, and transformers. Build reusable templates for any AI task.
aiMessage( "Explain AI in one sentence" )
    .system( "You are a helpful assistant." )
    .toDefaultModel()
    .transform( r => r.content.uCase() )
    .run()
Enable AI to call functions, access APIs, and interact with external systems in real-time with built-in tool support.
weatherTool = aiTool(
    "get_weather",
    "Get current weather for a location",
    location => {
        // Call your weather API
        return getWeatherData( location )
    }
)
Semantic search with 10+ vector databases. Build RAG systems with document loaders for 30+ file formats, with easy batching and auto-chunking.
aiDocuments( "/docs", {
    type       : "directory",
    recursive  : true,
    extensions : [ "md", "txt", "pdf" ]
} ).toMemory(
    memory  = pinecone,
    options = { chunkSize: 1000, overlap: 200 }
);
Real-time streaming through pipelines for responsive applications. Perfect for live chat interfaces.
aiMessage( "Write about ${topic}" )
    .system( "You are ${style}" )
    .toDefaultModel()
    .stream(
        ( chunk ) => print( chunk.choices?.first()?.delta?.content ?: "" ),
        // input bindings
        { style: "poetic", topic: "nature" }
    )
Run models locally for privacy, offline use, and zero API costs. Full Ollama integration included.
// Start the Ollama server
docker compose -f docker-compose-ollama.yml up -d

// Configure BoxLang AI
settings: {
    provider : "ollama",
    model    : "llama3"
}
// Chat away
aiChat( "Hello from local AI!" )
Load PDFs, Word docs, CSVs, JSON, XML, Markdown, web pages, and 30+ formats. Perfect for RAG and document processing.
// Load a text file
docs = aiDocuments( "/path/to/document.txt" ).load()

// Load a directory of files
docs = aiDocuments( "/path/to/folder" ).load()

// Load from URL
docs = aiDocuments( "https://example.com/page.html" ).load()

// Load with auto-chunking
docs = aiDocuments( "/path/to/file.md" )
    .chunkSize( 500 )
    .overlap( 50 )
    .load()
BoxLang AI exposes MCP Server capabilities so you can create AI-powered microservices over either HTTP or STDIO transports.
One easy endpoint by convention: http://app/~bxai/mcp.bxm
MCPServer( "myApp" )
    .setDescription( "My Application MCP Server" )
    .registerTool(
        aiTool( "search", "Search for documents", ( query ) => {
            return searchService.search( query )
        } )
    )
    .registerTool(
        aiTool( "calculate", "Perform calculations", ( expression ) => {
            return evaluate( expression )
        } )
    )
Call MCP Servers directly from BoxLang AI with built-in invokers. Simplify distributed AI workflows and build internal tools and microservices.
// Create an MCP client
mcpClient = MCP( "http://localhost:3000" )

// Send a request to a tool
result = mcpClient.send( "searchDocs", {
    query : "BoxLang syntax",
    limit : 10
} )

// Check the response
if ( result.isSuccess() ) {
    println( result.getData() )
} else {
    println( "Error: " & result.getError() )
}
Extract type-safe, validated data from AI responses using classes, structs, or JSON schemas.
// With a class
model = aiModel()
    .structuredOutput( new Product() )

// With a struct
model = aiModel()
    .structuredOutput( {
        name    : "",
        price   : 0.0,
        inStock : false
    } )

// With an array
model = aiModel()
    .structuredOutput( [ new Contact() ] )
Switch between providers or use multiple providers within the same AI agent with zero code changes. You can also create your own custom providers easily by implementing the provider interface. Never be locked in again. Be fluid!
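As a rough sketch, a custom provider is a class that fulfills the provider contract. Everything below is an illustrative assumption: the base class name, the method signature, and the `myInternalLLM` service are placeholders, not the actual bx-ai interface — consult the provider documentation for the real contract.

```boxlang
// Hypothetical sketch only: the base class name and method signature
// are assumptions, not the real bx-ai provider API.
class MyProvider extends BaseProvider {

    // Forward a chat request to an internal LLM endpoint (placeholder
    // service) and return its response for the pipeline.
    function invoke( required struct request ){
        return myInternalLLM.chat( request )
    }
}
```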
Full Stack AI: Combine vector search, multiple AI providers, and specialized agents in one workflow:
User query → Docs → Chunk → Vector DB → Search → Router → ( Writer | Coder | Analyst ) → Result
Powerful multi-memory architecture where each agent can have one or more memories attached to it.
Mix standard conversation memories with vector-based semantic search for hybrid intelligence.
Want to use another memory provider? No problem: build your own custom memory or custom vector memory provider easily!
Multi-Tenant Ready: Built-in isolation with userId and conversationId support across all memory types
Agent (autonomous, with instructions) → Memories (one or more memory types per agent) → Context (recent context + semantic search)
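For example, a hybrid setup could attach a conversational memory and a vector memory to the same agent, reusing the `aiMemory` and `aiAgent` signatures shown above (the `windowed` and `vector` type names come from the earlier examples; swap in your configured providers):

```boxlang
// Conversation history + semantic search on one agent
chatMemory = aiMemory( type: "windowed", userId: "123", conversationId: "abc" )
docsMemory = aiMemory( type: "vector", userId: "123", conversationId: "abc" )

agent = aiAgent(
    name         : "Support Agent",
    instructions : "Answer questions using our knowledge base",
    memory       : [ chatMemory, docsMemory ]
)
agent.chat( "What does the refund policy say?" )
```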
Load content from 30+ file formats, databases, APIs, and web sources into vector databases for RAG.
Need a custom loader? Build your own by extending BaseDocumentLoader.
Automatic Processing: Load, chunk, embed, and store documents with a single command
Load (PDFs, docs, web pages, databases) → Chunk (split into optimally sized chunks) → Embed (ready for semantic search)
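The load → chunk → embed → store steps above can be sketched as one chained call, reusing the `aiDocuments(...).toMemory(...)` pattern shown earlier (here `pinecone` stands in for any configured vector memory):

```boxlang
// Load a folder, chunk it, embed it, and store it in one command
aiDocuments( "/kb", { type: "directory", recursive: true } )
    .toMemory(
        memory  = pinecone,
        options = { chunkSize: 1000, overlap: 200 }
    )
```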
This is what makes BoxLang AI so powerful: you can easily listen to and interact with the entire AI workflow.
Hook into every step of the AI pipeline to add logging, monitoring, validation, or custom logic.
Complete Observability: Every interaction triggers events you can hook into
Your app (makes AI requests) → BoxLang AI (intercepts & notifies) → Your listeners (log, validate, analyze)
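A listener might hook a pipeline event roughly like this. Treat it strictly as a sketch: the registration call and the `onAIRequest` event name are illustrative assumptions, so check the bx-ai interceptor documentation for the actual announced events.

```boxlang
// Illustrative only: the registration BIF and event name are assumptions.
boxRegisterInterceptor(
    ( data ) => writeLog( text: "AI request intercepted: #data.toString()#" ),
    "onAIRequest"
)
```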
Expose tools as MCP Servers or consume external MCP services as MCP Clients.
Build microservices for AI agents with multi-tenant support and HTTP/STDIO transports.
Distributed AI: Connect agents with external tools and microservices via standardized protocol
MCP Server (expose your tools & services) → MCP Client (use tools from servers) → Agents (AI-powered experiences)
Get started in minutes with simple examples. Click on our full documentation to dive deeper.
// Install via the BoxLang binary (OS installation)
install-bx-module bx-ai
// For web runtimes, install via CommandBox
box install bx-ai
// boxlang.json
{
    "modules" : {
        "bxai" : {
            "provider" : "openai",
            "apiKey"   : "sk-..."
        }
    }
}
// Basic chat
answer = aiChat( "Explain recursion" )
println( answer )
// With parameters
answer = aiChat(
    "Write a haiku about coding",
    { temperature: 0.9, model: "gpt-4" }
)
// Get JSON automatically
user = aiChat(
    "Create a user with name and email",
    { returnFormat: "json" }
)
println( user.name )
println( user.email )
// Real-time responses
aiChatStream(
    "Tell me a story",
    ( chunk ) => {
        content = chunk.choices
            ?.first()
            ?.delta
            ?.content ?: ""
        print( content )
    }
)
// Create callable functions
weather = aiTool(
    name        : "get_weather",
    description : "Get weather",
    callback    : ( args ) => {
        return { temp: 72 }
    }
)
aiChat( "Weather in SF?", { tools: [weather] } )
// Build reusable workflows
pipeline = aiMessage()
    .system( "You are helpful" )
    .user( "Explain ${topic}" )
    .toDefaultModel()
    .transform( r => r.content )

result = pipeline.run( { topic: "AI" } )
// Autonomous agent
agent = aiAgent()
    .name( "Assistant" )
    .instructions( "Help research" )
    .memory( aiMemory( type: "windowed" ) )
    .tools( [ searchTool ] )

agent.chat( "Research AI trends" )
// Load documents for RAG
docs = aiDocuments( source: "docs/*.pdf" )
memory = aiMemory( type: "vector" )
memory.addDocuments( docs )
aiChat( "Summarize docs", { memory: memory } )
// Non-blocking requests
future = aiChatAsync( "Question 1" )
future2 = aiChatAsync( "Question 2" )
// Process results
future.then( r => println( r ) )
future2.then( r => println( r ) )
From simple chatbots to complex AI pipelines
Build conversational interfaces with memory and context awareness. Perfect for customer support and virtual assistants.
Generate, review, and explain code. Build AI-powered IDEs and development tools with real-time assistance.
Build knowledge bases that answer questions from your documents. Support 30+ file formats with vector search.
Create articles, documentation, marketing copy, and social media content. Automate content workflows.
Extract insights from text and structured data. Build AI-powered analytics and reporting tools.
Create autonomous agents that can research, analyze, and execute complex multi-step tasks.
Ortus Solutions offers professional services for multi-tenant AI platforms, RAG systems, and AI agent architectures. We built BoxLang AI — now we can help you build with it.
Everything you need to succeed with BoxLang AI
BoxLang AI+ includes additional providers, advanced memory systems, enhanced tooling, and priority support.