Professional Open Source

One API. All AI Providers.

The official AI library for the BoxLang JVM dynamic language.
Unified, fluent APIs to orchestrate multi-model workflows, autonomous agents, RAG pipelines, and AI-powered apps.

[Hero diagram: Your App calls one line of code, aiChat( message ), into the BoxLang AI unified intelligence hub (12+ AI providers, RAG + vector search, agents + tools), which connects to OpenAI, Claude, Gemini, Grok, DeepSeek, Ollama, and 6 more. One API, unlimited AI power: switch providers, combine models, and orchestrate agents with simple, fluent code.]
12+ AI Providers
10+ Vector DBs
30+ File Formats
20+ Memory Types
MCP Servers
MCP Invokers
Multi-Tenant
22+ Events

Why BoxLang AI?

Build powerful AI workflows with one API — no vendor lock-in, full RAG & multi-provider support.

12+ AI Providers

One unified API for OpenAI, Claude, Gemini, Grok, Ollama, DeepSeek, Perplexity, and more. Switch providers with a single line.

// Default Provider
aiChat( msg )

// Specific Provider
aiChat( msg, { provider: "claude" } )

// Chat Async Futures
aiChatAsync( msg, { provider: "grok" } )
  .then( r => println(r) )
  .onError( e => println(e) )
  .get()

Multi-Tenant Memory

Enterprise-grade memory isolation with userId and conversationId. 20+ memory types including vector search.

aiMemory(
    type: "vector",
    key: createUUID(),
    userId: "123",
    conversationId: "abc"
)

AI Agents

Build autonomous agents with memory, tools, sub-agents, and reasoning. Perfect for complex workflows and multi-step tasks.

aiAgent(
  name: "Research Assistant",
  instructions: "Help research AI trends",
  memory: [ window, summary, chroma ],
  subAgents: [ research, coder ]
)
.tools( [searchTool, dbTool] )
.run( "Search AI trends" )

AI Pipelines

Composable workflows with models, messages, and transformers. Build reusable templates for any AI task.

aiMessage( "Explain AI in one sentence" )
    .system( "You are a helpful assistant." )
    .toDefaultModel()
    .transform( r => r.content.uCase() )
    .run()

Real-Time Tools

Enable AI to call functions, access APIs, and interact with external systems in real-time with built-in tool support.

weatherTool = aiTool(
  "get_weather",
  "Get current weather for a location",
  location => {
    // Call your weather API
    return getWeatherData( location )
  }
)

Vector Memory & RAG

Semantic search with 10+ vector databases. Build RAG systems with document loaders for 30+ file formats, with easy batching and auto-chunking.

aiDocuments( "/docs", {
  type: "directory",
  recursive: true,
  extensions: ["md", "txt", "pdf"]
} ).toMemory(
    memory  = pinecone,
    options = { chunkSize: 1000, overlap: 200 }
);

Streaming Responses

Real-time streaming through pipelines for responsive applications. Perfect for live chat interfaces.

aiMessage( "Write about ${topic}" )
  .system( "You are ${style}" )
  .toDefaultModel()
  .stream(
    ( chunk ) => print( chunk.choices?.first()?.delta?.content ?: "" ),
    // input bindings
    { style: "poetic", topic: "nature" }
  )

Local AI with Ollama

Run models locally for privacy, offline use, and zero API costs. Full Ollama integration included.

// Start the Ollama server
docker compose -f docker-compose-ollama.yml up -d

// Configure BoxLang AI
settings: {
  provider: "ollama",
  model: "llama3"
}

// Chat away
aiChat( "Hello from local AI!" )

Document Loaders

Load PDFs, Word docs, CSVs, JSON, XML, Markdown, web pages, and 30+ other formats. Perfect for RAG and document processing.

// Load a text file
docs = aiDocuments( "/path/to/document.txt" ).load()

// Load a directory of files
docs = aiDocuments( "/path/to/folder" ).load()

// Load from URL
docs = aiDocuments( "https://example.com/page.html" ).load()

// Load with auto-chunking
docs = aiDocuments( "/path/to/file.md" )
    .chunkSize( 500 )
    .overlap( 50 )
    .load()

MCP Server

BoxLang AI exposes MCP Server capabilities so you can create AI-powered microservices over HTTP or STDIO transports. One easy endpoint by convention: http://app/~bxai/mcp.bxm

MCPServer( "myApp" )
  .setDescription( "My Application MCP Server" )
  .registerTool(
    aiTool( "search", "Search for documents", ( query ) => {
     return searchService.search( query )
    } )
  )
  .registerTool(
    aiTool( "calculate", "Perform calculations", ( expression ) => {
     return evaluate( expression )
    } )
  )

MCP Invokers

Call MCP Servers directly from BoxLang AI with built-in invokers. Simplify distributed AI workflows and build internal tools and microservices.

// Create an MCP client
mcpClient = MCP( "http://localhost:3000" )

// Send a request to a tool
result = mcpClient.send( "searchDocs", {
  query: "BoxLang syntax",
  limit: 10
} )

// Check the response
if ( result.isSuccess() ) {
    println( result.getData() )
} else {
    println( "Error: " & result.getError() )
}

Structured Output

Extract type-safe, validated data from AI responses using classes, structs, or JSON schemas.

// With class
model = aiModel()
  .structuredOutput( new Product() )

// With struct
model = aiModel()
  .structuredOutput( {
    name: "",
    price: 0.0,
    inStock: false
} )

// With array
model = aiModel()
  .structuredOutput( [new Contact()] )

Supported Providers

Switch between providers or use multiple providers within the same AI Agent with zero code changes. You can also create your own custom providers easily by implementing the provider interface. Never be locked in again. Be fluid!
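
For example, a custom provider is just a BoxLang class that fulfills the provider contract. The sketch below is illustrative only: the method name and response shape are assumptions, not the actual bx-ai interface.

// MyProvider.bx: hypothetical sketch; the real contract comes from the bx-ai provider interface
class {

  // Assumed entry point: receive a normalized chat request, return a normalized response
  function chat( required struct request ){
    // Call your in-house model's HTTP API here and map its payload
    // into the response shape bx-ai expects
    return { content: "Hello from my custom provider" }
  }
}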

Multi-Model Orchestration with RAG

Full Stack AI: Combine vector search, multiple AI providers, and specialized agents in one workflow

[Diagram: Your App sends user queries to BoxLang AI (Unified API, RAG Pipeline, Agent Router), which connects to AI providers such as GPT-4, Claude, Gemini, DeepSeek, Llama, and 7 more. The RAG pipeline flows Docs → Chunk → Vector DB → Search, while agent orchestration routes work through a Router to Writer, Coder, and Analyst agents to produce the Result.]

One API, Unlimited Possibilities: Mix and match models per task, combine semantic search with any provider, orchestrate complex multi-agent workflows
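
As a condensed sketch built only from the calls shown elsewhere on this page (aiMemory, aiDocuments, aiAgent), a full-stack workflow might look like this; names and options are illustrative.

// Ingest documents into a vector memory, then orchestrate agents over it
knowledge = aiMemory( type: "vector", key: "kb" )

aiDocuments( "/docs", { type: "directory", recursive: true } )
  .toMemory( memory = knowledge, options = { chunkSize: 1000, overlap: 200 } )

writer  = aiAgent( name: "Writer", instructions: "Draft clear summaries" )
analyst = aiAgent( name: "Analyst", instructions: "Analyze trends in retrieved context" )

aiAgent(
  name         : "Router",
  instructions : "Route research questions to the right sub-agent",
  memory       : [ knowledge ],
  subAgents    : [ writer, analyst ]
).run( "What do our docs say about Q3 AI adoption?" )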

Supported Memories

Powerful multi-memory architecture where each agent can have one or more memories attached to it.
Mix standard conversation memories with vector-based semantic search for hybrid intelligence.
Want to use another memory provider? No problem: build your own custom memory or vector memory provider easily!
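
For instance, an agent can pair a short-term windowed memory with a vector memory for hybrid recall, as in this hedged sketch (the type names mirror the lists below; the options are illustrative):

// Hybrid memory sketch: recent context plus semantic search, isolated per tenant
recent   = aiMemory( type: "windowed", userId: "123", conversationId: "abc" )
semantic = aiMemory( type: "vector",   userId: "123", conversationId: "abc" )

aiAgent(
  name        : "Support Bot",
  instructions: "Answer using recent context and prior knowledge",
  memory      : [ recent, semantic ]
).run( "What did we decide about the refund policy?" )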

Standard Memories

Windowed
Summary
Session
File
Cache
JDBC

Vector Memories

BoxVector
Hybrid
ChromaDB
PostgreSQL
Pinecone
Qdrant
Weaviate
Milvus

Multi-Memory Architecture

Multi-Tenant Ready: Built-in isolation with userId and conversationId support across all memory types

AI Agent

Autonomous agent with instructions

Multiple Memories

1 or more memory types per agent

Hybrid Intelligence

Recent context + semantic search

Supported Document Loaders

Load content from 30+ file formats, databases, APIs, and web sources into vector databases for RAG.
Need a custom loader? Build your own by extending BaseDocumentLoader.
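
A custom loader might look like the hedged sketch below. BaseDocumentLoader is the documented base class, but the overridden method name and the returned document shape are assumptions for illustration.

// CustomLoader.bx: illustrative only; check the BaseDocumentLoader API for the real hook names
class extends="BaseDocumentLoader" {

  // Assumed hook: read the source and return an array of document structs
  function load(){
    var docs = []
    var rows = queryExecute( "SELECT id, body FROM articles" )
    for( var row in rows ){
      docs.append( { id: row.id, content: row.body, metadata: { source: "articles" } } )
    }
    return docs
  }
}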

File Formats

Text
Markdown
PDF
CSV
JSON
XML
Logs

Web Sources

HTTP/HTTPS
Web Crawler
RSS/Atom

System Sources

Directory
SQL

RAG Pipeline with Document Loaders

Automatic Processing: Load, chunk, embed, and store documents with a single command

Source Files

PDFs, docs, web pages, databases

Auto-Chunking

Split into optimally sized chunks

Vector Store

Ready for semantic search
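
Putting it together with the calls already shown above (aiDocuments, aiMemory, aiChat), an end-to-end ingest-and-ask flow could look like this; the path and options are illustrative.

// Load, chunk, embed, and store a knowledge base, then answer questions against it
kb = aiMemory( type: "vector" )

aiDocuments( "/knowledge-base", { type: "directory", extensions: [ "pdf", "md" ] } )
  .toMemory( memory = kb, options = { chunkSize: 800, overlap: 100 } )

answer = aiChat( "What is our on-call escalation policy?", { memory: kb } )
println( answer )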

BoxLang AI Events

This is what makes BoxLang AI so powerful. You can easily listen to and interact with your entire AI workflow.
Hook into every step of the AI pipeline to add logging, monitoring, validation, or custom logic.
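
As a rough sketch, a listener is simply a class whose method names match the events listed below; the argument shape and the registration mechanism (for example, interceptor registration in your runtime) are assumptions here, so check the documentation for the exact contract.

// AILogger.bx: hedged sketch of an event listener; event names come from the lists below
class {

  // Assumed signature: each event receives a struct of contextual data
  function onAIRequest( struct data ){
    println( "AI request fired with keys: " & data.keyList() )
  }

  function onAIError( struct data ){
    println( "AI call failed, data keys: " & data.keyList() )
  }
}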

Request/Response Lifecycle

onAIMessageCreate
onAIRequestCreate
onAIProviderRequest
onAIProviderCreate
onAIModelCreate
onAIRequest
onAIResponse
onAIError
onAIRateLimitHit
onAITokenCount

Pipeline & Model Execution

onAITransformCreate
beforeAIPipelineRun
afterAIPipelineRun
beforeAIModelInvoke
afterAIModelInvoke
onAIToolCreate
beforeAIToolExecute
afterAIToolExecute

MCP Server Events

onMCPServerCreate
onMCPServerRemove
onMCPRequest
onMCPResponse
onMCPError

Event-Driven AI Architecture

Complete Observability: Every interaction triggers events you can hook into

Your Application

Makes AI requests

Event System

Intercepts & notifies

Your Listeners

Log, validate, analyze

Model Context Protocol

Expose tools as MCP Servers or consume external MCP services as MCP Clients.
Build microservices for AI agents with multi-tenant support and HTTP/STDIO transports.

MCP Server

Expose Tools
Multi-Tenant
Multi-Server
HTTP Transport
STDIO Transport
Auth & CORS

MCP Client (Invokers)

Consume Services
Use as AI Tools
Agent Integration
Chat Integration
Discovery
Response Handling

MCP Integration Architecture

Distributed AI: Connect agents with external tools and microservices via standardized protocol

MCP Servers

Expose your tools & services

AI Agents

Use tools from servers

Applications

AI-powered experiences
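
Since MCP clients and aiTool() compose naturally, a chat or agent can use a remote MCP service as a regular tool. The sketch below is built only from the MCP(), aiTool(), and aiChat() calls shown earlier; the tool name and endpoint are placeholders.

// Wrap an MCP call as a regular AI tool so chats and agents can use remote services
mcpClient = MCP( "http://localhost:3000" )

remoteSearch = aiTool(
  "remote_search",
  "Search documents via the MCP microservice",
  ( query ) => mcpClient.send( "searchDocs", { query: query, limit: 5 } ).getData()
)

aiChat( "Find docs about BoxLang closures", { tools: [ remoteSearch ] } )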

Quick Start

Get started in minutes with simple examples, then dive into our full documentation to go deeper.

Installation

// For OS installations, use the BoxLang binary
install-bx-module bx-ai

// For web runtimes, install via CommandBox
box install bx-ai

Configuration

// boxlang.json
{
  "modules": {
    "bxai": {
      "provider": "openai",
      "apiKey": "sk-..."
    }
  }
}

Simple Chat

// Basic chat
answer = aiChat( "Explain recursion" )
println( answer )

// With parameters
answer = aiChat(
    "Write a haiku about coding",
    { temperature: 0.9, model: "gpt-4" }
)

Structured Output

// Get JSON automatically
user = aiChat(
    "Create a user with name and email",
    { returnFormat: "json" }
)

println( user.name )
println( user.email )

Streaming

// Real-time responses
aiChatStream(
    "Tell me a story",
    ( chunk ) => {
        content = chunk.choices
            ?.first()
            ?.delta
            ?.content ?: ""
        print( content )
    }
)

AI Tools

// Create callable functions
weather = aiTool(
    name: "get_weather",
    description: "Get weather",
    callback: ( args ) => {
        return { temp: 72 }
    }
)

aiChat( "Weather in SF?", { tools: [weather] } )

Pipelines

// Build reusable workflows
pipeline = aiMessage()
    .system( "You are helpful" )
    .user( "Explain ${topic}" )
    .toDefaultModel()
    .transform( r => r.content )

result = pipeline.run( { topic: "AI" } )

AI Agents

// Autonomous agent
agent = aiAgent()
    .name( "Assistant" )
    .instructions( "Help research" )
    .memory( aiMemory( type: "windowed" ) )
    .tools( [searchTool] )

agent.chat( "Research AI trends" )

Document Processing

// Load documents for RAG
docs = aiDocuments( source: "docs/*.pdf" )

memory = aiMemory( type: "vector" )
memory.addDocuments( docs )

aiChat( "Summarize docs", { memory: memory } )

Async Operations

// Non-blocking requests
future = aiChatAsync( "Question 1" )
future2 = aiChatAsync( "Question 2" )

// Process results
future.then( r => println( r ) )
future2.then( r => println( r ) )

Built For Real-World Use Cases

From simple chatbots to complex AI pipelines

Chatbots & Assistants

Build conversational interfaces with memory and context awareness. Perfect for customer support and virtual assistants.

Code Generation

Generate, review, and explain code. Build AI-powered IDEs and development tools with real-time assistance.

RAG & Q&A Systems

Build knowledge bases that answer questions from your documents. Support 30+ file formats with vector search.

Content Generation

Create articles, documentation, marketing copy, and social media content. Automate content workflows.

Data Analysis

Extract insights from text and structured data. Build AI-powered analytics and reporting tools.

AI Agents & Workflows

Create autonomous agents that can research, analyze, and execute complex multi-step tasks.

Need Enterprise AI Implementations?

Ortus Solutions

Ortus Solutions offers professional services for multi-tenant AI platforms, RAG systems, and AI agent architectures. We built BoxLang AI — now we can help you build with it.

GSA Schedule Holder
20 Years Experience
250+ Projects

Resources

Everything you need to succeed with BoxLang AI

Documentation

Comprehensive guides, API reference, and tutorials

Read Docs

GitHub

Source code, examples, and issue tracking

View Repo

Community

Join our Slack channel and forums

Join Slack

BoxLang

Learn about the BoxLang language

Learn More

Want More Features?

BoxLang AI+ includes additional providers, advanced memory systems, enhanced tooling, and priority support.