LlamaIndexTS v0.3.0
What's new in LlamaIndexTS v0.3.0
Agents
In this release, we've not only ported the Agent module from the LlamaIndex Python version but also significantly enhanced it to be more powerful and user-friendly for JavaScript/TypeScript applications.
Starting from v0.3.0, we are introducing multiple agents specifically designed for RAG applications, including:
OpenAIAgent
AnthropicAgent
ReActAgent
For example, you can create an OpenAIAgent with a set of tools and chat with it:
import { OpenAIAgent } from "llamaindex";
import { tools } from "./tools";

const agent = new OpenAIAgent({
  tools: [...tools],
});

const { response } = await agent.chat({
  message: "What is the weather today?",
  stream: false,
});

console.log(response.message.content);
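The ./tools module imported above is assumed to export an array of tools. A minimal sketch (the file and the getWeather tool below are illustrative, not part of the release) could look like this, using the same FunctionTool API described later in this post:

// tools.ts (illustrative): export an array of tools for the agent
import { FunctionTool } from "llamaindex";

const getWeather = FunctionTool.from<{ city: string }>(
  ({ city }) => `The weather in ${city} is sunny.`, // replace with a real weather lookup
  {
    name: "getWeather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "The city to look up" },
      },
      required: ["city"],
    },
  },
);

export const tools = [getWeather];

The other agents listed above share the same chat API, so you can swap OpenAIAgent for AnthropicAgent or ReActAgent without changing the rest of the snippet.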
We are also introducing the abstract AgentRunner class, which allows you to create your own agent by simply implementing the task handler.
import { AgentRunner, AgentWorker, OpenAI, type TaskHandler } from "llamaindex";

class MyLLM extends OpenAI {}

export class MyAgentWorker extends AgentWorker<MyLLM> {
  taskHandler = MyAgent.taskHandler;
}

export class MyAgent extends AgentRunner<MyLLM> {
  // `Params` is your own options type: llm, chatHistory, systemPrompt, plus either `tools` or a `toolRetriever`
  constructor(params: Params) {
    super({
      llm: params.llm,
      chatHistory: params.chatHistory ?? [],
      systemPrompt: params.systemPrompt ?? null,
      runner: new MyAgentWorker(),
      tools:
        "tools" in params
          ? params.tools
          : params.toolRetriever.retrieve.bind(params.toolRetriever),
    });
  }

  // createStore creates a store for each task; the default store only includes `messages` and `toolOutputs`
  createStore = AgentRunner.defaultCreateStore;

  static taskHandler: TaskHandler<MyLLM> = async (step, enqueueOutput) => {
    const { llm, stream } = step.context;
    // call the LLM with the messages stored so far
    const response = await llm.chat({
      stream,
      messages: step.context.store.messages,
    });
    // store the response for the next task step
    step.context.store.messages = [
      ...step.context.store.messages,
      response.message,
    ];
    // your logic here to decide whether to continue the task
    const shouldContinue = Math.random() > 0.5; /* <-- replace with your own logic */
    enqueueOutput({
      taskStep: step,
      output: response,
      isLast: !shouldContinue,
    });
    if (shouldContinue) {
      const content = await someHeavyFunctionCall(); // placeholder for your own follow-up work
      // if you want to continue the task, you can insert new context for the next task step
      step.context.store.messages = [
        ...step.context.store.messages,
        {
          content,
          role: "user",
        },
      ];
    }
  };
}
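Once the worker and task handler are in place, the custom agent can be used like the built-in ones. Here is a minimal sketch, assuming MyLLM from above, the tools array from earlier, and a Params type that accepts llm and tools:

// Instantiate the custom agent and chat with it, just like the built-in agents
const agent = new MyAgent({
  llm: new MyLLM(),
  tools: [...tools],
});

const { response } = await agent.chat({
  message: "What is the weather today?",
  stream: false,
});
console.log(response.message.content);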
Web Streams API for streaming responses
The Web Streams API is a web standard used by many modern web frameworks and runtimes (such as React 19, Deno, and Node.js 22). We have migrated our streaming responses to Web Streams to ensure broader compatibility.
For instance, you can consume the streaming response in a simple HTTP server:
import { createServer } from "http";
import { OpenAIAgent } from "llamaindex";
import { streamToResponse } from "ai";
import { tools } from "./tools";

const agent = new OpenAIAgent({
  tools: [...tools],
});

const server = createServer(async (req, res) => {
  const response = await agent.chat({
    message: "What is the weather today?",
    stream: true,
  });
  // Transform the response into a string readable stream
  const stream: ReadableStream<string> = response.pipeThrough(
    new TransformStream({
      transform: (chunk, controller) => {
        controller.enqueue(chunk.response.delta);
      },
    }),
  );
  // Pipe the stream to the HTTP response
  streamToResponse(stream, res);
});

server.listen(3000);
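On runtimes where ReadableStream is async-iterable (for example Node.js 18+ and Deno), you can also consume the streaming response directly with for await. This sketch assumes the same chunk shape (response.delta) used in the transform above:

const response = await agent.chat({
  message: "What is the weather today?",
  stream: true,
});
// Each chunk carries the incremental text in `response.delta`
for await (const chunk of response) {
  process.stdout.write(chunk.response.delta);
}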
Or it can be integrated into React Server Components (RSC) in Next.js:
// app/actions/index.tsx
"use server";
import { createStreamableUI } from "ai/rsc";
import { OpenAIAgent } from "llamaindex";
import type { ChatMessage } from "llamaindex/llm/types";

export async function chatWithAgent(
  question: string,
  prevMessages: ChatMessage[] = [],
) {
  const agent = new OpenAIAgent({
    tools: [],
  });
  const responseStream = await agent.chat({
    stream: true,
    message: question,
    chatHistory: prevMessages,
  });
  const uiStream = createStreamableUI(<div>loading...</div>);
  responseStream
    .pipeTo(
      new WritableStream({
        start: () => {
          uiStream.update("response:");
        },
        write: async (message) => {
          uiStream.append(message.response.delta);
        },
      }),
    )
    .catch(uiStream.error);
  return uiStream.value;
}
// app/src/page.tsx
"use client";
import { chatWithAgent } from "@/actions";
import type { JSX } from "react";
import { useFormState } from "react-dom";

export const runtime = "edge";

export default function Home() {
  const [state, action] = useFormState<JSX.Element | null>(async () => {
    return chatWithAgent("hello!", []);
  }, null);
  return (
    <main>
      {state}
      <form action={action}>
        <button>Chat</button>
      </form>
    </main>
  );
}
Improvements in LlamaIndexTS v0.3.0
Better TypeScript support
We have made significant improvements to the type system to ensure that all code is thoroughly checked before it is published. These ongoing enhancements have already resulted in better module reliability and a better developer experience.
For example, we have improved the FunctionTool type with generic support:
import { FunctionTool } from "llamaindex";

type Input = {
  a: number;
  b: number;
};

const sumNumbers = FunctionTool.from<Input>(
  ({ a, b }) => `${a + b}`, // a and b are type-checked as numbers
  // the JSON schema below is checked against Input; a mismatch is a type error
  {
    name: "sumNumbers",
    description: "Use this function to sum two numbers",
    parameters: {
      type: "object",
      properties: {
        a: {
          type: "number",
          description: "The first number",
        },
        b: {
          type: "number",
          description: "The second number",
        },
      },
      required: ["a", "b"],
    },
  },
);
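The typed tool can then be handed straight to an agent. Here is a small usage sketch (the prompt is illustrative):

const agent = new OpenAIAgent({
  tools: [sumNumbers],
});

const { response } = await agent.chat({
  message: "How much is 2 + 2?",
  stream: false,
});
console.log(response.message.content);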
Better Next.js, Deno, Cloudflare Worker, and Waku (Vite) support
In addition to Node.js, LlamaIndexTS now offers enhanced support for Next.js, Deno, and Cloudflare Workers, making it more versatile across different platforms.
You can now install llamaindex and import it directly into your existing Next.js, Deno, or Cloudflare Workers project without any extra configuration.
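For example, a minimal Next.js Route Handler could look like the sketch below (the route path and request shape are illustrative, not part of the release):

// app/api/chat/route.ts (illustrative)
import { OpenAIAgent } from "llamaindex";

export async function POST(request: Request) {
  const { message } = await request.json();
  const agent = new OpenAIAgent({ tools: [] });
  const { response } = await agent.chat({
    message,
    stream: false,
  });
  return Response.json({ content: response.message.content });
}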