Chat SDK brings agents to your users
Author: Uzair Ahmad
Published: March 31, 2026
Back in early January, we threw a challenge at the whole company: find a way to 10x your actual output.
So people started building agents. Not just generic chatbots — these were focused, purpose-built helpers that took over all the boring, repetitive stuff that usually eats up hours.
At first, everyone created their own little interfaces for these agents. The AI SDK made it pretty straightforward with ready-made model connections and simple UI components.
But soon we ran into a wall. Everyone wanted to use their agents directly inside Slack. That meant every team suddenly had to learn how to connect to Slack’s API.
Then things got even messier. Once the agents were living in Slack, people started asking for the same thing on Discord, GitHub, Linear, and a bunch of other tools. Every new platform meant building yet another integration from scratch.
That’s when it clicked for us. Instead of forcing people to come to the agents… we needed to bring the agents to wherever people were already working.
Chat needs a smarter integration layer
We had made it super easy for our teams to build agents, but extending them to work everywhere turned out to be the hard part.
And this isn’t just our problem — it’s every company’s reality. Teams are already living in Microsoft Teams, WhatsApp, Telegram, Google Chat, and a dozen other messaging apps. If your agents aren’t there, people simply won’t use them.
That’s exactly why we built the Chat SDK.
Just like the AI SDK gave us one clean way to talk to any AI model, the Chat SDK does the same for messaging platforms. It hides all the messy, platform-specific quirks and gives developers (and their coding agents) a simple, unified framework to connect anywhere.
import { streamText } from "ai";

const result = await streamText({
  model: "anthropic/claude-opus-4.6", // swap out the provider
  prompt: "Hello world",
});

The AI SDK abstracts away individual provider logic, making provider and model changes a simple string change.
Developers no longer need to think about the way streaming might differ from one platform to the next, or how formatting, branching logic, or even reaction-handling should be tackled for individual APIs.
Write once, deploy everywhere
Chat SDK is a TypeScript library for building bots that run across Slack, Microsoft Teams, Google Chat, Discord, Telegram, GitHub, and Linear — all from a single codebase. The core chat package manages event routing and application logic. Platform-specific behavior is handled by adapters, so your handlers stay exactly the same no matter where you deploy.
Here's what a basic bot looks like:
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createRedisState } from "@chat-adapter/state-redis";

const bot = new Chat({
  userName: "mybot",
  adapters: {
    slack: createSlackAdapter(),
  },
  state: createRedisState(),
});

bot.onNewMention(async (thread) => {
  await thread.subscribe();
  await thread.post("Hello! I'm listening to this thread now.");
});

bot.onSubscribedMessage(async (thread, message) => {
  await thread.post(`You said: ${message.text}`);
});
Each adapter automatically picks up credentials from environment variables, so you can get started without any extra setup. Switching from Slack to Discord is as simple as swapping the adapter — no need to rewrite your bot.
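For instance, moving the bot above to Discord is just a config change. A minimal sketch, assuming a Discord adapter package named `@chat-adapter/discord` that follows the same factory pattern as the Slack adapter:

```typescript
import { Chat } from "chat";
// Package name assumed to mirror the Slack adapter's naming convention.
import { createDiscordAdapter } from "@chat-adapter/discord";
import { createRedisState } from "@chat-adapter/state-redis";

const bot = new Chat({
  userName: "mybot",
  adapters: {
    // Only this entry changes; every handler stays identical.
    discord: createDiscordAdapter(),
  },
  state: createRedisState(),
});
```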
Platform inconsistencies, handled
Platforms behave very differently from each other, and Chat SDK doesn’t hide those differences with fake promises. Instead, it handles them inside the adapter layer so your main application code stays clean.
Take streaming, for example. Slack has a native streaming path that renders bold, italic, lists, and other formatting in real time as the response arrives. Other platforms use a fallback streaming path, passing streamed text through each adapter’s markdown-to-native conversion pipeline at every step.
Before Chat SDK, those adapters received raw markdown strings, so users on Discord or Teams would see literal bold syntax until the final message was complete. Now that conversion happens automatically.
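The flavor of conversion an adapter performs can be sketched in plain TypeScript. This is an illustrative stand-in, not the SDK's actual pipeline: it maps standard markdown bold and italic to Slack's mrkdwn equivalents (`*bold*` and `_italic_`).

```typescript
// Illustrative sketch of one markdown-to-mrkdwn step, not the SDK's real pipeline.
// Standard markdown: **bold**, *italic*. Slack mrkdwn: *bold*, _italic_.
function toSlackMrkdwn(md: string): string {
  return md
    // Single-asterisk italic -> underscores (lookarounds skip ** pairs).
    .replace(/(?<!\*)\*(?!\*)(.+?)(?<!\*)\*(?!\*)/g, "_$1_")
    // Double-asterisk bold -> single asterisks.
    .replace(/\*\*(.+?)\*\*/g, "*$1*");
}

console.log(toSlackMrkdwn("**deploy** finished with *no* errors"));
// -> *deploy* finished with _no_ errors
```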
Table rendering follows the same pattern. The Table() component gives you a clean, composable API for rendering tables across every adapter. Pass in headers and rows, and Chat SDK figures out the rest. Slack renders Block Kit table blocks. Teams and Discord use GFM markdown tables. Google Chat uses monospace text widgets. Telegram converts tables to code blocks. GitHub and Linear continue to use their existing markdown pipelines.
import { Table } from "chat";

await thread.post(
  <Table
    headers={["Name", "Status", "Region"]}
    rows={[
      ["api-prod", "healthy", "iad1"],
      ["api-staging", "degraded", "sfo1"],
    ]}
  />
);
Cards, modals, and buttons work the same way. You write the element once using JSX, and each adapter renders it in whatever format the platform supports natively. If a platform doesn’t support a particular element, it falls back gracefully.
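As a sketch of what that looks like in practice (the `Card` and `Button` element names and props here are illustrative assumptions, following the same JSX pattern as `Table` above):

```tsx
// Illustrative only: element and prop names assumed, mirroring the Table pattern.
await thread.post(
  <Card title="Deploy finished">
    <Button id="view-logs" label="View logs" />
    <Button id="rollback" label="Roll back" />
  </Card>
);
```

On a platform with native interactive elements this could render as real buttons; elsewhere it would degrade to formatted text, per the fallback behavior described above.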
Why Chat SDK matters even for single platforms
Even if your agent only targets Slack, Chat SDK still solves real problems. Slack delivers channel and user mentions as opaque IDs, and Chat SDK automatically converts them to readable names so your agent actually understands the context of the conversation.
This translation works both ways. When the agent mentions someone using clear text, Chat SDK makes sure the notification actually triggers in Slack.
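The round trip can be sketched in plain TypeScript. This is an illustration of the idea, not the SDK's implementation; Slack encodes user mentions as `<@USERID>`, and the lookup table here is a stand-in for a real user directory.

```typescript
// Illustrative sketch, not the SDK's implementation.
// Slack encodes user mentions as <@USERID>; models reason better over names.
const users: Record<string, string> = { U123ABC: "jane" };

// Inbound: replace <@U123ABC> with @jane before prompting the model.
function toReadable(text: string): string {
  return text.replace(/<@(\w+)>/g, (_m: string, id: string) => `@${users[id] ?? id}`);
}

// Outbound: replace @jane with <@U123ABC> so Slack actually fires the notification.
function toSlack(text: string): string {
  const byName = Object.fromEntries(
    Object.entries(users).map(([id, name]) => [name, id])
  );
  return text.replace(/@(\w+)/g, (m: string, name: string) =>
    byName[name] ? `<@${byName[name]}>` : m
  );
}
```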
Agents need full context to be truly effective. Chat SDK automatically includes link preview content, referenced posts, and images directly in the agent’s prompts. Plus, while models generate standard markdown, Slack doesn’t support it natively. Chat SDK converts standard markdown to Slack’s variant automatically — and this happens in real time, even with Slack’s native append-only streaming API.
AI streaming, built in
The post() function accepts an AI SDK text stream directly, which means you can pipe a streaming LLM response to any chat platform without any extra wiring:
import { streamText } from "ai";

bot.onNewMention(async (thread) => {
  await thread.subscribe();
  const result = await streamText({
    model: "anthropic/claude-sonnet-4",
    prompt: "Summarize what's happening in this thread.",
  });
  await thread.post(result.textStream);
});
The adapter layer handles all the platform-specific rendering of that stream, including live formatting wherever the platform supports it.
State that scales
Thread subscriptions, distributed locks, and key-value cache state are handled through pluggable state adapters. Redis and ioredis have been available since launch. PostgreSQL is now fully supported as a production-ready option, so teams already using Postgres can persist bot state without adding Redis.
import { Chat } from "chat";
import { createSlackAdapter } from "@chat-adapter/slack";
import { createPostgresState } from "@chat-adapter/state-postgres";

const bot = new Chat({
  userName: "mybot",
  adapters: {
    slack: createSlackAdapter(),
  },
  state: createPostgresState(),
});
The PostgreSQL adapter uses pg (node-postgres) with raw SQL and automatically creates the required tables on first connect. It supports TTL-based caching, distributed locking across multiple instances, and namespaced state via a configurable key prefix.
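Configuration might look something like the sketch below. The option names are assumptions for illustration, not confirmed API; they show how a connection string and the configurable key prefix mentioned above could plausibly be wired up:

```typescript
import { createPostgresState } from "@chat-adapter/state-postgres";

// Option names are illustrative assumptions, not confirmed API.
const state = createPostgresState({
  connectionString: process.env.DATABASE_URL, // credentials default to env vars
  keyPrefix: "mybot", // namespaces this bot's keys within shared tables
});
```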
WhatsApp, and beyond
Chat SDK now supports WhatsApp, extending the write-once model to one of the largest messaging platforms in the world.
The WhatsApp adapter supports messages, reactions, auto-chunking, read receipts, multi-media downloads (images, voice messages, stickers), and location sharing with Google Maps URLs. Cards render as interactive reply buttons with up to three options, falling back to formatted text where needed.
import { Chat } from "chat";
import { createWhatsAppAdapter } from "@chat-adapter/whatsapp";
import { createRedisState } from "@chat-adapter/state-redis";

const bot = new Chat({
  userName: "mybot",
  adapters: {
    whatsapp: createWhatsAppAdapter(),
  },
  state: createRedisState(),
});

bot.onNewMention(async (thread) => {
  await thread.post("Hello from WhatsApp!");
});
Note that WhatsApp enforces a 24-hour messaging window, so bots can only respond within that period. The adapter does not support message history, editing, or deletion.
Getting started
To augment your coding agents, install the Chat skill:
npx skills add vercel/chat
This gives your agent access to Chat SDK’s documentation, patterns, and best practices so it can help you build bots against the SDK.
You can also use and modify this starter prompt:
Migrate this agent to the Vercel Chat SDK, consolidating all platform-specific logic (Slack, Discord, GitHub, etc.) into a single unified implementation where core behavior is defined once and adapters handle platform differences. Remove duplicated integration logic and refactor to a clean “write once, deploy everywhere” architecture using Chat SDK as the abstraction layer. Use best practices from: npx skills add vercel/chat.
The Chat SDK documentation covers getting started, platform adapter setup, state configuration, and guides for common patterns including a Slack bot with Next.js and Redis, a Discord support bot with Nuxt, and a GitHub code review bot with Hono.
Chat SDK is open source and in public beta. The agents your team has been building don’t have to live on just one platform. They can go wherever your users actually are.