
Your company, all in one place

Ask me anything about Curacity — products, processes, pricing, clients, team, or policies. I pull from all connected sources in real time.

Coda FAQs & Docs
Google Drive
HubSpot CRM
Slack
Notion
Asana
Gmail
Google Calendar

How Curacity Brain works

One Claude API call, seven connected data sources, and a dynamically built system prompt. Here's the full picture.

👤 Employee asks a question → 🔄 MCP servers pull live context → 📝 System prompt assembled dynamically → 🧠 Claude (claude-sonnet-4) answers
01

MCP servers fetch live context before every response

Instead of a static system prompt, this bot queries your actual live tools — Coda, Google Drive, HubSpot, Notion, Asana — at request time via MCP (Model Context Protocol) servers. Claude reads the fresh data and synthesizes an answer grounded in what's actually true today.

// MCP servers passed to each API call
mcp_servers: [
  { type: "url", url: "https://coda.io/apis/mcp", name: "coda" },
  { type: "url", url: "https://mcp.notion.com/mcp", name: "notion" },
  { type: "url", url: "https://mcp.hubspot.com/anthropic", name: "hubspot" },
  { type: "url", url: "https://mcp.asana.com/v2/mcp", name: "asana" },
  { type: "url", url: "https://gcal.mcp.claude.com/mcp", name: "gcal" }
]
02

The system prompt is your company's identity, hard-coded once

We pre-seeded the system prompt with everything we learned about Curacity from scanning your sources: what VISTA Core and VISTA AI are, how attribution works, your pricing model, your policies. This is the "always true" layer. The MCP tools add the "latest" layer on top of it at runtime.
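The "always true" layer can be as simple as a hard-coded constant. A minimal sketch — the facts below are illustrative placeholders, not Curacity's actual prompt:

```javascript
// Hypothetical sketch of the static identity layer. The real prompt
// was seeded from the scanned sources; these lines are placeholders.
const CURACITY_SYSTEM_PROMPT = [
  "You are Curacity Brain, the internal assistant for Curacity.",
  "Core products: VISTA Core and VISTA AI.",
  "For anything time-sensitive (pricing, sprint status, client data),",
  "prefer the connected MCP tools over this static context.",
].join("\n");
```

This constant is passed as `system` on every call; the MCP tools layer today's data on top of it at runtime.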

03

Claude decides which tool to call based on the question

When someone asks about hotel clients, Claude searches HubSpot. FAQs → Coda. Policies → Google Drive. Active sprint → Coda Product hub. Projects → Asana. Claude reasons about the question and picks the right source. It can call multiple sources for complex questions and synthesize them together.

// The API call structure
const res = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  body: JSON.stringify({
    model: "claude-sonnet-4-20250514",
    max_tokens: 2000,
    system: CURACITY_SYSTEM_PROMPT,
    messages: conversationHistory,
    mcp_servers: MCP_SERVERS // ← this is the magic
  })
});
04

Conversation history = memory across turns

Claude has no persistent memory by default. We maintain a history array in the browser and send the full thread with each call, so follow-up questions work naturally: "tell me more about that" or "what's our policy on the second point you mentioned" all resolve correctly.
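A minimal sketch of that memory layer (the helper name is illustrative):

```javascript
// Browser-side memory: the full thread is resent with every API call,
// so follow-ups like "tell me more about that" resolve against it.
const conversationHistory = [];

function recordTurn(userText, assistantText) {
  conversationHistory.push({ role: "user", content: userText });
  conversationHistory.push({ role: "assistant", content: assistantText });
}

recordTurn("What is VISTA Core?", "VISTA Core is our ...");
recordTurn("Tell me more about that.", "Expanding on VISTA Core ...");
// conversationHistory now holds four messages, oldest first —
// exactly the `messages` array sent with the next request.
```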

05

For production: proxy the API call through your backend

This demo calls the Anthropic API directly from the browser. In production, route the call through a backend endpoint (Node.js, Python, etc.) so your API key stays server-side and you can add auth, rate limits, and logging.

💡 Recommended: add these sources next

  • Slack — surface institutional knowledge from channel conversations and thread answers
  • Confluence — if any teams keep SOPs or runbooks there instead of Coda
  • PandaDocs — contract templates are referenced in your Drive FAQs but not yet connected
  • Looker / BI dashboards — anomaly detection reports already run there per your Coda onboarding docs
  • A dedicated "Brain" doc in Coda — one curated, structured doc that the team actively maintains as the canonical source of truth; the bot can prioritize it over scattered docs
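Wiring in a new source is one more entry in the `MCP_SERVERS` array. The Slack URL below is a placeholder, not a real endpoint — substitute whatever your Slack MCP server actually exposes:

```javascript
// Sketch of extending the server list. Existing entries mirror the demo;
// the "slack" entry uses a placeholder URL and is purely illustrative.
const MCP_SERVERS = [
  { type: "url", url: "https://coda.io/apis/mcp", name: "coda" },
  { type: "url", url: "https://mcp.notion.com/mcp", name: "notion" },
  { type: "url", url: "https://mcp.hubspot.com/anthropic", name: "hubspot" },
  { type: "url", url: "https://mcp.asana.com/v2/mcp", name: "asana" },
  { type: "url", url: "https://slack.example.com/mcp", name: "slack" }, // placeholder
];
```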