One Claude API call, seven connected data sources, and a dynamically built system prompt. Here's the full picture.
Instead of a static system prompt, this bot queries your actual live tools — Coda, Google Drive, HubSpot, Notion, Asana — at request time via MCP (Model Context Protocol) servers. Claude reads the fresh data and synthesizes an answer grounded in what's actually true today.
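A minimal sketch of that request-time wiring, assuming Anthropic's MCP connector beta (the `anthropic-beta` header value is my assumption of the current beta flag). The server URLs and names below are hypothetical placeholders, not the real endpoints:

```javascript
// Build a Messages API request body that attaches MCP servers.
// URLs/names here are illustrative; swap in your real MCP endpoints.
function buildMcpRequest(question, history) {
  return {
    model: "claude-3-5-sonnet-20241022", // example model id
    max_tokens: 1024,
    messages: [...history, { role: "user", content: question }],
    mcp_servers: [
      { type: "url", url: "https://example.com/mcp/coda", name: "coda" },
      { type: "url", url: "https://example.com/mcp/hubspot", name: "hubspot" },
      // ...one entry per connected source (Drive, Notion, Asana, ...)
    ],
  };
}
```

POST that body to `https://api.anthropic.com/v1/messages` with the extra header `anthropic-beta: mcp-client-2025-04-04`; Claude then calls the servers' tools itself and answers from the fresh results.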
We pre-seeded the system prompt with everything we learned about Curacity from scanning your sources: what VISTA Core and VISTA AI are, how attribution works, your pricing model, your policies. This is the "always true" layer. The MCP tools add the "latest" layer on top of it at runtime.
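The two layers compose like this. The bullet facts below are placeholders standing in for what was actually pre-seeded from Curacity's sources, and the wording is illustrative:

```javascript
// The "always true" layer: stable facts baked into the system prompt.
const ALWAYS_TRUE = [
  "VISTA Core and VISTA AI: <product descriptions from your docs>",
  "Attribution model: <how attribution works>",
  "Pricing and policies: <stable policy summaries>",
];

// Compose the system prompt; the "latest" layer arrives via tools at runtime.
function buildSystemPrompt(facts) {
  return [
    "You are an internal assistant for Curacity.",
    "The background facts below are stable. For anything time-sensitive,",
    "call the connected tools and prefer their results over this text.",
    "",
    "Background:",
    ...facts.map((f) => `- ${f}`),
  ].join("\n");
}
```

The explicit instruction to prefer tool results over the baked-in text is what keeps the static layer from going stale.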
When someone asks about hotel clients, Claude searches HubSpot. FAQs → Coda. Policies → Google Drive. Active sprint → Coda Product hub. Projects → Asana. Claude reasons about the question and picks the right source. It can call multiple sources for complex questions and synthesize them together.
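That routing isn't hand-coded: Claude picks a source from the tool descriptions each MCP server exposes. A sketch of what those descriptions might look like (names and wording are illustrative, not the production definitions):

```javascript
// Hypothetical tool descriptions that drive Claude's source selection.
const toolHints = [
  { name: "hubspot_search", description: "CRM data: hotel clients, deals, contacts." },
  { name: "coda_faq_search", description: "Company FAQs." },
  { name: "gdrive_search", description: "Policy documents and internal files." },
  { name: "coda_product_hub", description: "Active sprint and product planning." },
  { name: "asana_search", description: "Projects and task status." },
];
```

The sharper and less overlapping these descriptions are, the more reliably Claude routes a question to the right source, or fans out to several for a complex one.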
The Claude API is stateless: it keeps no memory between calls. We maintain a history array in the browser and send the full thread with each call, so follow-up questions work naturally: "tell me more about that" or "what's our policy on the second point you mentioned" all resolve correctly.
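The browser-side thread can be sketched in a few lines (function names are mine, for illustration):

```javascript
// The thread lives in the page for the session.
const history = [];

// Payload for the next API call: the whole thread plus the new question.
function nextMessages(question) {
  return [...history, { role: "user", content: question }];
}

// After Claude replies, record both sides so follow-ups resolve.
function recordExchange(question, answer) {
  history.push({ role: "user", content: question });
  history.push({ role: "assistant", content: answer });
}
```

Sending the full thread is what lets "the second point you mentioned" resolve: Claude sees its own earlier answer in the messages array.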
This demo calls the Anthropic API directly from the browser. In production, route the call through a backend endpoint (Node.js, Python, etc.) so your API key stays server-side and you can add auth, rate limits, and logging.