Give your AI agent searchable cloud storage in one config block. Sprigr connects via the Model Context Protocol so agents can store JSON objects, build indexes, and run full-text or hybrid semantic search queries. No custom code, no vector database, no infrastructure to manage.
Start Free — No Credit Card
Three steps from zero to searchable data inside your AI agent.
Add Sprigr to your AI client’s MCP config. One JSON block with your API key and endpoint. No SDK to install, no server to run.
Your agent pushes JSON objects to Sprigr via MCP tool calls. Define searchable attributes and filterable fields. Data is indexed automatically on write.
Run full-text queries with typo tolerance, field filters, and pagination. Results return in milliseconds from the nearest edge node.
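The store and search steps above boil down to two MCP tool calls. As a sketch, the payloads might look like this — the tool names and argument shapes are illustrative assumptions, not Sprigr's documented schema:

```json
[
  {
    "tool": "sprigr_add_record",
    "arguments": {
      "index": "meeting-notes",
      "record": {
        "title": "Q1 Planning",
        "content": "Agreed to launch MCP integration by March.",
        "date": "2025-01-15"
      }
    }
  },
  {
    "tool": "sprigr_search",
    "arguments": {
      "index": "meeting-notes",
      "query": "MCP integration",
      "filter": "date >= \"2025-01-01\"",
      "limit": 10
    }
  }
]
```

Your agent composes calls like these on its own; you describe what you want in plain language.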
Most AI agents lose context between sessions. Bolting on storage usually means writing glue code or managing infrastructure.
Every feature is designed for how AI agents actually work with data.
No provisioning, no database setup, no migrations. Sign up, get an API key, paste the MCP config. Your agent can start storing and searching data within seconds.
Deterministic keyword search with typo tolerance, prefix matching, and field-level boosting. Enable semantic_search on any index to add AI-powered vector search. Results are merged via Reciprocal Rank Fusion for the best of both worlds.
Each API key scopes access to its own data, with optional index-level ACLs to restrict keys to specific indexes. Run separate indexes for different projects, clients, or environments, all from one account. No cross-contamination, no shared state.
No per-query fees, no per-token surcharges, no surprise bills when your agent runs a hundred searches in one session. One monthly price based on record count.
The same data is accessible via both MCP tool calls and a standard REST API. Use MCP for agent workflows, REST for dashboards, scripts, or non-MCP integrations.
Sprigr runs across 300+ edge locations globally. The backend is written in Rust and compiled to native code, so performance stays consistent wherever your agent runs.
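As a sketch of the REST side of that dual access, a search request might look like the following. The hostname, path, and request body keys here are assumptions for illustration, not documented endpoints:

```shell
# Hypothetical endpoint and payload shape -- check Sprigr's API
# reference for the real paths before using this.
curl -s "https://api.sprigr.example/v1/indexes/meeting-notes/search" \
  -H "Authorization: Bearer $SPRIGR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "MCP integration", "limit": 10}'
```

The same API key authorizes both MCP tool calls and REST requests, so a dashboard and an agent can read the same index.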
Add Sprigr to your AI client’s MCP configuration file. Here is the claude_desktop_config.json entry:
{
  "mcpServers": {
    "sprigr": {
      "command": "npx",
      "args": ["-y", "@sprigr/mcp-server"],
      "env": {
        "SPRIGR_API_KEY": "your-api-key"
      }
    }
  }
}
Once connected, your AI agent can create indexes, push records, and search, all through natural conversation. Here is an example session:
You: Create a Sprigr index called "meeting-notes" with
searchable attributes title and content, and a
filterable attribute date.
Agent: Done. Index "meeting-notes" created with 2 searchable
attributes and 1 filter.
You: Store this meeting note. Title: "Q1 Planning",
content: "Agreed to launch MCP integration by March.
Budget approved for two new hires.", date: "2025-01-15"
Agent: Record stored in "meeting-notes" (id: rec_a1b2c3).
You: Search meeting notes for "MCP integration"
Agent: Found 1 result:
• "Q1 Planning" (2025-01-15): "...launch
MCP integration by March..."
No code written. No database provisioned. The agent handles everything through MCP tool calls, and Sprigr handles storage, indexing, and search on the backend.
Common patterns from teams using Sprigr as their agent’s memory layer.
Store documentation, SOPs, and reference material. The agent searches its own knowledge base to answer questions accurately instead of guessing from training data.
Persist key facts and decisions across sessions. Search past conversations by topic, date, or participant instead of re-reading entire transcripts.
Multiple agents share a Sprigr index as a coordination layer. One agent writes research findings; another searches them to draft reports. Same API key, same data.
MCP (Model Context Protocol) is an open standard created by Anthropic that lets AI assistants connect to external tools and data sources through a uniform interface. Instead of writing a custom API integration for every service, an AI agent can use MCP tool calls to interact with any MCP-compatible server. Sprigr exposes search, indexing, and storage as MCP tools, so your agent gets persistent, searchable memory without any custom code.
No external embedding infrastructure is needed. Sprigr provides full-text search with typo tolerance, prefix matching, and field-level filtering out of the box. For indexes where you want semantic understanding, enable semantic_search and Sprigr generates and stores embeddings automatically. No separate embedding API, no vector database to manage. Results are merged via Reciprocal Rank Fusion for the best of keyword and semantic matching.
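Reciprocal Rank Fusion itself is simple enough to sketch: each document's fused score is the sum of 1/(k + rank) across the keyword and semantic result lists, so documents that rank well in both float to the top. A minimal version follows — the constant k = 60 comes from the original RRF paper; whether Sprigr uses the same constant internally is an assumption here:

```typescript
// Merge two ranked lists of document ids with Reciprocal Rank
// Fusion: score(id) = sum over lists of 1 / (k + rank), with
// 1-based ranks. A higher fused score ranks first.
function rrfMerge(
  keywordIds: string[],
  semanticIds: string[],
  k = 60,
): string[] {
  const scores = new Map<string, number>();
  for (const list of [keywordIds, semanticIds]) {
    list.forEach((id, i) => {
      // i is 0-based, so rank = i + 1.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// "a" ranks high in both lists, so it beats "c", which tops only
// the semantic list.
rrfMerge(["a", "b", "c"], ["c", "a", "d"]); // → ["a", "c", "b", "d"]
```

Because a document must place well in only one list to surface at all, hybrid results stay robust when either the keyword or the semantic retriever misses.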
Records are stored in a distributed database running at the edge. Each record is a JSON object with attributes you define when creating the index. Searchable attributes are full-text indexed; filterable attributes support exact-match and range queries. All data is scoped to your API key, ensuring tenant isolation.
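As an illustration of that record model, here is how the index and record from the session example above might be described — the exact schema keys are assumptions, not Sprigr's documented format:

```json
{
  "index": "meeting-notes",
  "searchableAttributes": ["title", "content"],
  "filterableAttributes": ["date"],
  "records": [
    {
      "id": "rec_a1b2c3",
      "title": "Q1 Planning",
      "content": "Agreed to launch MCP integration by March. Budget approved for two new hires.",
      "date": "2025-01-15"
    }
  ]
}
```

Here title and content would be full-text indexed, while date would support exact-match and range filters such as date >= "2025-01-01".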
Any client that supports the Model Context Protocol. This includes Claude Desktop, Claude Code (Anthropic’s CLI), Cursor, Windsurf, Cline, and custom agents built with the MCP SDK in TypeScript or Python. For clients that do not support MCP, Sprigr also offers a standard REST API with identical functionality.
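For a custom agent, connecting over MCP from the TypeScript SDK might look like the sketch below. The import paths follow @modelcontextprotocol/sdk conventions, and the Sprigr tool name and argument shape are assumptions for illustration — list the server's tools first to see what it actually exposes:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the Sprigr MCP server as a child process, the same way
// the claude_desktop_config.json entry does.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@sprigr/mcp-server"],
  env: { SPRIGR_API_KEY: "your-api-key" },
});

const client = new Client({ name: "my-agent", version: "1.0.0" });
await client.connect(transport);

// Discover the tools Sprigr exposes, then call one. The tool
// name and arguments below are hypothetical.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "search",
  arguments: { index: "meeting-notes", query: "MCP integration" },
});
```

Non-MCP clients can skip the SDK entirely and use the REST API instead.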
Yes. The free plan supports up to 1,000 records and unlimited search queries. It includes full MCP and REST API access with no time limit. Paid plans start at $49 per month for higher record limits, additional indexes, and priority support.
Free for up to 1,000 records. No credit card required.
Start Free — No Credit Card