Introducing Charcoal
We’re excited to announce that Charcoal has raised $2.5M in pre-seed funding, led by Mischief, with participation from Weekend Fund, First Harmonic, and notable angels across the industry.
Retrieval is broken
If you’re building AI agents that work over internal data, you’ve likely hit the same wall. Today’s retrieval options force you into a lose-lose tradeoff:
- One-shot RAG: a single embedding search + reranker. Fast and cheap, but too shallow to give your agent the context it needs to reason well.
- Agentic search: more accurate, but painfully slow and expensive. And it’s fragile: circular reasoning, bad query planning, unpredictable latency, and unreliable tool-calling make it hard to ship with any real sense of confidence.
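To make the first option concrete, here is a toy sketch of a one-shot RAG pipeline. The hash-free bag-of-words "embedding" and trivial rerank are stand-ins for a real encoder and reranker; only the pipeline shape matters: one search, one rerank, and no second chance to recover context the first query missed.

```python
# Toy one-shot RAG: a single embedding search plus a rerank pass.
# The bag-of-words "embedding" is illustrative; real systems use a
# neural encoder, but the single-shot shape is the same.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def one_shot_rag(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    # One embedding search: score every document against the query once...
    scored = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    # ...then return the top-k (a real reranker would reorder these).
    # There is no follow-up query: anything this pass misses stays missed.
    return scored[:k]

docs = [
    "Q3 revenue grew 12% year over year",
    "The onboarding checklist lives in Notion",
    "Revenue recognition policy for multi-year deals",
]
print(one_shot_rag("how did revenue grow last quarter?", docs))
```

A query that needs evidence spread across several documents, or phrased differently than the query, falls through exactly this gap.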
Neither option scales. You get stuck at demos and end up spending countless engineering cycles patching retrieval instead of building your product.

We built Charcoal because we were tired of the same tradeoff. We wanted to ship features, not endlessly tune our search pipeline.
What we’re building
Charcoal is the retrieval API purpose-built for your agent’s specific use case. We handle data ingestion, train a retrieval model tailored to your domain using reinforcement learning (RL), and expose it through a single API. From your perspective, you just upload your documents and get world-class search.
What that looks like in practice:
- Objective-driven retrieval: your agent states what it’s trying to do (in natural language) instead of guessing search keywords.
- High-signal excerpts: we return rich but compact context designed for the next reasoning step.
- A single retrieval call works out of the box, even for complex, multi-document lookups.
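As a sketch of what an objective-driven call might look like: the endpoint path, field names, and response shape below are illustrative assumptions, not Charcoal's published API. The point is the interface shape: the agent states its objective in natural language, and one call returns excerpts ready for the next reasoning step.

```python
# Hypothetical shape of a single objective-driven retrieval call.
# Field names ("objective", "top_k") are assumptions for illustration.
import json

def build_retrieval_request(objective: str, top_k: int = 8) -> dict:
    # The agent describes what it is trying to do, rather than
    # guessing search keywords; the service plans the search.
    return {
        "objective": objective,  # natural-language statement of intent
        "top_k": top_k,          # number of excerpts to return
    }

payload = build_retrieval_request(
    "Find the termination clauses across all vendor contracts signed in 2023"
)
print(json.dumps(payload, indent=2))
```

In this sketch the request body would be POSTed to the retrieval endpoint, and the response would carry compact, high-signal excerpts rather than raw document chunks.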
Our retrieval API scales seamlessly from single-hop searches to deep multi-step research. And the feedback from our customers is clear: when retrieval just works, the bottleneck shifts from “what can my agent do?” to “how can I give my agent more work?”
Ready to stop patching retrieval and start shipping? Come talk to us.
- Robert and Hen