Kirha planning

Private Search infrastructure

Every private data provider we integrate is exposed as an MCP server. Conceptually, we’re building a human knowledge tree, where “verticals” (e.g., crypto, insurance) appear as branches and MCP servers are the leaves. We’ve developed a custom MCP SDK to specify prices and typed outputs for every tool. With TypeScript and clear guidelines, 75% of our MCP servers are now generated by coding AI agents from API documentation.
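To make this concrete, here is a minimal sketch of what a priced, typed tool definition could look like. The `ToolDefinition` shape, the `priceUsd` field, and the example tool are assumptions for illustration, not the actual Kirha MCP SDK API.

```typescript
// Hypothetical tool-definition shape: every tool carries a price and
// typed input/output, so a planner can reason about cost and composition.
interface ToolDefinition<In, Out> {
  name: string;
  priceUsd: number; // cost charged per call (assumed field name)
  description: string;
  run: (input: In) => Promise<Out>;
}

// Because the output type is declared, a planner knows this tool yields
// a numeric balance that a downstream tool can consume as input.
interface BalanceQuery { walletAddress: string }
interface BalanceResult { balanceUsd: number }

const getWalletBalance: ToolDefinition<BalanceQuery, BalanceResult> = {
  name: "crypto.getWalletBalance",
  priceUsd: 0.002,
  description: "Returns the USD balance of a wallet address.",
  run: async ({ walletAddress }) => {
    // A real MCP server would call the data provider's API here.
    return { balanceUsd: 1234.56 };
  },
};
```

With prices and schemas declared up front, generating such servers from API documentation becomes a mostly mechanical task, which is what makes the AI-agent generation pipeline viable.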

Expanding this knowledge tree to a usable scale would have been unrealistic otherwise.

But how is this useful to your AI agents? Isn’t finding what they need in this tree like looking for a needle in a haystack?

Knowledge Tree Diagram
Figure: Gource visualizes software projects as trees, with branches as folders and leaves as files. You get the analogy.

Tool planning

We’re continuously fine-tuning our planning model to find the optimal path through our knowledge tree for any given prompt, optimizing for data quality, latency, and cost. Because every MCP tool’s output is typed, the planner can compose tools efficiently (feeding one tool’s output into the next tool’s input) for maximum performance.
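The composition described above can be sketched as a pipeline of typed steps whose outputs feed the next step's input, with per-tool prices summed into a total plan cost. The `PlanStep` shape and the two example tools are illustrative assumptions, not the real planner's data model.

```typescript
// Hypothetical plan representation: an ordered list of tool calls.
interface PlanStep {
  tool: string;
  priceUsd: number;               // known per-tool price from the SDK
  run: (input: unknown) => unknown;
}

// Execute a plan by piping each step's output into the next step,
// accumulating the total cost along the way.
function executePlan(steps: PlanStep[], initialInput: unknown) {
  let totalCostUsd = 0;
  let value = initialInput;
  for (const step of steps) {
    value = step.run(value);       // output of one tool -> input of the next
    totalCostUsd += step.priceUsd;
  }
  return { value, totalCostUsd };
}

// Illustrative two-step plan: resolve a name to an address, then look
// up its balance. Tool names and behavior are made up for the sketch.
const plan: PlanStep[] = [
  { tool: "crypto.resolveName",      priceUsd: 0.001, run: (ens) => `0xabc-${ens}` },
  { tool: "crypto.getWalletBalance", priceUsd: 0.002, run: (addr) => `balance(${addr})` },
];
```

Because each step's price is declared, the total cost of a plan (here, roughly 0.003 USD) is known before anything runs, which is what enables cost-aware path selection.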

Because our planning process is nearly deterministic, we can decouple tool planning from prompt execution. This gives AI agents full control over whether to accept or reject a proposed execution. If an execution is accepted, it uses the same provider(s) and incurs the same costs that the plan specified.
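A minimal sketch of this plan-then-execute split, assuming a plan object that exposes its providers and total cost before anything runs; the function names and budget check are hypothetical, not the actual API.

```typescript
// Hypothetical plan object: providers and cost are fixed at planning time.
interface Plan {
  providers: string[];
  totalCostUsd: number;
  execute: () => Promise<string>;
}

// Near-deterministic planning: the same prompt yields the same providers
// and the same cost, so the quote below is binding for the execution.
function planQuery(prompt: string): Plan {
  return {
    providers: ["provider-a"],
    totalCostUsd: 0.004,
    execute: async () => `result for: ${prompt}`,
  };
}

// The agent inspects the plan first, then accepts or rejects execution.
async function runIfAffordable(prompt: string, budgetUsd: number) {
  const plan = planQuery(prompt);
  if (plan.totalCostUsd > budgetUsd) return null; // reject: too expensive
  return plan.execute();                          // accept: pay quoted cost
}
```

The key property is that rejecting a plan costs nothing: the agent sees providers and price up front and only pays when it commits to execution.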