
Why Your AI Should Know Your Whole Knowledge Base, Not Just One Article

April 28, 2026 · 5 min read

[Image: AI agents grounded in the whole knowledge base]

Most AI writing tools work article-by-article. You open a doc, the AI reads the doc, the AI gives you advice about the doc. Helpful, but limited. Ask it "is this section organized well?" and you'll get a generic answer about heading structure, not an answer that takes into account the other 200 articles in your knowledge base.

That's a missed opportunity. Documentation isn't a collection of independent articles. It's a network. The right place for a new section depends on what's already there. The right keywords depend on what other articles already rank for. Whether an FAQ should be written depends on whether one exists.

An AI that doesn't know your KB can't answer those questions. So we changed how Ved AI works: every Specialist now sees your full knowledge base on every turn.

What "knowing the KB" actually means

There are three layers of awareness:

Layer 1: The current article (what most AI tools have)

The AI sees the article you're editing. Title, content, category. Useful for "polish this paragraph" or "shorten this section." This is table stakes.

Layer 2: The taxonomy (the shape of the KB)

The AI sees every category in your KB and every article title. Drafts included, with a [draft] tag. Glossary terms too.

This is cheap to include: a typical KB's category tree plus 80 article titles is ~3 KB of text. But it changes what the AI can answer.
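As a rough illustration of why this layer is so cheap, here is a minimal sketch of how a taxonomy block like this could be rendered as plain text. The data shape and the `build_taxonomy_block` helper are hypothetical, not FinalDoc's actual API:

```python
# Sketch: render the KB's shape as a compact plain-text block.
# One line per category, indented article titles beneath, drafts tagged [draft].
# The dict shape and helper name are illustrative assumptions.

def build_taxonomy_block(categories):
    lines = []
    for cat in categories:
        lines.append(cat["name"])
        for art in cat["articles"]:
            tag = " [draft]" if art.get("draft") else ""
            lines.append(f"  - {art['title']}{tag}")
    return "\n".join(lines)

kb = [
    {"name": "Getting Started", "articles": [
        {"title": "Create your first doc"},
        {"title": "Invite your team", "draft": True},
    ]},
]
block = build_taxonomy_block(kb)
```

Even at a few hundred articles, a block like this stays in the low single-digit kilobytes, which is why it can ride along on every turn.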

Layer 3: Semantic content search (the deep dive)

The AI runs a vector search over your published articles for content semantically related to what the user just asked. The top results, typically 3-5 article excerpts, are added as context.
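The retrieval step can be sketched as a top-k similarity search. Since the real system uses an embedding model, the `embed()` below is a stand-in (a toy bag-of-words vector) so the example runs on its own; everything here is illustrative, not FinalDoc's implementation:

```python
# Sketch: retrieve the top-k article excerpts most similar to the user's query.
# embed() is a toy word-count vectorizer standing in for a real embedding call.
import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k_excerpts(query, articles, k=3):
    q = embed(query)
    scored = [(cosine(q, embed(a["excerpt"])), a) for a in articles]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [a for score, a in scored[:k] if score > 0]

articles = [
    {"title": "API rate limits", "excerpt": "how rate limits work for the API"},
    {"title": "Custom domains", "excerpt": "setting up a custom domain"},
]
hits = top_k_excerpts("what are the API rate limits", articles, k=1)
```

The shape of the result is the point: a short, ranked list of excerpts, not whole articles, so the context stays small while still being specific.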

This layer is what drives the high-leverage answers.

The cost of getting this wrong

Without KB awareness, AI gives advice that sounds right but is wrong in your specific context. Here's what we'd see in the old version:

User: "Should I write an article about API rate limits?"
Old AI: "Yes, API rate limits are an important topic. Here's an outline..."
Reality: There are already three articles covering rate limits in different sections.

User: "Help me restructure my Getting Started section"
Old AI: "Here's a generic Getting Started structure: introduction, prerequisites, first steps..."
Reality: The user already has 14 articles in Getting Started; what they need is a review of those, not a generic template.

The advice isn't wrong in the abstract. It's just not actually about your knowledge base; it's about knowledge bases in general. That's a much weaker signal.

How we made it work

Three implementation choices that mattered:

1. Always include the taxonomy. Never include all article content.

Categories + article titles cost almost nothing in tokens, so we send them on every turn, no question. Article content is much heavier: sending all 200 articles' content would overflow the model's context window. Instead, we use semantic search to retrieve only the 3-5 most relevant article excerpts per turn.
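Putting the two rules together, the per-turn context has a fixed cheap part and a retrieved heavy part. A minimal sketch, with a hypothetical `build_context` helper and illustrative section labels:

```python
# Sketch: assemble per-turn context.
# The taxonomy block is always included; article content arrives only via
# retrieval, capped at a handful of excerpts. Names are illustrative.

def build_context(taxonomy_block, excerpts, max_excerpts=5):
    parts = ["Knowledge base structure:", taxonomy_block, "Relevant articles:"]
    for ex in excerpts[:max_excerpts]:
        parts.append(f"{ex['title']}:\n{ex['excerpt']}")
    return "\n\n".join(parts)

ctx = build_context(
    "Getting Started\n  - Create your first doc",
    [{"title": "API rate limits", "excerpt": "Limits are enforced per API key."}],
)
```

The cap on excerpts is what keeps the prompt bounded regardless of how large the KB grows.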

2. Anchor by text, not by ID

When an agent recommends "see the article 'Setting up custom domains'," we cite by title, not by article ID. This survives renames and lets the agent talk naturally. The user can click through if they want.
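Resolving a cited title back to a concrete article for the click-through can be as simple as a case-insensitive lookup with a loose fallback. The `resolve_citation` helper below is a hypothetical sketch, not FinalDoc's API:

```python
# Sketch: map a title cited in the agent's prose to an article record.
# Exact case-insensitive match first, then a containment fallback.

def resolve_citation(cited_title, articles):
    wanted = cited_title.strip().lower()
    for art in articles:
        if art["title"].lower() == wanted:
            return art
    for art in articles:
        if wanted in art["title"].lower():
            return art
    return None

articles = [{"id": 42, "title": "Setting up custom domains"}]
hit = resolve_citation("setting up custom domains", articles)
```

Because the lookup happens at click time against the live article list, the prose stays natural and nothing hard-codes an internal ID.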

3. Drafts are visible in taxonomy, hidden in content search

Drafts show up in the taxonomy block (with a [draft] tag) so structure decisions can include them. But draft content isn't indexed for semantic search until you publish, because drafts may be wrong, incomplete, or out of date, and we don't want the AI confidently citing half-finished work back at you. The publish step is your editorial sign-off saying "this is referenceable."
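The split is easy to express at index-build time: drafts pass through to the taxonomy, but only published articles are handed to the embedding pipeline. A sketch with an illustrative `status` field and helper name:

```python
# Sketch: drafts appear in the taxonomy but are skipped when building
# the semantic search index. Field and function names are assumptions.

def index_for_search(articles):
    """Return only published articles for embedding/indexing."""
    return [a for a in articles if a.get("status") == "published"]

articles = [
    {"title": "Webhooks", "status": "published"},
    {"title": "SSO setup", "status": "draft"},
]
indexed = index_for_search(articles)
```

One filter, applied in one place, gives you the "visible in structure, invisible in search" behavior described above.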

The bigger pattern

This is part of a broader trend: AI features in software are moving from tool-aware (the AI knows how to write and edit text in general) to workspace-aware (the AI knows your specific workspace, with all its particulars). The former gives you generic intelligence; the latter gives you specific intelligence.

Generic intelligence is impressive but not differentiating; every tool has it. Specific intelligence is what makes a tool actually fit into your work. And the gap is widening.

If your AI writing tool can't tell you what categories you have, what articles already exist, or whether your draft duplicates something you wrote three months ago, it's not really helping you write documentation. It's helping you write text, and there are a lot of tools that can do that.

Where this lives in FinalDoc

Knowledge-base-aware AI shows up in three places in the product.

All three share the same vector index of your published articles, the same embedding model (text-embedding-3-small, 1536 dimensions), and the same authorization layer that scopes every search to the requesting account.
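The authorization piece deserves a concrete shape: the account filter is applied before any similarity scoring, so a query can never surface another tenant's articles. In a real vector store this would be a metadata filter on the query itself; the plain pre-filter below is a self-contained sketch with illustrative names:

```python
# Sketch: account-scoped semantic search.
# Rows from other accounts are excluded before scoring, not after.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scoped_search(query_vec, index, account_id, k=5):
    candidates = [row for row in index if row["account_id"] == account_id]
    candidates.sort(key=lambda r: dot(query_vec, r["vector"]), reverse=True)
    return candidates[:k]

index = [
    {"account_id": "acct_1", "title": "Rate limits", "vector": [1.0, 0.0]},
    {"account_id": "acct_2", "title": "Billing", "vector": [1.0, 0.0]},
]
hits = scoped_search([1.0, 0.0], index, "acct_1")
```

Filtering before scoring matters: a post-hoc filter could silently return fewer than k results, or worse, leak cross-tenant similarity scores into ranking logic.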

If you're already using FinalDoc, the Specialists are live in your AI Writer panel today โ€” just click the Specialists tab. If you're not on FinalDoc yet, you can start a free 15-day trial and bring your existing knowledge base in via the import wizard. The taxonomy + semantic search kick in as soon as your articles are imported and embedded โ€” typically within 30 seconds for a small KB, a few minutes for a large one.

โ† Back to Blog