Build an Internal AI Knowledge Assistant Playbook

A practical playbook for building private AI knowledge assistants, inspired by public cases from Morgan Stanley, Moderna, Zapier, and GitHub's enterprise AI research.
May 14, 2026

Internal knowledge is often the hidden bottleneck inside a business. The answer exists, but it lives in a PDF, a sales deck, a Slack thread, a meeting note, an old onboarding document, or the memory of one experienced employee.

Public AI cases from Morgan Stanley, Moderna, Zapier, and GitHub's enterprise research show a repeated pattern: successful internal AI is not just a chatbot. It is a managed knowledge workflow with source material, evaluation, adoption support, and human review.

This playbook shows how to design a smaller version for a team, agency, consulting practice, school, or professional service firm.

What this assistant should do

An internal knowledge assistant should help employees:

  • find approved internal information
  • summarize long documents
  • compare policies or procedures
  • draft answers based on source material
  • create onboarding explanations
  • prepare meeting notes or follow-ups
  • identify missing documentation

It should not become an unchecked decision maker. It should support people who remain responsible for the final answer.

Step 1: Pick one knowledge domain

Do not start with "all company knowledge." Start with one business-critical domain.

Good first domains:

| Domain | Example users | Useful questions |
| --- | --- | --- |
| Sales enablement | sales team, founder | "How do we explain pricing to a healthcare client?" |
| Client delivery | agency team | "What is our standard onboarding process?" |
| Internal policy | HR, operations | "What is the travel reimbursement rule?" |
| Product support | support, success | "How should we troubleshoot this issue?" |
| Training library | teachers, coaches | "Which lesson explains this concept?" |
| Research archive | analysts, writers | "Which sources support this claim?" |

Morgan Stanley's public case is useful here because it shows the value of high-quality internal retrieval for professionals who need fast, trusted answers.

Step 2: Inventory the source material

Create a source map before building anything.

| Source type | Include? | Notes |
| --- | --- | --- |
| Final policy documents | Yes | Prioritize approved and current material |
| SOPs and checklists | Yes | Strong fit for operational assistants |
| Sales decks | Yes | Useful if messaging is consistent |
| Customer call transcripts | Maybe | Remove sensitive data first |
| Slack or chat exports | Usually no for v1 | Too noisy unless cleaned |
| Draft documents | No for v1 | Can create conflicting answers |
| Old policies | No | Archive separately |

The first version should be smaller and cleaner than the real company archive.
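One lightweight way to keep the source map usable is to make it machine-readable from the start. A minimal Python sketch (the field names and example documents are illustrative, not a required schema):

```python
# Minimal source map: each record notes the document, its type, and
# whether it belongs in the first version of the assistant.
SOURCE_MAP = [
    {"title": "Refund Policy v3", "type": "policy", "include": True,
     "note": "approved and current"},
    {"title": "Client Onboarding SOP", "type": "sop", "include": True,
     "note": "strong fit for operational answers"},
    {"title": "Q2 Sales Deck", "type": "deck", "include": True,
     "note": "messaging is consistent"},
    {"title": "Support Slack export", "type": "chat", "include": False,
     "note": "too noisy for v1"},
    {"title": "Pricing draft (WIP)", "type": "draft", "include": False,
     "note": "could conflict with approved pricing"},
]

def v1_corpus(source_map):
    """Return only the documents cleared for the first version."""
    return [doc["title"] for doc in source_map if doc["include"]]
```

Keeping the exclusions in the same file as the inclusions matters: the "no for v1" decisions are knowledge too, and they prevent someone from quietly re-adding a noisy source later.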

Step 3: Create a source quality score

Score every document from 1 to 5.

| Score | Meaning | Use |
| --- | --- | --- |
| 5 | Current, approved, complete | Include first |
| 4 | Current but needs minor cleanup | Include after editing |
| 3 | Useful but incomplete | Keep for reference, not final answers |
| 2 | Old or conflicting | Do not include |
| 1 | Unknown origin | Do not include |

This prevents the assistant from blending old, draft, and approved material into one confident answer.
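The score-to-action mapping above can be applied mechanically. A small Python sketch (the bucket names are illustrative):

```python
# Map each 1-5 quality score to how the document may be used.
SCORE_POLICY = {
    5: "include",             # current, approved, complete
    4: "include_after_edit",  # current but needs minor cleanup
    3: "reference_only",      # useful but incomplete
    2: "exclude",             # old or conflicting
    1: "exclude",             # unknown origin
}

def triage(docs):
    """Split (title, score) pairs into include / reference / exclude buckets."""
    buckets = {"include": [], "reference_only": [], "exclude": []}
    for title, score in docs:
        action = SCORE_POLICY[score]
        key = "include" if action.startswith("include") else action
        buckets[key].append(title)
    return buckets
```

Running the triage once per week keeps the corpus honest as documents age from score 5 toward score 2.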

Step 4: Define answer rules

The assistant needs clear operating rules.

Recommended rules:

  • answer only from approved source material
  • cite or name the source document when possible
  • say when the source is missing or unclear
  • ask a clarifying question if the user's question is broad
  • do not invent policies, prices, legal terms, or commitments
  • produce drafts for review, not final external statements
  • escalate sensitive topics to the document owner

For regulated, legal, financial, medical, or compliance topics, keep review mandatory.
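The rules above work best when they are written down once and sent with every request, rather than restated ad hoc. A hedged Python sketch using the common system/user message shape (the wording, names, and payload format are illustrative and should be adapted to whatever model API you use):

```python
ANSWER_RULES = """\
You are an internal knowledge assistant.
- Answer only from the approved source documents provided.
- Name the source document for every claim when possible.
- If the sources are missing or unclear, say so instead of guessing.
- Ask one clarifying question if the request is too broad.
- Never invent policies, prices, legal terms, or commitments.
- Produce drafts for human review, not final external statements.
- For legal, financial, medical, or compliance topics, route the
  answer through the document owner before it is used.
"""

def build_messages(question, sources):
    """Assemble a chat payload: rules as system message, sources inline."""
    context = "\n\n".join(f"[{s['title']}]\n{s['text']}" for s in sources)
    return [
        {"role": "system", "content": ANSWER_RULES},
        {"role": "user",
         "content": f"Sources:\n{context}\n\nQuestion: {question}"},
    ]
```

Keeping the rules in one constant also gives you a single place to tighten them when the evaluation set (Step 5) surfaces a failure mode.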

Step 5: Build an evaluation set

Morgan Stanley's case highlights a crucial point: internal AI needs evaluation. A small team can create a lightweight version.

Build 30-50 test questions:

  • 10 easy factual questions
  • 10 comparison questions
  • 10 summary questions
  • 5 edge cases
  • 5 questions the assistant should refuse or escalate
  • 5 questions with missing source material

Example evaluation table:

| Question | Expected answer | Source | Pass criteria |
| --- | --- | --- | --- |
| What is our onboarding timeline? | 14-day onboarding steps | Client Onboarding SOP | Includes all 4 phases |
| Can we promise a 2-week SEO result? | No | Earnings and delivery policy | Avoids guarantee |
| What changed in the refund rule? | Compares old and current policy | Refund policy v3 | Uses current version only |

The evaluation set is your quality control system.
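A lightweight evaluation does not need a framework. A Python sketch of a keyword-based pass check (`ask` stands in for whatever function calls your assistant; the cases and keywords are illustrative):

```python
# Each test case pairs a question with simple keyword checks on the answer.
EVAL_SET = [
    {"question": "What is our onboarding timeline?",
     "must_include": ["phase 1", "phase 2", "phase 3", "phase 4"]},
    {"question": "Can we promise a 2-week SEO result?",
     "must_include": ["no"], "must_exclude": ["guarantee"]},
]

def run_eval(ask, eval_set):
    """Return (passed_count, failed_questions) for a keyword-based check."""
    failures = []
    for case in eval_set:
        answer = ask(case["question"]).lower()
        ok = all(kw in answer for kw in case.get("must_include", []))
        ok = ok and not any(kw in answer for kw in case.get("must_exclude", []))
        if not ok:
            failures.append(case["question"])
    return len(eval_set) - len(failures), failures
```

Keyword checks are crude, but they are cheap to write, rerun in seconds after every source or prompt change, and catch regressions a human reviewer would miss on a busy day.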

Step 6: Design the user workflow

A useful assistant appears in the place where people work.

Choose one access point:

  • internal chat tool
  • private web page
  • help desk sidebar
  • Notion or knowledge base page
  • shared workspace
  • CRM note helper

Then define three standard outputs:

| Output | Use |
| --- | --- |
| Short answer | Quick internal clarification |
| Source-backed summary | Longer answer with document references |
| Draft response | Message a human can edit before sending |

Do not overload the first version with too many modes.

Step 7: Add privacy and access rules

Before launch, define:

  • who can access the assistant
  • which documents are allowed
  • which documents are excluded
  • whether customer data can be used
  • how outputs should be reviewed
  • who owns source updates
  • how logs are handled

This is especially important for client data, employee data, legal material, and financial information.
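Access rules are easiest to audit when they live in one place as data. A minimal Python sketch (the role names, collection names, and exclusion list are illustrative):

```python
# Illustrative access policy: which roles may query which collections.
ACCESS_POLICY = {
    "sales":      {"sales_enablement", "product_support"},
    "operations": {"internal_policy", "sales_enablement"},
    "support":    {"product_support"},
}

# Collections no role may ever search through the assistant.
EXCLUDED_ALWAYS = {"customer_pii", "legal_drafts"}

def allowed_collections(role):
    """Collections a role may search, minus the globally excluded sets."""
    return ACCESS_POLICY.get(role, set()) - EXCLUDED_ALWAYS
```

The default-deny behavior matters: an unknown role gets an empty set, not everything, and the global exclusion list wins even if a collection is accidentally granted to a role.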

Step 8: Train people with real examples

Moderna and Zapier both show that adoption is an operating habit, not just a software rollout.

Run a 45-minute enablement session:

  1. show three good questions
  2. show two bad questions
  3. show how to verify a source
  4. show when to escalate
  5. ask each team member to bring one real workflow
  6. collect the best examples into a shared prompt library

The goal is not to make everyone an AI expert. The goal is to make the assistant part of the team's normal work.

Step 9: Measure practical adoption

Track simple signals:

  • weekly active users
  • questions asked
  • answers copied or used
  • edits required
  • failed or escalated questions
  • missing document topics
  • time saved in repeated workflows
  • user-reported confidence

GitHub's research with Accenture is useful because it looked at real work signals, not only subjective excitement. For a small internal assistant, the same principle applies: measure whether the tool changes work behavior.
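The signals above can be aggregated from a plain usage log. A Python sketch (the event labels are illustrative; use whatever your access point actually records):

```python
from collections import Counter

# Each log entry is (user, event); events are illustrative labels such as
# "asked", "copied_answer", "edited_answer", "escalated", "no_answer".
def weekly_signals(log):
    """Summarize simple adoption signals from one week of usage logs."""
    events = Counter(event for _, event in log)
    return {
        "active_users": len({user for user, _ in log}),
        "questions": events["asked"],
        "answers_used": events["copied_answer"],
        "edits": events["edited_answer"],
        "escalations": events["escalated"],
        "gaps": events["no_answer"],
    }
```

A falling `answers_used` count or a rising `edits` count is an early warning that the source base, not the model, needs attention.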

Step 10: Improve the source base every week

The assistant will reveal documentation gaps.

Every week, review:

  • questions with no answer
  • questions with low-confidence answers
  • documents that conflict
  • policies people still ask about repeatedly
  • source files that need rewriting
  • prompts that produce better outputs
  • workflows that should become templates

The assistant is not only a search tool. It becomes a diagnostic system for the company's knowledge quality.
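The weekly review can start from the unanswered questions themselves. A Python sketch that counts recurring topics among them (the topic labels and keyword-matching rule are illustrative, not a prescribed taxonomy):

```python
from collections import Counter

def gap_report(failed_questions, topic_keywords):
    """Count which topics recur among unanswered questions.

    topic_keywords maps a topic label to words that signal it.
    """
    counts = Counter()
    for question in failed_questions:
        q_lower = question.lower()
        for topic, keywords in topic_keywords.items():
            if any(kw in q_lower for kw in keywords):
                counts[topic] += 1
    return counts.most_common()
```

The topics at the top of the report are the documents to write or rewrite first; the report turns "we should improve our docs" into a ranked to-do list.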

A consulting offer you can build from this

Package this as a "private AI knowledge assistant sprint."

| Phase | Deliverable |
| --- | --- |
| Discovery | Select one knowledge domain and user group |
| Source audit | Clean document map and source quality score |
| Prototype | Assistant with one access point and three output modes |
| Evaluation | 30-50 test questions and pass criteria |
| Training | Team session and prompt examples |
| Governance | Access rules, source owner, review policy |
| Follow-up | 30-day improvement report |

This is a stronger offer than "I will set up a chatbot." It solves a known business problem: people cannot find or reuse the knowledge the business already has.

