
Internal knowledge is often the hidden bottleneck inside a business. The answer exists, but it lives in a PDF, a sales deck, a Slack thread, a meeting note, an old onboarding document, or the memory of one experienced employee.
Public AI cases from Morgan Stanley, Moderna, Zapier, and GitHub's enterprise research show a repeated pattern: successful internal AI is not just a chatbot. It is a managed knowledge workflow with source material, evaluation, adoption support, and human review.
This playbook shows how to design a smaller version for a team, agency, consulting practice, school, or professional service firm.
An internal knowledge assistant should help employees find approved answers quickly, trace those answers back to source documents, and draft responses they can edit before sending.
It should not become an unchecked decision maker. It should support people who remain responsible for the final answer.
Do not start with "all company knowledge." Start with one business-critical domain.
Good first domains:
| Domain | Example users | Useful questions |
|---|---|---|
| Sales enablement | sales team, founder | "How do we explain pricing to a healthcare client?" |
| Client delivery | agency team | "What is our standard onboarding process?" |
| Internal policy | HR, operations | "What is the travel reimbursement rule?" |
| Product support | support, success | "How should we troubleshoot this issue?" |
| Training library | teachers, coaches | "Which lesson explains this concept?" |
| Research archive | analysts, writers | "Which sources support this claim?" |
Morgan Stanley's public case is useful here because it shows the value of high-quality internal retrieval for professionals who need fast, trusted answers.
Create a source map before building anything.
| Source type | Include? | Notes |
|---|---|---|
| Final policy documents | Yes | Prioritize approved and current material |
| SOPs and checklists | Yes | Strong fit for operational assistants |
| Sales decks | Yes | Useful if messaging is consistent |
| Customer call transcripts | Maybe | Remove sensitive data first |
| Slack or chat exports | Usually no for v1 | Too noisy unless cleaned |
| Draft documents | No for v1 | Can create conflicting answers |
| Old policies | No | Archive separately |
The first version should be smaller and cleaner than the real company archive.
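The source map above can be kept as structured data from day one, which makes the v1 corpus an explicit, reviewable list rather than a folder of files. A minimal sketch, where `SourceEntry`, its field names, and the example documents are all assumptions, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class SourceEntry:
    name: str
    source_type: str       # e.g. "policy", "SOP", "deck", "chat"
    include_in_v1: bool    # the Include? column from the source map
    owner: str             # person responsible for keeping it current
    notes: str = ""

source_map = [
    SourceEntry("Refund Policy v3", "policy", True, "ops@example.com"),
    SourceEntry("Sales deck 2024", "deck", True, "sales@example.com",
                "check messaging consistency"),
    SourceEntry("Slack export", "chat", False, "n/a", "too noisy for v1"),
]

# The v1 corpus is only what the map explicitly approves.
v1_corpus = [entry.name for entry in source_map if entry.include_in_v1]
print(v1_corpus)
```

Keeping an owner on every entry also feeds the governance phase later: someone is accountable when a source goes stale.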
Score every document from 1 to 5.
| Score | Meaning | Use |
|---|---|---|
| 5 | Current, approved, complete | Include first |
| 4 | Current but needs minor cleanup | Include after editing |
| 3 | Useful but incomplete | Keep for reference, not final answers |
| 2 | Old or conflicting | Do not include |
| 1 | Unknown origin | Do not include |
This prevents the assistant from blending old, draft, and approved material into one confident answer.
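Once documents are tagged, the 1-5 rule can be applied mechanically. A minimal sketch, where `triage_sources`, the thresholds, and the example documents are illustrative assumptions:

```python
def triage_sources(documents, min_final_score=4, reference_score=3):
    """Split scored documents into include / reference-only / exclude buckets.

    Scores 4-5 go into the live corpus, 3 is kept for reference only,
    and 1-2 stay out entirely, mirroring the scoring table above.
    """
    buckets = {"include": [], "reference_only": [], "exclude": []}
    for doc in documents:
        if doc["score"] >= min_final_score:
            buckets["include"].append(doc["name"])
        elif doc["score"] == reference_score:
            buckets["reference_only"].append(doc["name"])
        else:
            buckets["exclude"].append(doc["name"])
    return buckets

sources = [
    {"name": "Refund Policy v3", "score": 5},
    {"name": "Onboarding SOP", "score": 4},
    {"name": "Partial FAQ", "score": 3},
    {"name": "Draft pricing memo", "score": 2},
]
buckets = triage_sources(sources)
print(buckets)
```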
The assistant needs clear operating rules: answer only from approved sources, cite the source document in every answer, and escalate to a human when the sources conflict or do not cover a question.
For regulated, legal, financial, medical, or compliance topics, keep human review mandatory.
Morgan Stanley's case highlights a crucial point: internal AI needs evaluation. A small team can create a lightweight version.
Build 30-50 test questions that cover routine requests, known edge cases, and policy questions where the answer must come from the current approved version.
Example evaluation table:
| Question | Expected answer | Source | Pass criteria |
|---|---|---|---|
| What is our onboarding timeline? | 14-day onboarding steps | Client Onboarding SOP | Includes all 4 phases |
| Can we promise a 2-week SEO result? | No | Earnings and delivery policy | Avoids guarantee |
| What changed in the refund rule? | Compares old and current policy | Refund policy v3 | Uses current version only |
The evaluation set is your quality control system.
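A lightweight harness only needs to run each question and check its pass criteria. A sketch, assuming keyword-based pass criteria and a placeholder `ask_assistant` function standing in for whatever assistant API the team actually uses:

```python
def evaluate(test_cases, ask_assistant):
    """Run every test question and report a pass rate.

    Each case lists keywords the answer must contain -- a simple,
    assumed stand-in for the "pass criteria" column above.
    """
    results = []
    for case in test_cases:
        answer = ask_assistant(case["question"])
        passed = all(kw.lower() in answer.lower()
                     for kw in case["must_include"])
        results.append({"question": case["question"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

test_cases = [
    {"question": "What is our onboarding timeline?",
     "must_include": ["14-day"]},
]

def fake_assistant(question):
    # Stand-in for the real assistant call; replace with your actual API.
    return "Our 14-day onboarding runs in four phases..."

results, rate = evaluate(test_cases, fake_assistant)
print(rate)
```

Rerun the same harness after every source change, so a cleanup that silently breaks answers is caught before users notice.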
A useful assistant appears in the place where people work.
Choose one access point where the team already works, such as a chat channel, an internal portal page, or a shortcut inside an existing tool.
Then define three standard outputs:
| Output | Use |
|---|---|
| Short answer | Quick internal clarification |
| Source-backed summary | Longer answer with document references |
| Draft response | Message a human can edit before sending |
Do not overload the first version with too many modes.
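The three output modes can be implemented as nothing more than prompt templates. A sketch, where `OUTPUT_MODES`, `build_prompt`, and the template wording are illustrative assumptions:

```python
# Three fixed output modes, matching the table above.
OUTPUT_MODES = {
    "short_answer": (
        "Answer in 2-3 sentences. Name the source document."
    ),
    "sourced_summary": (
        "Summarize the answer and list every source document used."
    ),
    "draft_response": (
        "Write a draft message a human will edit before sending. "
        "Flag any claim you are unsure about."
    ),
}

def build_prompt(mode, question):
    """Combine a mode instruction with the user's question."""
    if mode not in OUTPUT_MODES:
        raise ValueError(f"Unknown mode: {mode}")
    return f"{OUTPUT_MODES[mode]}\n\nQuestion: {question}"

print(build_prompt("short_answer", "What is the refund rule?"))
```

Restricting users to a named mode keeps expectations clear and makes evaluation easier, because each mode has one predictable output shape.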
Before launch, define who can use the assistant, which sources it may read, who owns each source, and how sensitive material is handled.
This is especially important for client data, employee data, legal material, and financial information.
Moderna and Zapier both show that adoption is an operating habit, not just a software rollout.
Run a 45-minute enablement session: demonstrate real questions, walk through the three output modes, and share prompt examples the team can copy.
The goal is not to make everyone an AI expert. The goal is to make the assistant part of the team's normal work.
Track simple signals: how many people use the assistant each week, which questions repeat, and which questions it cannot answer.
GitHub's Accenture research is useful because it looked at real work signals, not only subjective excitement. For a small internal assistant, the same principle applies: measure whether the tool changes work behavior.
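These signals can come from a plain usage log; no analytics product is required. A minimal sketch, where the log format and example entries are assumptions:

```python
from collections import Counter
from datetime import date

# Assumed log format: one record per question asked.
usage_log = [
    {"user": "ana", "day": date(2024, 5, 6), "answered": True},
    {"user": "ben", "day": date(2024, 5, 6), "answered": False},
    {"user": "ana", "day": date(2024, 5, 7), "answered": True},
]

# Weekly active users: distinct people who asked anything.
weekly_active_users = len({entry["user"] for entry in usage_log})

# Unanswered questions feed the weekly review.
unanswered = [entry for entry in usage_log if not entry["answered"]]

# Questions per user shows whether use is broad or one enthusiast.
questions_per_user = Counter(entry["user"] for entry in usage_log)

print(weekly_active_users, len(unanswered), questions_per_user)
```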
The assistant will reveal documentation gaps.
Every week, review the questions the assistant could not answer, the answers users corrected, and the sources that produced conflicting or outdated responses.
The assistant is not only a search tool. It becomes a diagnostic system for the company's knowledge quality.
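Unanswered questions can be rolled up into a weekly gap report by topic, pointing at the documents that need writing. A sketch, where `gap_report` and the keyword-to-topic map are illustrative assumptions:

```python
from collections import Counter

unanswered = [
    "What is the travel reimbursement rule for contractors?",
    "What is the travel reimbursement rule for interns?",
    "How do we invoice in EUR?",
]

def gap_report(questions, keyword_map):
    """Tag each unanswered question with a topic and count the topics."""
    topics = []
    for q in questions:
        for topic, keywords in keyword_map.items():
            if any(kw in q.lower() for kw in keywords):
                topics.append(topic)
                break
        else:
            # No keyword matched: flag for manual triage.
            topics.append("uncategorized")
    return Counter(topics)

keyword_map = {"travel": ["travel", "reimburse"], "billing": ["invoice"]}
report = gap_report(unanswered, keyword_map)
print(report)
```

Two travel questions in one week is a stronger documentation signal than any survey: the travel policy is either missing from the corpus or unclear.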
Package this as a "private AI knowledge assistant sprint."
| Phase | Deliverable |
|---|---|
| Discovery | Select one knowledge domain and user group |
| Source audit | Clean document map and source quality score |
| Prototype | Assistant with one access point and three output modes |
| Evaluation | 30-50 test questions and pass criteria |
| Training | Team session and prompt examples |
| Governance | Access rules, source owner, review policy |
| Follow-up | 30-day improvement report |
This is a stronger offer than "I will set up a chatbot." It solves a known business problem: people cannot find or reuse the knowledge the business already has.