A repeatable, governed workflow can reduce the marginal cost of creating a high-quality internal knowledge asset to a target figure such as $18, while improving reuse and compliance. The shift comes from treating knowledge as a managed system, using retrieval to ground drafts in approved sources, and measuring cost-per-asset and reuse. The result is faster decision support, lower rework, and safer use of AI.
Definition
What does “cost of knowledge creation” mean in an enterprise?
The cost of knowledge creation is the total effort required to produce a usable knowledge asset that someone else can trust and apply. A knowledge asset can be a policy explainer, product brief, SOP, call-handling guide, or executive summary. The cost includes labour time, review time, tooling, approvals, and rework.
In most organisations, cost is inflated by hidden friction. Teams recreate the same explanations across functions. Content lives in slides, emails, and chat threads that are hard to find. Search works poorly when naming is inconsistent. That makes knowledge creation a repeated expense instead of a one-time investment. Standards-based knowledge management treats this as a system problem, not a writing problem. ISO 30401 frames knowledge management as a management system with continual improvement, not a one-off repository build.¹
Context
Why is knowledge creation so expensive in practice?
Knowledge work often breaks down at the point of flow: people lose time locating the right material, validating it, and adapting it for a specific audience. In a survey of 982 knowledge workers, respondents estimated they spend hours each week looking for or requesting needed information.² Even when the information exists, uncertainty about “what is current” triggers extra review cycles.
Privacy and security constraints add cost when content creation touches personal information. Australian regulators are clear that privacy obligations still apply when organisations use commercially available AI products.³ In practical terms, that means teams need controls for what data goes in, where outputs go, and how decisions are documented.
The opportunity is to shift from ad hoc writing to an engineered “knowledge production line” with quality controls. That is consistent with management-system thinking in ISO 30401¹ and information protection practice aligned to ISO/IEC 27001.⁴
Mechanism
How does an “$18 article” workflow work?
“$18” is a useful target for marginal cost, not a guarantee. It assumes you already have a governed knowledge base, templates, and a review loop that scales. The mechanism has four layers.
First, define what “done” means. A knowledge asset needs an owner, purpose, audience, expiry date, and traceable sources. ISO 30401 emphasises establishing and improving the system that enables value creation through knowledge.¹ ISO 9001’s quality management logic strengthens this by treating defects as process signals, not individual failures.⁵
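To make acceptance criteria concrete, the sketch below models the minimum metadata a “done” asset could carry. The field names and the is_publishable check are illustrative assumptions, not a prescribed schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeAsset:
    """Minimum metadata for a governed knowledge asset (illustrative schema)."""
    title: str
    owner: str             # accountable person, not a team alias
    purpose: str            # why the asset exists
    audience: str           # who it is written for
    expiry: date            # forces a review before this date
    sources: list[str] = field(default_factory=list)  # traceable, approved sources

    def is_publishable(self) -> bool:
        # "Done" means owned, sourced, and not already expired.
        return bool(self.owner and self.sources) and self.expiry > date.today()

asset = KnowledgeAsset(
    title="Complaints escalation explainer",
    owner="jane.doe",
    purpose="Explain when and how to escalate a complaint",
    audience="Contact centre agents",
    expiry=date(2026, 6, 30),
    sources=["policy/complaints-v3.pdf"],
)
print(asset.is_publishable())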
Second, ground drafts in approved sources using retrieval-augmented generation (RAG). RAG retrieves relevant passages from a controlled corpus and feeds them to a language model to generate text. The core approach is well documented in the original RAG research.⁶ This reduces dependence on what a model “remembers” and increases traceability because the retrieved passages can be logged.
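A minimal sketch of the retrieve-then-generate pattern follows. The keyword-overlap scoring stands in for a production retriever, the corpus and passage IDs are invented examples, and the assembled prompt would be passed to whatever model the organisation has approved.

# Minimal sketch of retrieval-augmented drafting over an approved corpus.
# The keyword-overlap scoring is an illustrative stand-in for a real retriever.

CORPUS = {
    "policy-031#4": "Refunds are available within 30 days of purchase with proof of payment.",
    "policy-031#7": "Refunds outside 30 days require manager approval.",
    "sop-112#2": "Escalate unresolved complaints to the duty manager within one business day.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k passages whose words overlap most with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[tuple[str, str]]) -> str:
    """Assemble a grounded prompt; passage IDs can be logged for provenance."""
    context = "\n".join(f"[{pid}] {text}" for pid, text in passages)
    return f"Answer using only the sources below, citing passage IDs.\n{context}\n\nQuestion: {query}"

passages = retrieve("When can a customer get a refund?")
print(build_prompt("When can a customer get a refund?", passages))  # prompt sent to the model
print([pid for pid, _ in passages])  # provenance log entry for this draft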
Third, enforce a human-in-the-loop review that checks claims, risk, and fit for purpose. This review should be structured. Hallucinations remain a known failure mode for large language models, including in high-stakes domains.⁷ A disciplined checklist and required citations per claim reduce risk by design, not by hope.
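As one way to make the review structured rather than hoped-for, the sketch below gates publication on a completed checklist and on every claim sentence carrying a citation marker. The checklist items, marker format, and coverage threshold are assumptions to adapt to local policy.

import re

# Illustrative review gate: every sentence must reference at least one
# source passage ID such as [policy-031#4], and the checklist must be complete.

CHECKLIST = ["claims_verified", "risk_reviewed", "fit_for_audience"]

def citation_coverage(draft: str) -> float:
    """Share of sentences that reference at least one source passage ID."""
    sentences = [s.strip() for s in re.split(r"[.!?]", draft) if s.strip()]
    cited = [s for s in sentences if re.search(r"\[[\w\-#]+\]", s)]
    return len(cited) / len(sentences) if sentences else 0.0

def passes_review(draft: str, checklist: dict[str, bool], min_coverage: float = 1.0) -> bool:
    """Pass only when every checklist item is ticked and coverage meets the threshold."""
    return all(checklist.get(item, False) for item in CHECKLIST) and \
        citation_coverage(draft) >= min_coverage

draft = "Refunds apply within 30 days [policy-031#4]. Outside 30 days, seek approval [policy-031#7]."
print(passes_review(draft, {"claims_verified": True, "risk_reviewed": True, "fit_for_audience": True}))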
Fourth, reuse becomes the engine of efficiency. Each asset is modular and tagged so it can be assembled into multiple outputs. Over time, the cost shifts from “writing” to “maintaining”, which is cheaper and more predictable.
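A simple illustration of reuse, assuming assets are stored as tagged modules: the same components are assembled into a call guide and an executive summary rather than rewritten for each channel. The tags and module text are invented examples.

# Illustrative reuse: tagged modules are assembled into different outputs
# instead of being rewritten per channel.

modules = [
    {"id": "refund-window", "tags": {"refunds", "policy"}, "text": "Refunds apply within 30 days."},
    {"id": "refund-escalation", "tags": {"refunds", "escalation"}, "text": "Outside 30 days, a manager approves."},
    {"id": "greeting", "tags": {"call-guide"}, "text": "Thanks for calling, how can I help?"},
]

def assemble(required_tags: set[str]) -> str:
    """Concatenate every module that carries all required tags."""
    return "\n".join(m["text"] for m in modules if required_tags <= m["tags"])

print(assemble({"refunds"}))            # agent-facing refund guide
print(assemble({"refunds", "policy"}))  # policy-only executive line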
Comparison
What changes versus traditional writing and knowledge management?
Traditional knowledge creation behaves like bespoke production. A subject matter expert drafts. A manager reviews. A compliance team edits late. The work restarts when products change. Costs stay high because the workflow creates little reusable structure.
A managed workflow treats content as a product. It uses templates, a controlled vocabulary, and explicit acceptance criteria. It uses retrieval to reduce time spent searching and to improve consistency. It also produces measurable artefacts: source logs, review history, and expiry controls.
AI makes the drafting step faster, but it does not remove governance. Major survey papers show that hallucination mitigation requires multiple layered techniques, not a single prompt tweak.⁸ In legal research, RAG can reduce hallucinations compared to general-purpose generation, but errors remain substantial, which reinforces the need for review and provenance.⁹
The key difference is not “AI writes faster”. The key difference is “the organisation builds a repeatable, auditable knowledge system”.
Applications
Where do products and tools create the fastest efficiency gains?
The fastest gains come from high-volume, repeated explanations with clear factual grounding. These use cases are common in contact centres, CX operations, and internal enablement.
Customer-facing operational knowledge: troubleshooting steps, service eligibility rules, and “what to do next” scripts.
Policy and compliance explainers: privacy notices, consent handling, complaints processes, and escalation criteria.
Executive-ready summaries: weekly risk briefs, incident summaries, and program status narratives.
Training assets: onboarding playbooks and micro-learning modules that map to role outcomes.
To operationalise this, use a knowledge platform that supports controlled ingestion, retrieval, governance, and measurement. For example, Customer Science’s Knowledge Quest product is positioned for knowledge workflows that prioritise findability and repeatability: https://customerscience.com.au/csg-product/knowledge-quest/
The business case should be expressed as a unit economics problem. If one modular asset prevents even a small number of repeated “re-explanations”, the payback can be rapid. Evidence from knowledge management research links effective KM to improved organisational performance and efficiency, though outcomes depend on implementation quality and context.¹⁰˒¹¹
Risks
What are the main risks of lowering knowledge creation costs?
The first risk is false confidence. Fluent text can hide errors. Large language model hallucinations are well documented, including taxonomies of causes and mitigation strategies.⁷˒⁸ A low-cost pipeline must include mandatory provenance and verification.
The second risk is privacy and data leakage. The OAIC guidance for organisations using commercially available AI products highlights that Privacy Act obligations apply wherever personal information is involved.³ This affects prompts, training data, storage, and vendor selection. Controls must define prohibited inputs, approved workspaces, retention, and incident response.
The third risk is governance drift. Without ownership, expiry, and review triggers, knowledge assets decay. ISO/IEC 27001 provides a structured frame for protecting information assets through risk assessment and continual improvement.⁴ ISO 30401 similarly expects organisations to review and improve the knowledge management system.¹
The practical risk response is to embed governance into the workflow. Do not treat governance as a final sign-off step.
Measurement
How do you prove the “$18 article” outcome is real?
Start with unit metrics that executives can trust.
Cost-per-asset: (labour minutes × loaded rate) + marginal tool cost. Track separately for drafting, verification, and approval; a worked sketch follows this list.
Cycle time: request-to-publish and request-to-first-draft.
Reuse rate: number of downstream uses per asset, including call guides, training, and executive summaries.
Search success: time-to-right-answer for a defined set of tasks, measured with user testing.
Defect rate: post-publication corrections per asset, categorised by severity.
Risk controls: percentage of assets with attached sources, owner, and expiry date.
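The worked sketch below shows how stage-level tracking rolls up to a unit figure. The minutes, loaded rate, and tool cost are illustrative assumptions, not benchmarks.

# Worked example of the cost-per-asset metric above, with illustrative inputs.

def cost_per_asset(minutes_by_stage: dict[str, float], loaded_rate_per_hour: float,
                   marginal_tool_cost: float) -> dict[str, float]:
    """Cost per stage plus total: (labour minutes x loaded rate) + tool cost."""
    stage_costs = {stage: m / 60 * loaded_rate_per_hour for stage, m in minutes_by_stage.items()}
    stage_costs["tooling"] = marginal_tool_cost
    stage_costs["total"] = sum(stage_costs.values())
    return stage_costs

print(cost_per_asset({"drafting": 8, "verification": 6, "approval": 2},
                     loaded_rate_per_hour=60.0, marginal_tool_cost=2.0))
# drafting $8 + verification $6 + approval $2 + tooling $2 = $18 total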
Use external baselines to frame the opportunity. If staff spend hours each week searching for information², a measurable reduction in “time-to-right-answer” becomes a credible value driver. In parallel, align your AI risk controls to NIST’s AI Risk Management Framework to document how you identify, assess, and manage AI risks across the lifecycle.¹²
For organisations that want an operating model and governance capability, a managed approach through CX and professional services can accelerate setup and measurement: https://customerscience.com.au/service/cx-consulting-and-professional-services/
Next Steps
What is a practical 30–60 day implementation path?
Week 1–2: Define the first “knowledge product”. Pick one high-volume domain. Define acceptance criteria, owners, and the minimum metadata. Map privacy constraints based on OAIC guidance.³
Week 3–4: Build the corpus and retrieval layer. Ingest approved sources only. Use chunking rules that preserve meaning (a sketch follows this plan). Implement RAG so drafts are grounded in retrieved passages.⁶ Add a verification checklist aligned to known hallucination risks.⁷˒⁸
Week 5–6: Run a controlled pilot. Measure cycle time, cost-per-asset, and search success against a baseline. Use defects as process signals. Align governance to ISO 30401’s system approach.¹
Week 7–8: Scale by reuse. Convert successful assets into modular components. Publish a “single source of truth” pattern. Make reuse visible with dashboards and internal incentives.
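For the Week 3–4 chunking step, a minimal sketch that splits on paragraph boundaries and caps chunk size is shown below; the character cap is an assumption to tune per corpus and retriever.

# Illustrative chunking that preserves meaning: split on paragraph
# boundaries and only merge neighbours while a size cap is respected.

def chunk_by_paragraph(text: str, max_chars: int = 800) -> list[str]:
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for p in paragraphs:
        if current and len(current) + len(p) + 2 > max_chars:
            chunks.append(current)
            current = p
        else:
            current = f"{current}\n\n{p}" if current else p
    if current:
        chunks.append(current)
    return chunks

doc = "Purpose of the policy.\n\nEligibility rules in detail.\n\nEscalation steps."
print(chunk_by_paragraph(doc, max_chars=50))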
This path also aligns to Australia’s AI Ethics Principles, which stress safety, security, reliability, and human-centred outcomes.¹³
Evidentiary Layer
Why this approach holds up under scrutiny
Retrieval improves grounding. The original RAG work demonstrates stronger performance on knowledge-intensive tasks by combining parametric generation with external retrieval.⁶ That matters because enterprises need traceability and rapid updates when policies change.
Risk remains real. Large-scale surveys of hallucinations in LLMs describe why errors occur and why mitigation must be layered, including retrieval, verification, and process controls.⁷˒⁸ Domain evaluations show RAG can reduce hallucinations but does not eliminate them, reinforcing the need for review gates and provenance.⁹
Governance is not optional in Australia. OAIC guidance clarifies privacy obligations when organisations use AI products that involve personal information.³ Organisations should treat this as a design constraint that shapes the workflow, tooling selection, and audit evidence.
FAQ
What is the simplest way to calculate the marginal cost of one knowledge asset?
Multiply the verified labour minutes by the loaded hourly rate, then add marginal tool cost. Track draft, verification, and approval separately so you can remove bottlenecks over time.
Does AI remove the need for subject matter experts?
AI reduces drafting time. It does not remove accountability. Experts still define acceptance criteria and validate claims because hallucinations remain a documented risk.⁷
How do we keep outputs compliant with privacy obligations?
Start with data minimisation, approved workspaces, retention controls, and vendor assessment. The OAIC guidance is a practical reference for organisations using commercial AI products.³
What platform capabilities matter most for enterprise knowledge workflows?
You need controlled ingestion, strong search and retrieval, provenance, ownership and expiry, and measurement. A product such as Customer Science Insights is positioned around measurable CX and knowledge performance: https://customerscience.com.au/csg-product/customer-science-insights/
How do we reduce hallucinations in practice?
Use retrieval grounded in approved sources, require citations for claims, apply a structured review checklist, and log source passages used for each output. RAG supports this grounding approach.⁶
When does “$18” become realistic?
When reuse is high, review is standardised, and the corpus is governed. The first assets cost more because you build the system. Costs fall as templates, retrieval, and quality gates stabilise.¹
Sources
1. ISO. ISO 30401:2018 Knowledge management systems. https://www.iso.org/standard/68683.html
2. APQC. “Survey Finds One Quarter of Knowledge Workers’ Time Lost Due to Information Flow Challenges.” https://www.apqc.org/about-apqc/news-press-release/apqc-survey-finds-one-quarter-knowledge-workers-time-lost-due
3. Office of the Australian Information Commissioner (OAIC). Guidance on privacy and the use of commercially available AI products (21 Oct 2024). https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products
4. ISO. ISO/IEC 27001:2022 Information security management systems. https://www.iso.org/standard/27001
5. ISO. ISO 9001:2015 Quality management systems. https://www.iso.org/standard/62085.html
6. Lewis P, Perez E, Piktus A, et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS 2020. https://proceedings.neurips.cc/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf
7. Huang L, Yu W, Ma W, et al. A Survey on Hallucination in Large Language Models. ACM Computing Surveys (2025). DOI: 10.1145/3703155
8. Tonmoy SMTI, et al. A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models (2024). https://arxiv.org/html/2401.01313v1
9. Magesh V, et al. Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (2025). https://dho.stanford.edu/wp-content/uploads/Legal_RAG_Hallucinations.pdf
10. Idrees H, et al. A systematic review of knowledge management and new product development in high-tech companies (2023). ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2444569X2300046X
11. Durst S, et al. A systematic literature review on knowledge management in SMEs (2022). https://pmc.ncbi.nlm.nih.gov/articles/PMC9540134/
12. NIST. Artificial Intelligence Risk Management Framework (AI RMF 1.0) (2023). https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
13. Australian Government Department of Industry, Science and Resources. Australia’s AI Ethics Principles (7 Nov 2019). https://www.industry.gov.au/publications/australias-ai-ethics-principles