This is Part 2 of our content freshness series. Part 1 covers why freshness matters and what it actually means. This post picks up where it left off: why expiry dates alone aren't enough, and what continuous monitoring looks like.
Let's say you do the responsible thing. Every document in your wiki gets a review date. Six months from creation, maybe twelve for stable reference material. When the date arrives, the owner gets a notification: review this or it gets flagged.
That's better than what most companies do. Most companies do nothing. The doc sits there, slowly decaying, and nobody notices until someone follows the instructions and something breaks.
But here's the uncomfortable truth: expiry dates are necessary and completely insufficient. A document can go dangerously stale days after its last review, and a review date won't catch it.
What expiry dates actually solve
Expiry dates solve the accountability problem. They answer the question: "Who is responsible for confirming this is still accurate, and when?"
That's genuinely valuable. Without it, documentation enters what we call the ownership void — a state where everyone assumes someone else is maintaining it, so nobody does. Setting a review date assigns a single person a single obligation on a specific date. Simple. Clear. Effective.
Here's what expiry dates look like in practice:
- A document is created with a review date 90 days out
- 14 days before expiry, the owner gets notified
- On the expiry date, the document is flagged as "needs review"
- The owner reviews, confirms it's still accurate, and extends the date
- Or they update it, or reassign it, or archive it
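Mechanically, that whole lifecycle is just a daily date comparison. Here's a minimal sketch in Python; the field names, notification hook, and 90-day default are illustrative assumptions, not Rasepi's actual schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

NOTIFY_WINDOW_DAYS = 14  # warn the owner two weeks before expiry

@dataclass
class Document:
    title: str
    owner: str
    review_date: date
    needs_review: bool = False

def run_daily_expiry_check(doc: Document, today: date, notify) -> None:
    """One document's pass through the daily expiry check."""
    days_left = (doc.review_date - today).days
    if days_left <= 0:
        doc.needs_review = True  # flag as "needs review" on (or after) the date
        notify(doc.owner, f"'{doc.title}' has reached its review date")
    elif days_left <= NOTIFY_WINDOW_DAYS:
        notify(doc.owner, f"'{doc.title}' is due for review in {days_left} days")

def confirm_review(doc: Document, today: date, extension_days: int = 90) -> None:
    """Owner confirms the doc is still accurate and extends the date."""
    doc.needs_review = False
    doc.review_date = today + timedelta(days=extension_days)
```

Note what the check compares: only the calendar. Nothing in this loop looks at the document's content, which is exactly the blind spot the rest of this post is about.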
This is a solid system. It catches the slow decay — the doc that nobody has thought about in a year. It creates a regular cadence of review. It makes ownership visible.
But it has a blind spot the size of a continent.
What expiry dates miss
Between review dates, a document lives in a black box. You reviewed it on January 15. The next review is April 15. On February 3, any of these things could happen:
Links break silently
An external URL you referenced returns a 404. An internal link points to a document that was archived. A code repository was renamed and every GitHub link in your doc is now dead. Your document still looks fine. The expiry date isn't for another two months. Nobody knows the links are broken.
Related content changes
You wrote a deployment guide that references your architecture document. In February, someone completely rewrites the architecture doc — new patterns, new infrastructure, new conventions. Your deployment guide still references the old architecture. It's not technically wrong yet, but it's drifting. By the time your review date arrives, the gap might be significant.
Readership drops to zero
Your document used to be read by 40 people a month. Then a process changed and nobody needs it anymore, but nobody archived it either. It sits in search results, taking up space, occasionally confusing a new hire who doesn't know it's irrelevant. The expiry date doesn't care about readership. It'll ping the owner on schedule regardless.
Translations fall behind
The English source was updated on February 10. The French, German, and Japanese translations are now stale. But the expiry date on those translated versions isn't until May. For three months, non-English teams are reading outdated content and don't know it.
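One simple way to detect this kind of drift is to fingerprint the source content and record that fingerprint on each translation when it's updated. This sketch assumes a hypothetical per-translation `source hash` field; it's an illustration of the technique, not Rasepi's implementation:

```python
import hashlib

def content_hash(text: str) -> str:
    """Fingerprint of the source document's current content."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def stale_translations(source_text: str, translations: dict[str, str]) -> list[str]:
    """Return the languages whose translation was made from an older source.

    `translations` maps a language code to the source hash that was
    recorded when that translation was last updated (hypothetical field).
    """
    current = content_hash(source_text)
    return [lang for lang, recorded in translations.items() if recorded != current]
```

The moment the English source is edited, its hash changes, and every translation still carrying the old hash shows up as stale immediately, rather than at its own review date months later.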
Readers flag problems
A reader leaves a comment: "Step 3 doesn't work anymore, the CLI flag was deprecated." That comment sits there. The expiry date is still weeks away. The next person who reads the doc might not see the comment. The one after that definitely won't.
Expiry is a scheduled checkpoint. These are unscheduled events. The gap between the two is where stale documentation does the most damage.
Freshness: continuous monitoring
Freshness scoring fills the gap that expiry dates leave open. Instead of checking a document's health once every 90 days, freshness tracks it continuously — every day, in the background, without anyone needing to do anything.
Here's how it works in Rasepi:
Every document gets a live freshness score from 0 to 100, calculated from multiple signals:
| Signal | What it detects | Why it matters |
|---|---|---|
| Link health | Broken, redirected, or unreachable URLs | Broken links erode trust and waste time |
| Review status | Whether the doc has been reviewed on schedule | The baseline accountability check |
| Readership trends | Whether anyone is actually reading this | Low readership suggests the doc may be irrelevant |
| Edit recency | When the doc was last modified vs. related content | Detects drift relative to the surrounding knowledge base |
| Translation alignment | Whether all language versions are current | Stale translations mean teams in other markets work from old info |
| Reader flags | Whether readers have reported issues | Crowdsourced staleness detection |
| Cross-references | Whether documents this one links to are themselves stale | Staleness is contagious |
Each signal contributes to the overall score. A document can lose freshness points for a broken link today, even though its review date isn't for weeks. That's the whole point.
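A multi-signal score like this can be sketched as a weighted sum. The weights and signal values below are invented for illustration (Rasepi's actual weighting isn't described here); the point is the shape of the calculation:

```python
# Hypothetical signal weights -- chosen for illustration, summing to 1.0.
WEIGHTS = {
    "link_health": 0.25,
    "review_status": 0.20,
    "readership": 0.15,
    "edit_recency": 0.10,
    "translation_alignment": 0.15,
    "reader_flags": 0.10,
    "cross_references": 0.05,
}

def freshness_score(signals: dict[str, float]) -> int:
    """Combine per-signal health values (each 0.0 to 1.0) into a 0-100 score.

    A missing signal is treated as healthy (1.0).
    """
    total = sum(WEIGHTS[name] * signals.get(name, 1.0) for name in WEIGHTS)
    return round(total * 100)
```

With these example weights, a document whose link health drops to 0.2 loses 20 points overnight, regardless of how far away its review date is.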
How the two work together
Expiry and freshness aren't competing approaches. They're complementary layers:
Expiry dates are the governance layer. They create a regular cadence of human review. Someone has to look at this document on a schedule and confirm it's still accurate. This catches the things automation can't — whether the content is still correct, whether the advice is still sound, whether the process it describes still reflects reality.
Freshness scoring is the monitoring layer. It catches everything between review dates — the broken links, the translation drift, the abandoned documents, the contextual decay that happens when the world moves and a document doesn't.
Together they create a system where:
- Every document is reviewed by a human on a regular schedule (expiry)
- Between reviews, automated signals catch problems as they happen (freshness)
- Both systems feed into a single trust score that everyone can see
- That score affects how the document ranks in search and whether AI tools use it as a source
The scoring impact
Here's where it gets practical. In Rasepi, a document's freshness score directly affects its visibility:
- Score 80–100: Full visibility. Appears normally in search results. Eligible as a source for AI answers. No flags.
- Score 50–79: Reduced visibility. Appears in search with a staleness indicator. AI tools may deprioritise it as a source. Owner is notified.
- Score below 50: Flagged. Pushed down in search results significantly. Excluded from AI answers entirely. Owner receives urgent notification.
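The three bands above amount to a simple tier lookup. A sketch of that mapping, with the tier names and rule fields invented for illustration:

```python
def visibility_tier(score: int) -> dict:
    """Map a 0-100 freshness score to visibility rules (illustrative names)."""
    if score >= 80:
        return {"tier": "full", "search": "normal",
                "ai_source": True, "notify_owner": False}
    if score >= 50:
        return {"tier": "reduced", "search": "staleness indicator",
                "ai_source": "deprioritised", "notify_owner": True}
    return {"tier": "flagged", "search": "demoted",
            "ai_source": False, "notify_owner": "urgent"}
```

Note the hard cut-off at the bottom band: below 50, the document isn't merely ranked lower for AI answers, it's excluded as a source entirely.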
This creates a feedback loop. When a document's score drops, the owner is pushed to fix it — not because an arbitrary date arrived, but because something actually changed. The broken link, the stale translation, the declining readership — these are real signals that demand attention now, not in six weeks.
A practical example
Let's walk through a scenario:
March 1: Your "Incident Response Playbook" scores 92. It was reviewed two weeks ago, all links are valid, readership is high, and all four language versions are current.
March 8: Someone restructures the engineering status page. Three URLs in the playbook now break or redirect. Freshness score drops to 78. The owner gets a notification: "3 link issues detected."
March 10: The owner fixes the links. Score rebounds to 89.
March 15: The English version is updated with a new escalation path. The French and German translations are now stale (content hash mismatch). Score drops to 74.
March 17: The translations are updated. Score returns to 91.
March 20: Readership data shows the Japanese version hasn't been accessed in 30 days. Score dips to 86. A subtle signal, but tracked.
April 1: The scheduled review date arrives. The owner reviews the content, confirms it's accurate, extends the expiry to July 1. Score stays at 86 because the readership signal is still present.
At no point did the team wait for a review date to catch a problem. The freshness system flagged issues within days. The review date provided the governance checkpoint. Both layers doing their job.
Why "just set a review date" isn't enough anymore
Five years ago, expiry dates might have been sufficient. Documentation was read by people, and people can exercise judgement. If a doc looked a bit off, they'd ask around.
Today, documentation is infrastructure. It feeds AI tools, onboarding automation, compliance systems, and search engines that serve results without context. These systems don't exercise judgement. They consume content as-is and redistribute it at scale.
A document with broken links and stale translations that still has three weeks until its review date can do a lot of damage in those three weeks — especially if an AI assistant is confidently serving answers based on it.
Expiry dates are the minimum viable approach to documentation governance. Freshness scoring is what you need when documentation is consumed by systems that can't think for themselves.
Getting started
If you already have expiry dates on your documents (good for you — seriously, most teams don't even do that), here's how to layer on freshness:
- Start tracking links. Run a broken link check across your top 50 documents. The number will probably surprise you.
- Check translation alignment. If you have multilingual docs, compare last-edit dates between the source and translations. How many are more than a month behind?
- Look at readership. Which documents get zero traffic? Are they still needed, or should they be archived?
- Talk to your AI team. If you have an internal AI assistant, ask what documents it's sourcing from. Then check the freshness of those documents.
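For the first step, you don't need tooling to get started. A rough link check over a document's text can be done with the standard library alone; this sketch sends HEAD requests to every URL it finds (the regex is deliberately simple and won't catch every URL form):

```python
import re
import urllib.error
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def check_links(doc_text: str, timeout: float = 5.0) -> dict[str, str]:
    """Return a {url: status} map for every URL found in a document.

    Status is "ok", an HTTP error code as a string, or "unreachable".
    """
    results = {}
    for url in set(URL_PATTERN.findall(doc_text)):
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as resp:
                results[url] = "ok" if resp.status < 400 else str(resp.status)
        except urllib.error.HTTPError as err:
            results[url] = str(err.code)
        except (urllib.error.URLError, TimeoutError):
            results[url] = "unreachable"
    return results
```

Run something like this across your top 50 documents once, count the non-"ok" entries, and you'll have a concrete baseline before you invest in anything continuous.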
You'll likely find that your technically-not-expired documents have plenty of problems that expiry dates will never catch.
Expiry dates tell you if someone has checked a document recently. Freshness tells you if the document is actually healthy right now. One is a calendar event. The other is a living signal.
You need both. But if you only have expiry dates, you're flying blind between checkpoints.
A document doesn't go stale on its review date. It goes stale the moment something changes and nobody notices. Freshness scoring notices.
Rasepi combines mandatory expiry dates with continuous freshness monitoring. Every document earns its trust score — or loses it — in real time. No waiting, no blind spots, no surprises at review time.
See how freshness scoring works →
This is Part 2 of a two-part series. If you haven't read it yet, start with Part 1: The Metric Your Team Isn't Tracking.