Part X · Building the Knowledgebase

Chapter 54. Maintaining and Updating the Knowledgebase

Addresses the long-term stewardship of community mapping knowledgebases: update cadence, crowd-sourced contributions, editorial review, dispute resolution, archiving, migration, funding, volunteer continuity, and sustainable maintenance planning.




Chapter Overview

Building a knowledgebase is one thing. Keeping it current, accurate, and useful over years is another. This chapter addresses the long-term stewardship of community mapping knowledgebases: understanding data decay, establishing update cadence, enabling crowd-sourced contributions, implementing editorial review, resolving disputes, archiving without erasure, planning for migration, securing funding, preventing volunteer burnout, and designing sustainable maintenance systems. Maintenance is not an afterthought — it is the work that determines whether a knowledgebase becomes community infrastructure or digital landfill.


Learning Outcomes

By the end of this chapter, you will be able to:

  1. Explain the concept of data half-life and its implications for knowledgebase maintenance
  2. Design update cadence protocols appropriate to different content types
  3. Implement crowd-sourced update systems with quality controls
  4. Establish editorial review processes that balance speed and accuracy
  5. Apply dispute resolution protocols from established community knowledgebases
  6. Distinguish between deletion and archiving, and design archival systems
  7. Evaluate migration and forward-compatibility strategies
  8. Identify sustainable funding models for ongoing maintenance
  9. Recognize signs of volunteer burnout and design continuity plans

Key Terms

  • Data Half-Life: The time it takes for half of a dataset to become outdated, inaccurate, or obsolete.
  • Editorial Review: The process of validating, fact-checking, and approving changes before they are published.
  • Crowd-Sourced Updates: A system where community members contribute corrections, additions, or updates to the knowledgebase.
  • Archiving: Preserving historical data in a way that acknowledges it is no longer current but retains its value for understanding change over time.
  • Forward Compatibility: The design principle that future versions of a system can read and use data created in current or past versions.

54.1 The Half-Life of Community Data

All data decays. Not because files corrupt or servers fail, but because the world changes. The food bank listed in your knowledgebase closes. The community center changes its hours. The volunteer coordinator moves away. The service that was "free" now requires proof of residency. A map that was accurate last year becomes misleading this year.

The concept of data half-life — borrowed from physics and adapted to information science — offers a useful frame. Half-life is the time it takes for half of a dataset to become outdated. For community mapping data, half-life varies wildly by content type.
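The analogy can be made concrete. If we assume decay is roughly exponential (an assumption for illustration, not a measured law), the expected share of entries still accurate after a given age follows the familiar half-life formula. A minimal sketch:

```python
def fraction_current(age_years, half_life_years):
    """Expected fraction of entries still accurate after age_years,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (age_years / half_life_years)
```

With a two-year half-life (medium-stability data), half the entries are expected to be stale after two years and three quarters after four, which is why medium-stability content needs at least annual review.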

High-stability data has a long half-life. Geographic features like rivers, roads, and municipal boundaries change slowly. Organizational names and mission statements change infrequently. Historical data (what existed in 2010) does not change at all — though interpretations may shift. High-stability data might have a half-life of 5-10 years or more.

Medium-stability data has a moderate half-life. Service locations, operating hours, contact information, and staff names change with some regularity but not constantly. A community center might stay in the same place for a decade but change its hours twice a year. Medium-stability data might have a half-life of 1-3 years.

Low-stability data has a short half-life. Event calendars, availability of specific programs, waitlist status, current funding, and real-time service capacity change frequently. Low-stability data might have a half-life of weeks or months.

Understanding half-life helps prioritize maintenance effort. You cannot update everything constantly. But you can build a system that reviews low-stability content frequently, medium-stability content regularly, and high-stability content periodically — and that flags outdated entries for review rather than allowing them to silently mislead.

OpenStreetMap, the collaborative global map, has grappled with data decay for two decades. Its solution: a combination of crowd-sourced updates, automated staleness detection (flagging features not edited in X years), and volunteer "validator" roles who review suspicious edits. The system is not perfect — rural areas and less-populated regions often have stale data — but it is resilient. Wikipedia uses similar mechanisms: articles not updated in years get tagged with "this article may be outdated" notices, prompting editors to review.

The honest truth: most community knowledgebases die not from technical failure but from neglect. A knowledgebase launched with enthusiasm and funding becomes accurate for a year, then drifts, then becomes unreliable, then is abandoned. The median lifespan for unfunded community data projects is 3-5 years. Breaking past that threshold requires deliberate design for maintenance, not just creation.


54.2 Update Cadence

How often should a knowledgebase be updated? The answer is not "constantly" or "whenever we have time." It is: as often as the data requires, with clear protocols for each content type.

A sustainable update cadence matches effort to need. Low-stability content (event calendars, program availability) needs weekly or monthly updates. Medium-stability content (service hours, contact info) needs quarterly or biannual review. High-stability content (organizational mission, geographic boundaries) needs annual or as-needed review.

The cadence must also match capacity. A volunteer-run knowledgebase with 10 hours per month of maintenance time cannot promise daily updates. A well-funded municipal data team with paid staff can. Honest capacity assessment prevents over-promising and under-delivering.

Scheduled review cycles work well for medium- and high-stability data. Every quarter, a designated editor reviews a subset of entries: Are the phone numbers still working? Are the addresses current? Are the listed services still offered? This structured approach prevents the "we'll update it when we notice something's wrong" trap — because by the time someone notices, trust is already eroding.
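One simple way to run such a cycle is to partition entries deterministically into four batches, so a full rotation reviews every entry exactly once a year. A sketch (the batching rule is illustrative, not a standard):

```python
def quarterly_batch(entries, quarter, batches=4):
    """Assign each entry to one of `batches` review batches by position,
    so a full rotation covers every entry once per year."""
    return [e for i, e in enumerate(entries) if i % batches == quarter % batches]
```

For example, with eight entries, quarter 0 reviews entries at positions 0 and 4; over four quarters, every entry is covered once.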

Trigger-based updates work well for low-stability data. When an organization reports a change (new hours, new location, program closure), the update happens immediately. Trigger-based systems require clear intake channels: an email address, a web form, or a direct-edit interface where trusted contributors can submit changes.

Automated staleness flags work well for all content types. If an entry has not been reviewed or updated in 18 months, flag it for editorial review. The flag does not mean the data is wrong — it means it has not been verified recently. OpenStreetMap uses this approach: features not edited in 5+ years get flagged by volunteer validators who check whether they still exist.
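A staleness flag of this kind is straightforward to implement. The sketch below assumes each entry carries a `last_reviewed` timestamp (a field name chosen for illustration):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=548)  # roughly 18 months

def flag_stale(entries, now=None):
    """Return entries whose last review is older than the staleness window.
    A flag means 'not recently verified', not 'wrong'."""
    now = now or datetime.now()
    return [e for e in entries if now - e["last_reviewed"] > STALE_AFTER]
```

Flagged entries go into the editorial review queue rather than being hidden or deleted.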

Code for America's Brigade network — a collection of volunteer civic tech groups across the United States — maintains a national directory of Brigades. The directory uses a hybrid update model: Brigades self-report changes via a form (trigger-based), a coordinator reviews all entries quarterly (scheduled), and entries not updated in a year get flagged for outreach (automated staleness). The system is not perfect, but it has kept the directory usable for over a decade.


54.3 Crowd-Sourced Updates

The most sustainable knowledgebases are not maintained by a single person or team. They are maintained by the community they serve. Crowd-sourcing distributes the maintenance burden and taps into distributed knowledge — the people who use a service know when it changes.

But crowd-sourcing without quality control becomes chaos. Anyone can edit Wikipedia, but not every edit survives. The balance is: lower barriers to contribution, but maintain editorial oversight.

Who can contribute? The answer depends on the knowledgebase's trust model. Open systems (like OpenStreetMap) allow anyone to register and edit. Semi-open systems require email verification or a simple approval process. Closed systems restrict editing to verified community members or partner organizations. There is no single right answer — it depends on risk, capacity, and context. A knowledgebase of public services can be more open than a knowledgebase of sensitive cultural sites.

What can they change? Some systems allow direct edits that publish immediately (Wikipedia's default). Others require submission of suggested edits that await editorial review (the model many municipal data portals use). High-risk or high-stakes fields (addresses, contact info, descriptions of services) might require review. Low-risk fields (user-submitted tags, community notes) might publish immediately.

How do you motivate contribution? Intrinsic motivation (helping the community, seeing one's knowledge valued) is stronger than extrinsic rewards. But recognition matters. OpenStreetMap highlights top contributors. Wikipedia recognizes editors with "barnstars" and edit-count milestones. A community knowledgebase might publicly thank contributors in monthly updates, list them on an acknowledgments page, or invite active contributors to editorial roles.

How do you prevent vandalism or bad-faith edits? Moderation, version control, and community norms. Every edit should be logged (who changed what, when). Trusted users can be given "rollback" privileges to quickly undo vandalism. Suspicious patterns (the same user making dozens of deletions, IP addresses linked to spam) can trigger review or blocks. But over-policing kills participation. The goal is not zero bad edits — it is a system resilient enough to catch and correct them quickly.
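The logging-and-rollback mechanism described above can be sketched as an append-only edit log. This is a minimal illustration of the idea, not any particular platform's implementation; the field names are assumptions:

```python
from datetime import datetime

class EditLog:
    """Append-only log of field-level edits, enabling quick rollback."""

    def __init__(self):
        self.history = []

    def record(self, entry, field, new_value, user):
        """Log who changed what, when, then apply the change."""
        self.history.append({
            "entry_id": entry["id"], "field": field,
            "old": entry.get(field), "new": new_value,
            "user": user, "when": datetime.now(),
        })
        entry[field] = new_value

    def rollback_last(self, entry):
        """Undo the most recent logged edit to this entry."""
        for i in range(len(self.history) - 1, -1, -1):
            logged = self.history[i]
            if logged["entry_id"] == entry["id"]:
                entry[logged["field"]] = logged["old"]
                del self.history[i]
                return logged
        return None
```

Because every change is recorded before it is applied, a trusted user with rollback privileges can revert vandalism in one step without hunting for the previous value.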

LocalWiki, a platform for community-created local wikis, uses a crowd-sourced model with minimal barriers. Anyone can edit. All changes are logged and reversible. Spam and vandalism are rare because the platform is used by tight-knit communities where trust and accountability are high. In contrast, Google Maps user-submitted edits go through automated and manual review because the scale and stakes (billions of users, navigation-critical data) demand it.


54.4 Editorial Review

Editorial review is the process of validating, fact-checking, and approving changes before (or shortly after) they are published. It is the quality-control layer that keeps a knowledgebase credible.

Pre-publication review means changes are submitted, reviewed, and then published. This is the safest model but the slowest. It works well for high-stakes data (legal information, health services, addresses) where errors are costly. The downside: if the editorial queue gets backed up, contributors get frustrated and stop participating.

Post-publication review means changes publish immediately but are flagged for review. This is faster and encourages participation, but it means errors can go live. It works well for low-stakes data or systems with active moderators who review recent changes daily. Wikipedia uses post-publication review for most articles; vandalism is usually caught within minutes by volunteer editors monitoring recent changes.

Tiered review means different content types or user roles get different review processes. Trusted contributors (those with a track record of good edits) can publish directly. New contributors submit changes for review. High-risk fields always require review; low-risk fields publish immediately. This balances speed, quality, and trust.
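The tiered routing rule can be stated in a few lines. The field names and the trusted-contributor rule below are illustrative assumptions:

```python
# Hypothetical set of fields that always require editorial review.
HIGH_RISK_FIELDS = {"address", "phone", "hours", "services"}

def route_edit(field, contributor_is_trusted):
    """Tiered review: high-risk fields always queue for review;
    otherwise trusted contributors publish directly."""
    if field in HIGH_RISK_FIELDS or not contributor_is_trusted:
        return "queue_for_review"
    return "publish"
```

A new contributor's note on a community page queues for review; a trusted contributor's note publishes immediately; an address change queues for review no matter who submits it.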

What does an editor check? At minimum: accuracy (is this information correct?), completeness (are required fields filled?), formatting (does it follow style guidelines?), appropriateness (is this relevant content, or spam?), and ethics (does this entry respect privacy, consent, and cultural protocols?). A good editorial checklist makes review faster and more consistent.

How long should review take? Aim for 24-48 hours for routine changes, faster for urgent corrections (a service has closed, contact info is wrong). If review takes weeks, contributors lose trust and stop participating.

The Humanitarian OpenStreetMap Team (HOT) trains volunteer mappers to respond to disasters by mapping affected areas. Edits are reviewed by experienced validators who check for accuracy (buildings traced correctly, roads connected properly) and completeness (all visible features mapped). The review process is fast — often within hours — because lives depend on accurate maps. The system works because validators are trained, the review criteria are clear, and the community culture values both speed and quality.


54.5 Handling Disputes

Community knowledgebases surface factual disagreements. Two residents disagree on the historical name of a park. Two organizations claim to serve the same area and want their boundaries recognized differently. A service provider disputes a community member's characterization of their accessibility.

Disputes are not failures. They are signals that the knowledgebase matters — people care enough to argue. But unresolved disputes erode trust. A system for handling them is essential.

Step 1: Acknowledge the dispute. Do not delete one perspective and keep the other. Flag the entry as "disputed" and document both perspectives. Wikipedia does this with "citation needed" tags and talk-page discussions. A community knowledgebase might add a "Multiple perspectives" note and link to the discussion.

Step 2: Seek evidence. Can the dispute be resolved with documentation? Historical maps, city records, organizational bylaws, or interviews with long-time residents can clarify facts. Sometimes disputes are not about facts but about interpretation — and that is fine. Name it.

Step 3: Mediate with community norms. Many disputes are about whose knowledge counts. If the knowledgebase has a governance structure (Chapter 35.8 addresses this), bring the dispute to that body. If not, convene a small mediation group that includes the disputing parties and neutral community members. The goal is not to "win" but to find a resolution both parties can live with.

Step 4: Document the resolution. If the dispute is resolved, update the entry and note the resolution in the edit history. If it is not resolved, document both perspectives and explain why. Transparency builds trust.

Step 5: Learn from the pattern. If disputes cluster around certain content types (boundaries, historical names, service descriptions), that is a signal. Maybe the knowledgebase needs clearer sourcing guidelines. Maybe it needs a separate "community perspectives" layer that holds subjective knowledge differently than factual data.
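Steps 1 and 4 above can be reflected in the data model itself: flag the entry, keep every perspective, and record the resolution in place. A minimal sketch (field names are assumptions for illustration):

```python
def mark_disputed(entry, perspectives):
    """Flag an entry as disputed without deleting either perspective."""
    entry["disputed"] = True
    entry["perspectives"] = perspectives  # each: {"claim", "source", "submitted_by"}

def resolve_dispute(entry, resolution_note):
    """Record the outcome; the perspectives stay with the entry."""
    entry["disputed"] = False
    entry["resolution"] = resolution_note
```

Note that resolving a dispute clears the flag but does not erase the competing perspectives: they remain part of the entry's record, which is what makes the process transparent.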

OpenStreetMap has a well-documented dispute resolution process. If two mappers disagree on whether a road is public or private, they discuss on the feature's talk page. If they cannot resolve it, they bring it to the regional mailing list. If still unresolved, a volunteer Dispute Resolution Team mediates. The process is slow but trusted because it prioritizes consensus over authority.

As Chapter 35.8 (Conflict, Disagreement, and Repair) emphasized, conflict in community work is normal. The question is not whether disputes will arise — it is whether the system has a fair, transparent process for resolving them.


54.6 Archiving and Sunsetting Entries

What happens when a service closes? When an organization dissolves? When a community space is demolished? The instinct is often: delete the entry. But deletion erases history. Archiving preserves it.

Archiving means marking an entry as "no longer current" but keeping it in the knowledgebase with clear historical context. A food bank that closed in 2022 is archived with a note: "Operated 2010-2022. Closed due to funding loss. Service users were redirected to [other food bank]." The entry no longer appears in the default "current services" view, but it remains searchable in the historical archive.

Why archive instead of delete? Because a community map of 2010 has value even when the territory has changed. Researchers studying service availability over time need that data. Community members processing displacement or gentrification need to be able to point to what was lost. Archiving is an act of memory and accountability.

Sunsetting is the intentional process of moving an entry from "current" to "archived." It includes: verifying the closure (not just a temporary pause), documenting the reason (if known and appropriate to share), noting where users were redirected (if anywhere), updating any references in other entries, and preserving the full edit history.
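The sunsetting steps translate into a small state change on the entry: mark it archived, and attach the closure context instead of deleting anything. A sketch, with illustrative field names:

```python
from datetime import date

def sunset(entry, reason, redirect_to=None, closed=None):
    """Move an entry from 'current' to 'archived', preserving context."""
    entry["status"] = "archived"
    entry["archive"] = {
        "closed": closed or date.today().isoformat(),
        "reason": reason,
        "redirect_to": redirect_to,
    }
    return entry
```

The entry then drops out of the default "current services" view (any query filtering on `status == "current"`) while remaining searchable in the historical archive.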

Historical layers are a powerful feature in digital knowledgebases. Instead of a single "now" map, offer multiple temporal views: "Services as of 2020," "Services as of 2015," "Services as of 2010." This turns the knowledgebase into a tool for understanding change, not just current state. OpenStreetMap has experimental time-slider features that show how mapped features have changed over the years. The Internet Archive's Wayback Machine does this for websites.
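If entries carry opening and closing dates, a historical layer is just a filter over those dates. The sketch below assumes `opened` and optional `closed` fields holding ISO date strings (which compare correctly as plain strings):

```python
def active_as_of(entries, when):
    """Entries open on a given ISO date: a simple 'historical layer' query."""
    return [
        e for e in entries
        if e["opened"] <= when and (e.get("closed") is None or e["closed"] > when)
    ]
```

The same archive then answers "what is here now?" and "what was here in 2015?" with the same function and a different date.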

Avoid silent deletion. If an entry is removed, log why. If a user searches for a service that no longer exists, show them the archived entry with context — do not return "no results found" and leave them wondering if the service never existed or if the knowledgebase is incomplete.


54.7 Migration and Forward Compatibility

Technology changes. File formats evolve. Platforms get deprecated. A knowledgebase built in 2024 may need to be migrated to new infrastructure by 2030. Forward compatibility is the design principle that future versions of a system can read and use data created in current or past versions.

Use open, well-documented formats. Proprietary formats (locked to a specific vendor's software) are migration nightmares. Open formats (JSON, GeoJSON, CSV, Markdown) are readable by many tools and future-proof. When choosing a knowledgebase platform, ask: Can I export all my data in an open format? If the answer is no, that is a red flag.

Design for export and import. A knowledgebase should be able to export its full dataset (entries, metadata, edit history, relationships) at any time. It should also be able to import structured data from other sources. Portability is power. It prevents vendor lock-in and enables migration when needed.

Version your schema. The data structure (what fields exist, how they relate) will evolve. Document the schema version in every export. When you make breaking changes (renaming a field, changing data types), increment the version number and provide a migration script that converts old data to the new format.
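A migration script can be as simple as a chain of per-version transforms. The renamed field below is a hypothetical example of a breaking change, not a prescribed schema:

```python
SCHEMA_VERSION = 2

def migrate_v1_to_v2(record):
    """v1 -> v2: 'phone' renamed to 'contact_phone' (a hypothetical change)."""
    record = dict(record)  # leave the original untouched
    record["contact_phone"] = record.pop("phone", None)
    record["schema_version"] = 2
    return record

MIGRATIONS = {1: migrate_v1_to_v2}

def migrate(record):
    """Apply migrations stepwise until the record reaches the current version."""
    while record.get("schema_version", 1) < SCHEMA_VERSION:
        record = MIGRATIONS[record.get("schema_version", 1)](record)
    return record
```

Because each migration handles exactly one version step, data from any past version can be brought forward by running the chain, and records missing a version marker are treated as version 1.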

Preserve provenance. When migrating data, do not lose the metadata: who created this entry, when, from what source, with what edits. Provenance is part of the data's credibility. A knowledgebase that loses its edit history in migration loses accountability.

Plan for the long term. What happens if the platform you are using shuts down? If the hosting service disappears? If the lead developer stops maintaining the code? A resilient knowledgebase has a succession plan: backups stored in multiple locations, documentation for how to rebuild the system, and a governance structure that outlives any single person or organization.

The Internet Archive's Wayback Machine has archived over 800 billion web pages since 1996. It has survived multiple technology shifts (from magnetic tapes to modern cloud storage) by obsessively prioritizing open formats, redundant storage, and migration planning. Community knowledgebases do not need that scale, but they can learn from the principle: design for longevity, not just launch.


54.8 Funding Maintenance

Building a knowledgebase is a one-time cost. Maintaining it is a recurring cost. Funders love launches. They are less enthusiastic about "we need $30,000 a year to keep the data current." Yet without funding for maintenance, the knowledgebase decays and dies.

Maintenance costs include:

  • Staff or contractor time for editorial review, update coordination, and dispute resolution
  • Hosting and infrastructure (servers, domain names, backups)
  • Software maintenance (security patches, bug fixes, feature updates)
  • Community engagement (training contributors, responding to user questions)
  • Data validation (periodic audits, quality checks)

For a small-scale local knowledgebase, this might be $10,000-$30,000 per year. For a regional or multi-jurisdictional system, it could be $100,000-$500,000 per year. For a national system, much more.
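The small-scale figure can be sanity-checked with rough arithmetic. All rates below are illustrative assumptions, not quoted prices:

```python
# Hypothetical small local knowledgebase; every figure is an assumption.
coordinator_hours_per_month = 15
hourly_rate = 45            # part-time coordinator or contractor
hosting_per_year = 600      # server, domain, backups
software_and_audits = 2000  # security patches, periodic data validation

annual_cost = (coordinator_hours_per_month * 12 * hourly_rate
               + hosting_per_year + software_and_audits)
print(annual_cost)  # 10700
```

Even this lean setup lands at the low end of the $10,000-$30,000 range, and most of the cost is people's time, not infrastructure.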

Funding models for maintenance:

1. Municipal or government funding. If the knowledgebase serves a public need (like a community services directory), municipal government may fund it as infrastructure. This is the most stable model but requires political will and budget prioritization.

2. Institutional partnership. A university, library, or nonprofit hosts and maintains the knowledgebase as part of its mission. This works if the institution has long-term commitment and sees the knowledgebase as aligned with its mandate.

3. Consortium or collective funding. Multiple organizations (nonprofits, funders, government agencies) contribute to a shared maintenance fund. This distributes the burden and reduces dependency on any single funder. Chapter 35.7 (Funder Relationships) discusses coalition funding models.

4. Earned revenue. Some knowledgebases charge for premium features (API access, custom reports, white-label versions) while keeping the core public data free. This works if there is a market willing to pay.

5. Volunteer labor. OpenStreetMap and Wikipedia are maintained almost entirely by volunteers. This works at massive scale because of network effects (millions of users, tens of thousands of active contributors). For a local knowledgebase, relying entirely on volunteers is risky — burnout is inevitable without paid coordination.

The hard truth: most community data projects die of funding neglect within 3-5 years. Breaking past that threshold requires securing multi-year maintenance funding before launch — not as an afterthought. Funders who care about impact should fund maintenance, not just creation. As Chapter 35.9 (Sunset Clauses) argued, if maintenance funding cannot be secured, an honest sunset plan is better than slow decay.


54.9 Volunteer Burnout and Continuity

Many community knowledgebases start with passionate volunteers. A small group commits to building and maintaining the system. For a while, it works. Then life happens. Jobs change. People move. Volunteers burn out. The knowledgebase stalls.

Burnout is not a personal failure. It is a structural issue. Volunteer-dependent systems must be designed to prevent burnout and ensure continuity.

Signs of volunteer burnout:

  • Key volunteers missing deadlines or going silent
  • Maintenance backlog growing (pending edits, unanswered questions, unflagged outdated entries)
  • Declining participation in community calls or meetings
  • Volunteers expressing frustration, exhaustion, or feeling unappreciated
  • Turnover: volunteers leaving faster than new ones join

Preventing burnout:

1. Distribute responsibility. Do not rely on a single person to do all the editorial review, all the outreach, or all the technical maintenance. Spread the load across multiple people. Document processes so anyone can step in.

2. Set realistic expectations. A volunteer with 5 hours per month cannot maintain a knowledgebase that needs 20 hours per week. Match commitments to capacity. It is better to update less frequently but sustainably than to promise daily updates and burn out in six months.

3. Recognize and appreciate contributions. Public thanks, acknowledgment pages, contributor spotlights, and simple "thank you" messages matter. Volunteers stay engaged when they feel valued.

4. Build in rest and rotation. No one should be expected to do the same maintenance task indefinitely. Rotate roles. Build in sabbaticals. Encourage stepping back without guilt.

5. Lower barriers to onboarding. If it takes weeks to learn how to contribute, new volunteers will not join. Clear documentation, simple workflows, and mentorship (pairing new volunteers with experienced ones) make onboarding easier.

Continuity planning:

What happens if the lead coordinator leaves? If the technical admin is hit by a bus? A resilient knowledgebase has a succession plan:

  • Documented processes (how to do editorial review, how to update entries, how to manage disputes)
  • Shared access (passwords, admin accounts, server access stored securely and known to multiple people)
  • Governance structure (a board, steering committee, or community council that outlasts any individual)
  • Recruitment pipeline (actively inviting new contributors, not waiting until someone leaves to find a replacement)

Code for America's Brigade network has grappled with volunteer continuity for over a decade. Some Brigades thrive for years. Others collapse when a key leader leaves. The difference is almost always: did they distribute leadership, document processes, and build a culture of shared ownership? Or did they depend on a single charismatic founder?


54.10 Synthesis and Implications

This chapter has argued that maintenance is not an afterthought — it is the work that determines whether a knowledgebase becomes community infrastructure or digital landfill.

The core insights:

  1. All community data decays. The half-life varies by content type, but no dataset stays current forever. Sustainable knowledgebases are designed for continuous updating, not one-time creation.

  2. Update cadence must match both data stability and organizational capacity. Low-stability data needs frequent updates. High-stability data needs periodic review. Promising more than you can deliver erodes trust.

  3. Crowd-sourcing distributes the maintenance burden — but requires editorial oversight. Open contribution without quality control becomes chaos. The balance is: lower barriers to contribution, maintain editorial review.

  4. Disputes are inevitable and valuable. A system for acknowledging, mediating, and resolving factual disagreements builds trust. Deleting one perspective and keeping another does not.

  5. Archiving is not deletion. Historical data has value. Entries that are no longer current should be preserved with context, not erased.

  6. Forward compatibility and migration planning are essential. Technology changes. A knowledgebase that cannot be exported, migrated, or rebuilt is fragile.

  7. Maintenance requires funding. Most community data projects die of funding neglect within 3-5 years. Breaking past that threshold requires securing multi-year maintenance funding before launch.

  8. Volunteer burnout is a structural issue, not a personal failure. Distributed responsibility, realistic expectations, recognition, rest, and continuity planning prevent burnout and ensure longevity.

The implications for practice are clear. If you are building a community mapping knowledgebase, design for maintenance from day one. Budget for it. Staff for it. Governance for it. Plan for migration, disputes, archiving, and succession. Ask the hard question: If we cannot secure maintenance funding, should we build this at all — or should we invest in strengthening an existing system instead?

Maintenance is the unglamorous work. But it is the work that makes community knowledge infrastructure real.


54.11 Maintenance Plan Workshop

Purpose: This exercise helps students, practitioners, or community teams design a realistic, sustainable maintenance plan for a community mapping knowledgebase.

Materials Needed:

  • The knowledgebase schema and content plan (from Chapters 50-51)
  • Access to sample maintenance budgets or staffing models (real examples from existing projects, or templates provided by instructor)
  • Whiteboard or collaborative document for planning

Steps:

  1. Identify content types and half-lives.
    List the major content types in your knowledgebase (e.g., service locations, contact info, event calendars, historical narratives). For each, estimate its data half-life (how long until half the entries are outdated). Label each as high-, medium-, or low-stability.

  2. Design update cadence.
    For each content type, decide: How often will this be reviewed or updated? Who will do it? What triggers an update (scheduled review, user submission, automated flag)? Be specific.

  3. Define contribution and review processes.
    Who can contribute updates? What is the process for submitting changes? What fields require editorial review before publishing? What is the target review time? Draft a simple editorial checklist.

  4. Plan for disputes.
    What is the process if two contributors disagree on a fact? Who mediates? How is the dispute documented and resolved? Write a 3-5 step dispute resolution protocol.

  5. Design archiving and migration.
    What happens when an entry becomes outdated? How is it archived? What metadata is preserved? What export formats will you support? How will you handle schema changes over time?

  6. Estimate maintenance costs and secure funding.
    Calculate annual maintenance costs: staff/contractor time, hosting, software, community engagement. At current rates, what does this cost per year? Identify three potential funding sources. Draft a one-page funding pitch for maintenance (not launch).

  7. Build a continuity plan.
    Who are the key people maintaining the knowledgebase? What happens if they leave? Document the top 5 processes that must be handed off. Identify where documentation, access credentials, and institutional knowledge currently live.

Deliverable: A 3-5 page maintenance plan covering update cadence, contribution/review processes, dispute resolution, archiving, migration, funding, and continuity. Include a 12-month maintenance calendar showing scheduled reviews and updates.

Time Estimate: 2-3 hours (can be done in a workshop setting or as a take-home assignment)

Safety and Ethics Notes: Be honest about capacity. A plan that requires 40 hours per week of volunteer labor is not sustainable. A plan that depends on a single person is fragile. Design for the resources you actually have, not the resources you wish you had.


Key Takeaways

  • All community data decays. Sustainable knowledgebases are designed for continuous updating, not one-time creation.
  • Update cadence must match both data stability (how fast it changes) and organizational capacity (how much maintenance effort is available).
  • Crowd-sourcing distributes the maintenance burden, but requires editorial oversight to maintain quality and credibility.
  • Disputes are inevitable in community knowledgebases. A fair, transparent dispute resolution process builds trust.
  • Archiving preserves historical value; deletion erases memory. Entries that are no longer current should be archived with context, not deleted.
  • Maintenance requires funding. Most community data projects die of neglect within 3-5 years without sustained financial support.

Recommended Further Reading

Foundational:

  • Suggested: Research on data stewardship, digital preservation, and the long-term sustainability of community information systems.

Academic Research:

  • Suggested: Studies on volunteer retention in civic tech projects, the economics of open data maintenance, and the sociology of collaborative knowledge production (Wikipedia, OpenStreetMap).

Practical Guides:

  • OpenStreetMap's validator guidelines and staleness-flagging protocols (real, well-documented)
  • Wikipedia's dispute resolution processes and edit-review workflows (real, extensively documented)
  • Code for America's Brigade sustainability playbooks (real, available on their website)

Case Studies:

  • Suggested: Case studies of community knowledgebases that survived 10+ years and those that collapsed within 3-5 years — comparative analysis of what made the difference.

Plain-Language Summary

Building a community knowledgebase is exciting. Maintaining it is hard. Phone numbers change. Services close. People move. Without regular updates, the knowledgebase becomes outdated and useless.

This chapter is about how to keep a knowledgebase current and trustworthy over years. It covers how often to update different types of information, how to let community members help with updates (while still checking for accuracy), how to handle disagreements about facts, and how to preserve old information without deleting it.

It also talks about the biggest challenge: funding. Most community data projects run out of money within 3-5 years and shut down. To avoid that, you need a plan for paying the ongoing costs of keeping the knowledgebase updated — before you launch, not after.

The key lesson: maintenance is not an afterthought. It is the work that determines whether a knowledgebase becomes something the community depends on or just another abandoned website.


End of Chapter 54.