Part VI · Ethics, Governance, and Power


Chapter 32: Privacy, Consent, and Sensitive Data


Chapter Overview

This chapter examines privacy as a foundational community value in mapping practice. It introduces the privacy risks inherent in spatial data, technical approaches like geomasking and k-anonymity, layered consent frameworks, protocols for sensitive locations, and privacy law requirements. Privacy in Community Mapping is not compliance theatre — it is a commitment to dignity, safety, and the principle that communities retain authority over their own information.


Learning Outcomes

By the end of this chapter, you will be able to:

  1. Explain why privacy is a core value in Community Mapping, not an obstacle
  2. Identify what constitutes personally identifiable information (PII) in spatial data
  3. Apply geomasking and k-anonymity techniques to protect individual privacy
  4. Design layered consent processes for different levels of data sharing
  5. Recognize sensitive locations that require special protection
  6. Articulate data minimization principles and their application to mapping projects
  7. Evaluate privacy law frameworks and their implications for practice

Key Terms

  • Personally Identifiable Information (PII): Information that can be used to identify a specific individual, including name, address, health records, or precise geolocation.
  • Geomasking: The practice of perturbing or aggregating geographic locations to protect privacy while preserving spatial patterns.
  • K-Anonymity: A privacy protection standard ensuring that any individual record cannot be distinguished from at least k-1 other records in the dataset.
  • Layered Consent: A consent framework offering participants different levels of data sharing, from fully anonymous to identifiable.
  • Sensitive Location: A place that, if publicly mapped at high precision, could cause harm, stigma, or safety risks to individuals who use it.

32.1 Privacy as a Community Value

Privacy is not secrecy. Privacy is the right to control who knows what about you, when, and for what purpose. In Community Mapping, privacy means that individuals and communities retain authority over their own information — that they are not mapped against their will, that their data is not used for purposes they did not agree to, and that they can withdraw participation if they change their minds.

Privacy is a community value, not just an individual right. When one person's location is exposed without consent, it can harm their family, their neighbours, and the trust that holds the community together. When a map of a vulnerable population is released without safeguards, it can invite surveillance, policing, or predatory behaviour. When Indigenous communities' cultural sites are mapped publicly without permission, it can enable exploitation, desecration, or theft. Privacy protects not only individuals, but the social fabric of community life.

Privacy is also a precondition for honest participation. If people do not trust that their information will be protected, they will not share it. If residents fear that disclosing their struggles will bring enforcement or stigma, they will remain silent. If service users worry that their data will be handed to immigration authorities or landlords, they will avoid services altogether. Privacy enables the trust required for participatory mapping.

In Community Mapping, privacy intersects with power. Those with less power — migrants, people experiencing homelessness, people with precarious legal status, survivors of violence, people using criminalized services — face greater risk when their information is exposed. Privacy protections are not equally distributed: wealthy people can afford lawyers, security, and anonymity; marginalized people often cannot. Community Mapping that fails to protect privacy reinforces existing inequities and exposes the most vulnerable to further harm.

Privacy is not an obstacle to good mapping. It is a design principle. Maps can be useful, accurate, and actionable without compromising individual privacy. The challenge is to adopt techniques that preserve spatial patterns and insights while protecting identifiable information. This chapter provides those techniques.

Some mappers resist privacy protections, arguing that "the data is already public" or "we need precision to be accurate." But public data is not the same as consented data. A person's home address may be in a property database, but that does not mean they consented to have it marked on a public vulnerability map. Precision is not the same as usefulness: a dot showing the exact location of a domestic violence shelter is precise but dangerous; a symbol showing "shelter services available in this postal code" is less precise but safer and still useful for planning.

Privacy as a community value means that when in doubt, protect. When the risk is unclear, ask. When consent is ambiguous, do not proceed. The burden of proof is on the mapper to demonstrate that sharing data will cause no harm — not on the community to prove that harm might occur.


32.2 Personally Identifiable Information in Maps

Personally identifiable information (PII) is any information that can be used to identify a specific individual. In conventional datasets, PII includes obvious identifiers: name, address, phone number, email, health card number, social insurance number. But in spatial data, identification can occur through indirect means — often in ways mappers do not anticipate.

A precise point on a map showing where someone lives is PII. Even if no name is attached, combining the location with other publicly available data (property records, social media check-ins, street view imagery) can reveal identity. This is called re-identification risk: the possibility that anonymized data can be linked back to individuals through triangulation with other datasets.

Re-identification risk is not hypothetical. Researchers have repeatedly demonstrated that anonymized location data can be re-identified with startling accuracy. In one study, knowing just four location-time pairs (for example, home, work, and two other regular stops) was sufficient to uniquely identify 95% of individuals in a mobility dataset. In another, researchers re-identified patients in a public health dataset by matching anonymized hospital locations with news reports of ambulance arrivals.

In Community Mapping, PII can arise in multiple ways:

  • Home locations. Mapping where individuals live, especially in low-density areas where households are spatially distinct.
  • Service use locations. Mapping where individuals access health services, social assistance, addiction treatment, or mental health support.
  • Movement patterns. Mapping routes, travel times, or regular stops that reveal identity through routine.
  • Attribute combinations. Mapping clusters of demographic attributes (age, gender, language, immigration status) that, combined with location, narrow down to identifiable individuals.
  • Small area counts. Reporting counts for very small geographies (single street, single building) where low numbers make individuals identifiable.
  • Photographs. Including images that show identifiable people, vehicles, or property without consent.

Even aggregate data can reveal PII. A map showing "3 seniors with mobility challenges on this block" is not anonymous if the block has only 10 households — neighbours will know who is being counted. A map showing "2 youth accessing mental health services" in a small school is identifiable. Aggregation protects privacy only when the group is large enough and diverse enough to provide anonymity.

Community Mapping practitioners must develop a habit of PII scanning before releasing any map. Ask: Could this dataset, combined with other information, reveal who someone is? Could it reveal where they live, where they go, or what services they use? Could it expose them to harm, stigma, or unwanted contact? If the answer is yes, the data requires protection.


32.3 Geomasking and K-Anonymity

Privacy-preserving spatial techniques allow mappers to present useful patterns without exposing individuals. Two foundational techniques are geomasking and k-anonymity.

Geomasking is the practice of perturbing or obscuring precise geographic locations to prevent re-identification. Common geomasking methods include the following (a short code sketch of two of these methods appears after the list):

  • Random perturbation: Shifting each point a random distance and direction. The shift distance is typically set large enough to prevent re-identification but small enough to preserve spatial patterns.
  • Aggregation to boundaries: Reporting data at a coarser geographic level (neighbourhood, postal code, census tract) rather than individual addresses or parcels.
  • Grid masking: Snapping points to the centre of a grid cell (e.g., 500m × 500m squares) so that the exact location is obscured but the general area is represented.
  • Donut masking: For sensitive locations like shelters or safe houses, suppressing data within a defined radius (e.g., 1 km) while still showing the presence of the service at a regional level.
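To make the first two techniques concrete, here is a minimal sketch of random perturbation and grid masking in Python. It assumes coordinates in a projected coordinate system measured in metres (perturbing raw latitude/longitude degrees would distort distances); the function names and shift parameters are illustrative, not a standard API.

    import math
    import random

    def random_perturbation(x, y, min_shift=200.0, max_shift=500.0):
        # Shift the point a random distance (within bounds) in a random
        # direction; min_shift > 0 guarantees every point actually moves.
        angle = random.uniform(0.0, 2.0 * math.pi)
        distance = random.uniform(min_shift, max_shift)
        return x + distance * math.cos(angle), y + distance * math.sin(angle)

    def grid_mask(x, y, cell_size=500.0):
        # Snap the point to the centre of its grid cell: the exact location
        # is obscured but the general area is preserved.
        col = math.floor(x / cell_size)
        row = math.floor(y / cell_size)
        return (col + 0.5) * cell_size, (row + 0.5) * cell_size

    # Mask points before they are ever written to a shared file.
    masked_x, masked_y = random_perturbation(521340.0, 4812760.0)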

The choice of geomasking method depends on the context. For public health data showing disease incidence, random perturbation of 200-500 metres may preserve spatial patterns while protecting individual addresses. For domestic violence service locations, donut masking or full suppression of precise locations is necessary. For Indigenous cultural sites, the community may decide not to map at all, or to represent sites only in internal-use maps governed by community protocols.

K-anonymity is a formal privacy standard requiring that any individual record in a dataset cannot be distinguished from at least k-1 other records. For example, if k=5, then any person in the dataset must be indistinguishable from at least 4 others based on the available attributes.

In spatial data, k-anonymity typically means that:

  • No location contains fewer than k individuals.
  • No combination of location and attributes (age, gender, service type) identifies fewer than k individuals.

If a map cell contains only 2 individuals, and k=5, those records must be suppressed or aggregated with adjacent cells until the count meets the threshold. If a neighbourhood has 8 seniors receiving home care, but only 1 is Indigenous, reporting "1 Indigenous senior" violates k-anonymity for k≥5. The data would need to be suppressed or aggregated to a larger area where the count is higher.
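As a minimal sketch, small-cell suppression for k-anonymity can be as simple as the following, assuming k=5 and hypothetical area identifiers:

    K = 5  # anonymity threshold

    counts = {"tract_001": 12, "tract_002": 3, "tract_003": 8, "tract_004": 2}

    def suppress_small_cells(counts, k=K):
        # Withhold (None) any count below the threshold rather than publish
        # it; in practice, suppressed cells are then merged with adjacent
        # areas until the combined count meets the threshold.
        return {area: (n if n >= k else None) for area, n in counts.items()}

    released = suppress_small_cells(counts)
    # {'tract_001': 12, 'tract_002': None, 'tract_003': 8, 'tract_004': None}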

K-anonymity has limitations. It protects against re-identification through attribute matching but does not protect against all inference attacks. It can result in significant data suppression, reducing the utility of the map. And it does not address the risk of sensitive attribute disclosure — for example, even if you cannot identify which of 10 people accessed addiction services, the fact that someone in that group did may itself be harmful.

Despite these limits, k-anonymity provides a practical, defensible standard. Many public health agencies, government statistical offices, and privacy regulators use k-anonymity thresholds (commonly k=5 or k=10) as a baseline requirement for releasing small-area data.

In practice, geomasking and k-anonymity are often used together: perturb locations to prevent address-level identification, and suppress or aggregate counts to ensure group anonymity. The result is a map that shows useful spatial patterns without exposing individuals.


32.4 Layered Consent

Informed consent is the ethical and legal requirement that individuals understand what they are agreeing to and voluntarily choose to participate. But consent in Community Mapping is more complex than a single yes-or-no question. Different participants may be comfortable with different levels of sharing. A layered consent framework offers multiple options, allowing people to choose how their information is used.

A layered consent model might offer:

Level 1: Fully anonymous contribution.
Data is aggregated or anonymized before analysis. No identifiable information is retained. The participant's contribution is included in counts, patterns, and summary statistics, but cannot be traced back to them. This is appropriate for sensitive topics, vulnerable populations, or when participants have privacy concerns.

Level 2: Pseudonymous participation.
Data is linked to a pseudonym or participant code, not a real name. This allows for longitudinal tracking (seeing the same participant over time) and validation without identifying individuals. Researchers hold a secure key linking codes to identities, but public outputs contain no identifiable information.

Level 3: Named participation, internal use only.
Participant names and identifiable information are retained for internal project purposes (contact, follow-up, validation) but are not published. Maps and reports shared publicly use anonymized or aggregated data.

Level 4: Named participation with public attribution.
Participants agree to be publicly identified. Their names, stories, or locations appear in public outputs. This is appropriate when participants want recognition, are advocating for change, or when naming is central to the project (e.g., memorial mapping, community storytelling).

Layered consent respects autonomy. Not everyone wants the same level of privacy. Some participants may want their voices heard and their names known; others may want to contribute but remain invisible. A one-size-fits-all approach fails both groups.

Layered consent also allows projects to include people who would otherwise be excluded. If the only option is full public identification, many vulnerable people will not participate. If the only option is total anonymity, advocates and storytellers cannot be recognized. Offering a menu of options increases inclusivity.

Implementing layered consent requires the following (a short sketch of separate data streams appears after the list):

  • Clear communication. Consent forms and conversations must explain what each level means, what data will be shared, who will see it, and how long it will be retained.
  • Separate data streams. Data collected under different consent levels must be stored and processed separately to prevent accidental disclosure.
  • Revocable consent. Participants must be able to change their consent level or withdraw entirely, and the project must have systems to honour those changes.
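As a minimal sketch of separate data streams keyed to the four levels above (the store names and record fields are hypothetical):

    from enum import Enum

    class ConsentLevel(Enum):
        ANONYMOUS = 1       # Level 1: anonymized before analysis
        PSEUDONYMOUS = 2    # Level 2: linked to a participant code only
        NAMED_INTERNAL = 3  # Level 3: identifiable, internal use only
        NAMED_PUBLIC = 4    # Level 4: identifiable with public attribution

    # Each consent level routes to its own store, so records collected under
    # different terms are never mixed in a single file.
    STORES = {
        ConsentLevel.ANONYMOUS: "anonymous_counts.db",
        ConsentLevel.PSEUDONYMOUS: "coded_records.db",
        ConsentLevel.NAMED_INTERNAL: "internal_roster.db",
        ConsentLevel.NAMED_PUBLIC: "public_attributions.db",
    }

    def route_record(record: dict, level: ConsentLevel) -> str:
        # Strip direct identifiers before anything enters the anonymous stream.
        if level is ConsentLevel.ANONYMOUS:
            for field in ("name", "address", "phone"):
                record.pop(field, None)
        return STORES[level]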

Consent is not a one-time event. It is an ongoing relationship. As projects evolve, new uses of data may emerge. Before using data for purposes not originally consented to, mappers must return to participants and seek renewed consent.


32.5 Withdrawal and Right to Be Forgotten

Privacy rights include the right to withdraw consent and, in some jurisdictions, the right to be forgotten — the right to have one's data deleted from systems and no longer processed.

In Community Mapping, withdrawal can be complicated. If a participant withdraws, what happens to their data? If it has already been aggregated into summary statistics, it may be technically impossible to remove their specific contribution without re-running the entire analysis. If their story has been published in a report, it cannot be unpublished — but it can be redacted from future editions, and the project can stop using it in new outputs.

Best practice for withdrawal includes:

Clear withdrawal procedures.
Consent forms should explain how to withdraw, what will happen to data upon withdrawal, and what cannot be undone. Provide multiple ways to withdraw (email, phone, in-person).

Data removal from active systems.
Upon withdrawal, the participant's identifiable data is deleted from active project databases, contact lists, and analysis files. Pseudonymized or anonymized data that has already been aggregated may remain, but no new processing occurs.
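In code, this data-removal step might look like the following minimal sketch, where the store names are hypothetical and a do-not-process flag halts any new processing of data that cannot be deleted:

    def withdraw(code: str, contacts: dict, coded_records: dict,
                 do_not_process: set) -> None:
        # Delete identifiable data from active stores...
        contacts.pop(code, None)
        coded_records.pop(code, None)
        # ...and flag the participant code so no new processing occurs.
        # Aggregates already published cannot be unwound; that limit must be
        # disclosed in the consent process.
        do_not_process.add(code)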

Redaction from future outputs.
If the participant's story, name, or identifiable information appears in published materials, it is not included in future versions. The project makes reasonable efforts to notify distributors and request removal.

Honest limits.
If data cannot be fully removed (e.g., because it has been shared with partners or published in permanent records), the project must acknowledge this limitation upfront in the consent process.

The right to be forgotten is legally enshrined in some jurisdictions (notably the European Union's GDPR) and applies to data held by organizations, not just governments. In Canada, federal privacy law (PIPEDA) does not use the term "right to be forgotten" but does require that organizations delete or anonymize personal information once it is no longer needed for the purpose for which it was collected.

In practice, the right to be forgotten raises hard questions: What if a participant in a longitudinal health study withdraws, but removing their data would compromise the validity of published research? What if a storyteller withdraws consent, but their story has been cited in policy reports? There is no universal answer. The principle is: respect withdrawal to the greatest extent possible, be honest about limits, and design systems that make withdrawal technically feasible.


32.6 Sensitive Locations

Some places, if mapped publicly, can cause harm. These are sensitive locations — places where the act of precise mapping itself creates risks to safety, dignity, or autonomy.

Examples of sensitive locations include:

Supervised consumption sites.
Facilities where people use drugs under medical supervision. Mapping these at address-level precision can stigmatize users, invite protests, or enable surveillance by police or hostile neighbours.

Women's shelters and domestic violence services.
Survivors of violence rely on confidential shelter locations. Publicly mapping shelters can enable abusers to locate survivors, putting lives at risk.

Immigration legal aid and settlement services.
In contexts where immigration enforcement is aggressive, mapping the precise location of legal aid offices or settlement services can expose migrants to risk.

Abortion and reproductive health providers.
In regions where abortion is contested or criminalized, mapping providers can invite harassment, violence, or legal action.

Supervised housing and supportive services.
Facilities providing housing or support to people experiencing mental health challenges, addiction, or homelessness. Precise mapping can enable NIMBYism (not-in-my-backyard opposition) or vigilante harassment.

Places of worship during prayer times.
In regions where religious minorities face persecution or hate crimes, precise mapping of mosques, synagogues, temples, or churches can create security risks.

Treatment facilities.
Addiction treatment centres, mental health facilities, or HIV clinics. Mapping these at high precision can identify users through proximity and routine.

For sensitive locations, the default approach is not to map publicly. If mapping is necessary for planning or service coordination, it should be done in restricted-access systems with clear governance about who can see the data, for what purpose, and with what safeguards.

Alternative approaches include:

  • Postal code or neighbourhood level only. Show that a service exists in a general area without specifying the address.
  • Aggregated service types. Show "health services" without specifying that a clinic provides addiction treatment or abortion care.
  • Internal maps only. Maintain detailed maps for coordination and planning but do not publish them.
  • Community-controlled access. Allow communities to decide who can see the map and under what conditions.

The determination of what is "sensitive" is not universal. A supervised consumption site in Vancouver may be publicly mapped because the community and local government support it; the same service in a rural area with hostile local politics may require full confidentiality. A mosque in Toronto may be publicly listed; a mosque in a region with rising hate crimes may prefer to remain unlisted. Sensitivity is context-dependent, and the community in question must have authority over the decision.

Mappers must also recognize emergent sensitivity — situations where a location becomes sensitive over time due to changing political conditions. A map produced in one legal or political context may become dangerous in another. Projects must have mechanisms to review and revoke public data if conditions change.


32.7 Children, Elders, and Vulnerable Subjects

Certain populations require heightened privacy protections because they have reduced capacity to consent, face greater risk of harm, or are subject to paternalistic systems that may misuse their data.

Children and youth.
Minors cannot provide legal consent in most jurisdictions; parents or guardians consent on their behalf. But parental consent does not override a young person's right to privacy. If a youth participates in a mapping project about mental health or substance use, their parents may not have a right to access that data. Projects involving youth must navigate dual protections: parental consent for participation, and youth privacy regarding content.

Schools are high-risk settings for privacy breaches. A school-based mapping project that collects data on students' health, family circumstances, or safety concerns must ensure that data does not flow to administrators, police, or child welfare authorities without clear protocols and student consent. In practice, this often means that sensitive youth mapping projects do not happen in schools at all, but in community settings where youth have more control.

Elders.
Older adults may face cognitive decline, social isolation, or coercion from family members. Consent processes must confirm that participation is voluntary and that the elder understands what they are agreeing to. If an elder's data includes information about assets, living arrangements, or health conditions, privacy protections are critical to prevent financial exploitation or unwanted placement in care facilities.

In Indigenous contexts, elders hold cultural authority, and their knowledge is often considered community property, not individual property. Mapping projects working with Indigenous elders must follow community protocols about who owns the knowledge, how it can be shared, and whether it can be recorded at all.

People with cognitive or mental health challenges.
Individuals experiencing psychosis, dementia, severe depression, or intellectual disabilities may have fluctuating or limited capacity to consent. Projects must assess capacity on a case-by-case basis and provide supported decision-making where appropriate. In some cases, a legal substitute decision-maker (guardian, trustee) may be involved — but this introduces power dynamics that must be carefully managed.

People in institutional settings.
People in prisons, psychiatric facilities, or long-term care may face coercion or pressure to participate. They may fear that refusal will result in punishment or loss of privileges. Consent in institutional settings is inherently complicated by power imbalances. Mapping projects in these settings require extra scrutiny, independent oversight, and strong assurances that participation is voluntary.

Heightened protections do not mean exclusion. Children, elders, and people with disabilities have the right to participate in Community Mapping. But their participation must be structured with additional safeguards, supported decision-making, and clear ethical oversight.


32.8 Data Minimization

Data minimization is the principle that projects should collect only the data necessary to achieve their stated purpose, and retain it only as long as needed. More data is not better. More data is more risk.

Every piece of personal information collected creates a potential privacy breach. Every attribute increases the risk of re-identification. Every extra field in a database is another thing that must be secured, governed, and eventually deleted. Data minimization reduces risk by reducing what exists to protect.

In Community Mapping, data minimization means:

Do not collect identifiers unless required.
If the project does not need participants' names, addresses, or phone numbers, do not collect them. Use participant codes or anonymous survey methods.

Do not collect sensitive attributes unless necessary.
If the project is about food access, do not collect immigration status, health conditions, or household income unless directly relevant. Each additional attribute increases re-identification risk.

Aggregate early.
If the analysis will ultimately report counts at the neighbourhood level, do not retain address-level data. Aggregate during data entry, so that precise locations are never stored.
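A minimal sketch of aggregation at entry, where lookup_neighbourhood stands in for a point-in-polygon lookup against real neighbourhood boundaries:

    def lookup_neighbourhood(lat: float, lon: float) -> str:
        # Stand-in for a point-in-polygon test against neighbourhood
        # boundaries; here we simply bucket to a coarse grid label.
        return f"cell_{round(lat, 2)}_{round(lon, 2)}"

    def record_entry(lat: float, lon: float, counts: dict) -> None:
        # Increment the neighbourhood-level count and discard the coordinates;
        # the precise location is never written to storage.
        area = lookup_neighbourhood(lat, lon)
        counts[area] = counts.get(area, 0) + 1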

Delete data when the purpose is complete.
If a project timeline is 2 years, personal information should be deleted at the end of year 2 unless there is a compelling, consented reason to retain it. Published outputs (maps, reports) can be retained, but the underlying identifiable data should not be kept indefinitely.

Limit access.
Data minimization also applies to access. If a project has 10 team members, not all 10 need access to identifiable data. Role-based access controls ensure that only those who need to see personal information can see it.
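A minimal sketch of role-based access, with hypothetical role names and fields:

    IDENTIFIABLE_FIELDS = {"name", "address", "phone"}
    ROLES_WITH_PII_ACCESS = {"coordinator", "data_steward"}

    def view_record(record: dict, role: str) -> dict:
        # Every team member sees the record; only authorized roles see the
        # identifiable fields.
        if role in ROLES_WITH_PII_ACCESS:
            return dict(record)
        return {k: v for k, v in record.items() if k not in IDENTIFIABLE_FIELDS}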

Data minimization is not just good ethics — it is increasingly a legal requirement. PIPEDA in Canada requires that personal information be limited to what is necessary and proportionate. GDPR in Europe enshrines data minimization as a core principle. Privacy regulators increasingly scrutinize organizations that hoard data "just in case" or retain information indefinitely without clear purpose.

In practice, data minimization requires discipline. It is easier to collect everything and decide later what to use. But that approach creates unnecessary risk. The discipline is to define the purpose first, identify the minimum data required, collect only that, and delete it when done.


32.9 Data Breach Response

No system is perfectly secure. Data breaches happen — through hacking, human error, lost devices, malicious insiders, or systemic failure. Community Mapping projects must plan for breach response before a breach occurs.

A data breach is any unauthorized access, disclosure, or loss of personal information. Breaches include:

  • A laptop containing identifiable participant data is stolen.
  • A database is hacked and participant names, addresses, or service use data are accessed.
  • A staff member accidentally emails a participant list to the wrong recipient.
  • A map with insufficiently anonymized data is published, and community members recognize individuals.
  • A partner organization shares project data with a third party without authorization.

Privacy laws in most jurisdictions require that breaches be reported to affected individuals and, in some cases, to regulatory authorities. In Canada, PIPEDA requires breach notification to the Office of the Privacy Commissioner if the breach creates "a real risk of significant harm." GDPR requires breach notification within 72 hours of discovery.

A breach response plan should include:

Immediate containment.
Stop the breach. Shut down compromised systems, revoke access, retrieve lost devices where possible, and prevent further data leakage.

Assessment.
Determine what data was compromised, how many people are affected, and what harm might result. If the breach exposed anonymous aggregate data, the harm may be low. If it exposed names, addresses, and sensitive health data, the harm may be severe.

Notification.
Inform affected individuals as quickly as possible. Explain what happened, what data was compromised, what harm might result, and what steps the project is taking. Provide contact information for questions and support.

Regulatory reporting.
If required by law, report the breach to privacy authorities. In Canada, this means the Office of the Privacy Commissioner. In the EU, this means the national data protection authority. In some provinces, this means provincial privacy commissioners. Some jurisdictions have mandatory timelines (e.g., 72 hours).

Mitigation.
Take steps to reduce harm. This might include offering credit monitoring (if financial data was exposed), providing security advice, coordinating with law enforcement (if a crime occurred), or removing publicly posted data.

Review and learn.
After the immediate crisis, conduct a post-incident review. What went wrong? What controls failed? What changes are needed to prevent future breaches? Update policies, train staff, and strengthen systems.

Breach response requires honesty. It is tempting to downplay a breach, delay notification, or avoid regulatory reporting. But transparency is both a legal requirement and an ethical obligation. Community members trust projects with their data. When that trust is violated, the project must take responsibility and act to repair harm.


32.10 Privacy Law Frameworks

Privacy protections in Community Mapping are shaped by legal frameworks that vary by jurisdiction. While this textbook does not provide legal advice, practitioners must have a working knowledge of applicable laws.

Canada: PIPEDA and provincial laws.
PIPEDA (Personal Information Protection and Electronic Documents Act) is Canada's federal private-sector privacy law. It applies to organizations collecting, using, or disclosing personal information in the course of commercial activity. PIPEDA requires informed consent, limits on collection and use, accuracy, security safeguards, and individual access to their own data.

Several provinces have equivalent laws: Alberta's PIPA, British Columbia's PIPA, and Quebec's Law 25 (which recently expanded Quebec's privacy regime significantly). In provinces without equivalent legislation, PIPEDA applies.

Public-sector organizations (government agencies, municipalities, universities) are generally governed by public-sector privacy laws such as Ontario's FIPPA (Freedom of Information and Protection of Privacy Act) or BC's FIPPA. These laws impose similar principles but with additional transparency and access-to-information obligations.

European Union: GDPR.
The General Data Protection Regulation (GDPR), in force since 2018, is one of the world's most comprehensive privacy laws. It applies to any organization processing personal data of EU residents, regardless of where the organization is located. GDPR enshrines principles including lawful basis for processing, data minimization, purpose limitation, security, accountability, and the rights of individuals to access, rectify, or delete their data.

For Community Mapping projects involving EU participants or partners, GDPR compliance is mandatory. This typically means obtaining explicit consent, conducting data protection impact assessments (DPIAs) for high-risk processing, appointing a data protection officer in some cases, and implementing technical safeguards.

United States: sectoral approach.
The U.S. does not have a comprehensive federal privacy law equivalent to PIPEDA or GDPR. Instead, privacy is governed by sector-specific laws: HIPAA (health data), FERPA (education records), COPPA (children's online data), and state laws like California's CCPA and Virginia's CDPA. Community Mapping projects working in the U.S. must identify which sectoral laws apply and comply accordingly.

Indigenous data sovereignty.
Privacy law frameworks developed by settler governments often fail to reflect Indigenous legal orders, collective rights, and community governance. Indigenous communities assert data sovereignty — the right to govern data about their people, territories, and knowledge according to their own laws and protocols.

OCAP (Ownership, Control, Access, Possession) principles, developed by the First Nations Information Governance Centre, articulate this framework: Indigenous communities own their data, control how it is collected and used, decide who can access it, and physically possess it. Privacy protections in Indigenous contexts must align with OCAP, not just PIPEDA or provincial law.

Privacy law is evolving rapidly. New regulations, court decisions, and international standards emerge regularly. Community Mapping practitioners cannot be expected to be privacy lawyers, but they must stay informed and seek legal guidance when navigating complex or high-risk projects.


32.11 Synthesis and Implications

Privacy in Community Mapping is not a checklist. It is a practice, a relationship, and a commitment. This chapter has presented technical methods (geomasking, k-anonymity), governance structures (layered consent, withdrawal rights), and legal baselines (PIPEDA, GDPR, sectoral laws). But synthesis requires stepping back and asking: what does privacy-first Community Mapping actually look like in practice?

It looks like mappers who start by asking "Do we need this data?" before "How do we collect this data?" It looks like projects that offer participants choices, not ultimatums. It looks like maps that show patterns without exposing people. It looks like data that is deleted when no longer needed, not hoarded indefinitely. It looks like breach response plans written before the first dataset is collected, not after a crisis hits.

Privacy-first mapping also looks like discomfort. It means accepting that some questions cannot be answered with the data you can ethically collect. It means rejecting precision when precision creates risk. It means saying no to funders or partners who want data you cannot share. It means slowing down when moving fast would compromise consent. These are not failures — they are signs of ethical maturity.

The implications extend beyond individual projects. Privacy norms in Community Mapping shape the field. If one high-profile project exposes participants to harm through careless data practices, it undermines trust for all subsequent projects. If mappers routinely publish address-level data without consent, communities learn not to participate. If projects treat privacy as an afterthought, we train a generation of practitioners who do not know how to do better.

Building a culture of privacy requires:

Education. Privacy protection must be taught in every Community Mapping course, included in every practitioner guide, and modelled in every case study.

Infrastructure. Privacy-preserving tools (anonymization software, secure data platforms, consent management systems) must be accessible, affordable, and easy to use.

Accountability. Projects that violate privacy norms must be called out, not excused. Funders should require privacy impact assessments. Ethics boards should scrutinize consent processes. Community members should hold mappers accountable.

Leadership. Senior practitioners, academics, and institutional actors must model privacy-first practices and refuse to participate in projects that do not meet baseline standards.

The synthesis is this: privacy is not in tension with good Community Mapping. Privacy is what makes good Community Mapping possible. It enables trust, protects the vulnerable, respects autonomy, and ensures that communities retain authority over their own stories. Privacy is a community value — and it must be central to the practice.


32.12 Privacy Impact Assessment Exercise

Purpose:
This exercise guides you through conducting a Privacy Impact Assessment (PIA) on a real or hypothetical Community Mapping project. PIAs are structured evaluations of privacy risks and mitigation strategies, increasingly required by funders, ethics boards, and privacy regulators.

Materials Needed:

  • A Community Mapping project (real or hypothetical) to assess
  • Access to project plans, data collection methods, and consent materials
  • This PIA template

Steps:

  1. Describe the project.
    What is the purpose? What will be mapped? Who are the participants? What data will be collected? How will it be used and shared?

  2. Identify personal information.
    List all types of personal information the project will collect. Include direct identifiers (names, addresses) and indirect identifiers (precise locations, demographic attributes). Mark which are essential and which are optional.

  3. Assess re-identification risk.
    For each type of spatial or demographic data, estimate re-identification risk. Could this data, combined with public information, reveal identities? Rate risk as low, moderate, or high.

  4. Identify sensitive locations or populations.
    Are any mapped locations sensitive (shelters, health services, places of worship)? Are any participants from vulnerable groups (children, migrants, people with precarious status)? Flag these for heightened protections.

  5. Evaluate consent process.
    How will consent be obtained? Is it informed, voluntary, and ongoing? Does it offer layered options? Can participants withdraw? Is the consent form clear and accessible?

  6. Review technical safeguards.
    What privacy-preserving techniques will be used (aggregation, geomasking, k-anonymity)? How will data be stored and secured? Who will have access? Are there encryption, access controls, and retention limits?

  7. Assess legal compliance.
    What privacy laws apply (PIPEDA, GDPR, provincial laws, Indigenous protocols)? Does the project meet legal requirements for consent, security, breach notification, and data minimization?

  8. Identify residual risks.
    Even with mitigations in place, what risks remain? Could data be misused by partners? Could a breach occur? Could the map stigmatize a community?

  9. Develop mitigation plan.
    For each identified risk, propose a mitigation strategy. This might include additional anonymization, restricting access, adding community governance, or deciding not to collect certain data.

  10. Document and review.
    Write a 2-3 page PIA summary covering steps 1-9. Share it with project stakeholders, community partners, and (if applicable) an ethics board. Revise based on feedback.

Deliverable:
A completed Privacy Impact Assessment (2-3 pages) for your chosen project.

Time Estimate:
2-3 hours

Safety and Ethics Notes:
Do not conduct this exercise on a real project without permission from project leadership and, if required, ethics board approval. If using a hypothetical project, base it on realistic scenarios but do not use real participant data or identifiable details.


Key Takeaways

  • Privacy is a community value, not an obstacle — it enables trust, protects the vulnerable, and respects autonomy.
  • Personally identifiable information in spatial data includes not just names and addresses, but also precise locations, movement patterns, and attribute combinations that allow re-identification.
  • Geomasking and k-anonymity are foundational techniques for protecting privacy while preserving spatial patterns.
  • Layered consent frameworks offer participants choices about how their data is shared, increasing inclusivity and respect for autonomy.
  • Sensitive locations (shelters, health services, cultural sites) require special protection and often should not be mapped publicly.
  • Privacy law frameworks (PIPEDA, GDPR, Indigenous data sovereignty) impose legal and ethical obligations that vary by jurisdiction and must be understood before beginning any project.

Recommended Further Reading

Foundational:

  • Office of the Privacy Commissioner of Canada. (2020). PIPEDA in Brief. Ottawa: Government of Canada.
  • First Nations Information Governance Centre. OCAP Principles.
  • Sweeney, L. (2002). k-Anonymity: A Model for Protecting Privacy. International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems, 10(5), 557-570.

Academic Research:

  • Dwork, C., & Roth, A. (2014). The Algorithmic Foundations of Differential Privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
  • VanWey, L. K., Rindfuss, R. R., Gutmann, M. P., Entwisle, B., & Balk, D. L. (2005). Confidentiality and Spatially Explicit Data: Concerns and Challenges. Proceedings of the National Academy of Sciences, 102(43), 15337-15342.
  • Suggested: Research on geomasking techniques, privacy-preserving spatial analysis, and health data confidentiality.

Practical Guides:

  • UK Information Commissioner's Office. (2014). Anonymisation: Managing Data Protection Risk Code of Practice.
  • European Data Protection Board. Guidelines on Data Protection Impact Assessment (DPIA).
  • Suggested: Practitioner guides on privacy impact assessments, consent forms, and GDPR compliance for community-based research.

Case Studies:

  • Suggested: Case studies of privacy breaches in spatial data, lessons from public health mapping, and Indigenous data governance examples.

Plain-Language Summary

Privacy in Community Mapping means respecting people's right to control who knows what about them. When you map where people live, what services they use, or where they go, you're handling sensitive information. If that information gets out, it can harm people — by exposing them to stigma, surveillance, or danger.

Good Community Mapping protects privacy by using techniques like geomasking (hiding exact locations) and k-anonymity (making sure no one can be picked out from a group). It also means asking people for permission in clear, honest ways, giving them choices about how their information is shared, and letting them change their minds later.

Some places should never be mapped publicly — like women's shelters, supervised drug-use sites, or places where vulnerable people go for help. Mapping these places precisely can put people at risk. Instead, you show that services exist in a general area without giving away the exact address.

Privacy isn't about hiding information. It's about making sure communities stay in control of their own data — and that mapping helps people without causing harm.


End of Chapter 32.