AI Usage & Safety Policy for Nonprofits
Generative AI is rapidly becoming part of everyday work. While it offers new efficiencies and insights, it also carries risks if used carelessly. This policy provides clear guidance to ensure our organization uses AI responsibly, protects sensitive data, and maintains public trust. By following these standards, we safeguard both our mission and the people we serve.
(Template – Adaptable for Your Organization, from Mission Metrics)
1. Purpose
This policy establishes clear guidelines for the safe, ethical, and effective use of Artificial Intelligence (AI) tools by [Organization Name]. The purpose is to protect our employees, clients, donors, and mission while ensuring that AI enhances — not replaces — human decision-making.
2. Scope
This policy applies to all staff, contractors, and volunteers who use AI tools (e.g., ChatGPT, Claude, Gemini, Copilot) for organizational purposes, whether on personal or organizational devices.
3. Guiding Principles
- Mission First: AI must always serve our mission, never distract from or dilute it.
- Human Oversight: AI supports; people decide. Final responsibility rests with staff.
- Confidentiality: Client, donor, and staff personal data must never be entered into AI tools.
- Transparency: AI-generated content must be reviewed, edited, and approved before public release.
- Equity & Fairness: Staff should check outputs for bias, stereotypes, or discriminatory content.
4. Acceptable Uses (with Examples)
AI can be a productivity tool when used responsibly:
- Communications
- Drafting first versions of newsletters, press releases, and social posts.
- Example: “Write a draft social media post thanking volunteers for helping with our food drive. Keep it under 100 words, warm, and professional.”
- Research & Summarization
- Summarizing grant eligibility requirements or reports.
- Example: “Summarize the key requirements of this grant RFP in bullet points.”
- Operations
- Drafting agendas, meeting recaps, checklists.
- Example: “Create a meeting agenda for a 1-hour board committee meeting on fundraising strategy.”
- Idea Generation
- Brainstorming themes for fundraising campaigns or volunteer engagement.
- Example: “Suggest five campaign themes to engage donors around back-to-school support for children.”
5. Prohibited Uses (with Red Flags)
AI must not be used in ways that put data security, compliance, or our reputation at risk.
Do NOT enter or use AI for:
- Personally Identifiable Information (PII): client names, addresses, SSNs, case notes, donor records.
- Health or financial data subject to HIPAA or IRS compliance.
- Legal, financial, or medical advice.
- Final publishing without staff review.
⚠️ Red Flag Prompts to Avoid
- “Draft a case note for John Smith, age 12, who came to our shelter last night.”
- “Analyze this donor database and recommend top prospects.”
- “Tell me what services this client qualifies for based on their personal circumstances.”
6. Data Security & Compliance
- Only anonymized or fictionalized examples may be entered into AI.
- Example: “Create a case study of a fictional family facing food insecurity, based on real but anonymous trends.”
- Outputs must follow organizational recordkeeping rules.
- Public-facing drafts generated by AI must be reviewed by staff before release.
- All usage must comply with applicable regulations (HIPAA, GDPR, IRS nonprofit standards).
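For organizations that script parts of their workflow, the anonymization rule above can be automated before text ever reaches an AI tool. The sketch below is illustrative only: the patterns and placeholder labels are assumptions for demonstration, not an exhaustive or compliance-grade filter, and it should supplement (not replace) human review.

```python
import re

# Minimal sketch: replace common PII patterns with placeholder labels
# before text is pasted into an AI tool. Patterns are illustrative
# assumptions, not a complete or compliance-grade filter.
PII_PATTERNS = {
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Substitute each recognized PII pattern with its label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(label, text)
    return text

note = "Reached client at 555-123-4567 (maria@example.org), SSN 123-45-6789."
print(redact(note))
# → Reached client at [PHONE] ([EMAIL]), SSN [SSN].
```

A real deployment would need patterns tuned to the organization's data (names, addresses, case numbers) and a human check on the output, since regex filters miss context-dependent identifiers.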
7. Accountability & Oversight
- Department Heads: Ensure staff compliance in daily operations.
- AI Coordinator (or Champion):
- Tracks AI adoption.
- Provides training and updates.
- Reports annually to leadership on risks/benefits.
- Incident Reporting: Any accidental disclosure or misuse must be reported immediately to supervisors.
8. Training & Implementation
- All staff receive introductory training on AI, including:
- Safe prompts.
- Recognizing biased or harmful outputs.
- Spotting overreliance or errors (“hallucinations”).
- Refresher sessions occur annually or when major tools change.
- Quick-reference “Do & Don’t” cards are distributed to staff.
9. Review Cycle
This policy will be reviewed annually and updated as technology, laws, and organizational needs evolve.
Appendix A – Quick Reference
✅ Safe Prompts
- “Draft a volunteer appreciation letter (no names included).”
- “Summarize this 20-page report into key bullet points.”
- “List 10 blog post ideas about Preventive Poverty™.”
❌ Unsafe Prompts
- “Summarize Maria Lopez’s case file and recommend services.”
- “Generate a donor prospect list from our CRM export.”
- “Write medical advice for a client with diabetes.”