AI Usage at the Compost Education Centre
August 16, 2025
Every week, I receive surveys, webinar invites, and articles about nonprofit AI usage. I’m not an expert, but it’s not difficult to see how tools like ChatGPT are polluting both the internet and the environment.
With every iterative use of a large language model, we move farther away from authentic human creativity and writing. AI tools are increasingly fed AI-generated content (and so on, to the nth degree) to generate new content. As the quantity of AI-generated content grows, the proportion of human-generated content fed into models shrinks. And without high-quality human inputs, large language models deteriorate, and the internet will become (or arguably already has become) littered with lower-quality, less diverse information. Yuck! And if you care about the environment, let's remember the staggering electricity demand and water consumption wrapped up in AI usage.
And the weirdest thing about it from a cultural perspective? Because everyone else is using AI to churn out reams of documents that take up the time and attention of our shared stakeholders, it can start to feel like you, too, must use AI just to keep up. Carleton University's "Charity Insights Canada Project" shares snapshots of how other nonprofits around the country are using AI. Most respondents are interested in exploring or expanding the use of AI tools in grant writing and reporting, marketing and communications, fundraising campaigns, etc.
I can't help but wonder, "Will a prospective funder recognize that I put a lot of time, energy, and emotion into researching, writing, and proofreading this proposal? Will they be able to tell that it came from me instead of an AI tool? Will what I write somehow end up as an input to someone else's AI-generated grant proposal?"
We reflected on the current state of AI usage at a recent staff meeting. People shared their experiences and whether they've ever used AI at work. We reached a consensus that we have no interest in using it: when we do, it is largely by accident, and we're seeking out resources to help us discern what is AI-generated and what is not. Following our conversation, I drafted an AI Usage Policy to communicate recommendations, usage risks, and a framework to guide employee usage if and when it happens.
In addition to the AI Usage Policy (see below) that guides employee behavior, we've also added a line to our job application template discouraging the use of AI. We don't want you using ChatGPT to write a cover letter! We want to hear from you as a creative human with unique experiences.
Let us know if you have any thoughts! And feel free to use the AI Usage Policy below for your own workplace.
AI Usage Policy
There are specific risks associated with using large language models (LLMs) like ChatGPT and AI-generated content: risks to the workplace, to information quality, and to internet quality.
- Workplace risks include data governance (i.e. organizational data is transferred outside of organizational control when entered as an input to an LLM, and can then be stored by the LLM provider for other uses) and data security (i.e. sensitive/private information could be entered as an input and later leaked).
- Information quality risks include email phishing campaigns, influence campaigns (i.e. dissemination of misinformation/disinformation to influence beliefs and behaviors), and the inability to identify LLM-generated content (i.e. there are no reliable tools for detecting AI-generated content, which means that misinformation/disinformation can propagate).
- Internet quality risks stem from how LLMs operate: they inherit the inaccuracies and biases present in their training data. With ongoing and increased usage of LLMs, those inaccuracies and biases will propagate and lower the quality of information on the internet.
In addition to the risks above, we feel strongly that human creativity and critical thinking can’t be replaced – only mimicked – by generative AI.
Based on these risks and our values, the use of LLMs and generative AI tools is discouraged at the Compost Education Centre. However, if use is deemed necessary, the following points guide employee usage:
- Employees may use Copilot (microsoft.com) to generate content. No other platform is approved for use.
- Employees will not rely on AI summaries (e.g. Google's AI Overviews) to gather information from the internet; instead, they will click through to the search results themselves.
- Employees will not enter sensitive or confidential data into an AI tool.
- Employees must edit, proofread, and check AI-generated content for accuracy and biases before publication/dissemination.