2 August 2025

Introducing The Salvation Army's AI Policy


Emily Casson and Becca Brydges introduce the territory’s new AI policy.

Artificial intelligence (AI) is a tool that, when used responsibly, has the potential to help us work smarter. While innovation is a key part of the territory’s value of boldness, the six values do not exist in isolation: we should be bold, but with compassion, integrity, respect, passion and mutual accountability.

The territory’s AI policy is a perfect example of that balance: sitting alongside the AI Guardrails, it outlines how to use AI responsibly, so that we can be innovative in an ethical way. The policy includes being transparent, asking for help, not using AI for decision-making about people, keeping personal data safe and fact-checking all AI-generated content.

Head of Digital Emily Casson and Assistant Head of Digital (Projects and Infrastructure) Becca Brydges explain more:

What is the AI policy?

Becca The AI policy explains how we should use AI responsibly at work. It complements the AI Guardrails released by the Moral and Social Issues Council. The policy focuses on the operational side of things, providing practical guidance and ensuring compliance with legal and ethical standards so that what we do is in sync with our values.

Emily The Army’s values are at the heart of the policy and how we want people to interpret it. It’s not solely about how we can be more efficient: it’s about how we can be a values-led organisation, while embracing the benefits of new technology.

Who does the policy apply to?

Becca All Salvation Army personnel, including staff, corps leaders, volunteers and third parties. It can be accessed through OurHub. The AI Guardrails, however, are for everyone. 

Who wrote the policy?

Emily A lot of it was a collaboration between the IT team and Digital Section, but there was wide consultation, including with the AI Working Group and territorial leadership, and it was signed off by the policy management group and Cabinet.

What are examples of where using AI would be an advantage in our work?

Becca I think the first is productivity and efficiency, saving hours in people’s working weeks. There’s also potential for predictive modelling, for example within fundraising, data analysis to assist someone in decision-making, or drafting initial ideas for content generation.

Emily In fundraising, we already use AI because it’s embedded in the online donation platform Fundraise Up. If people have agreed to cookies in their web browser, it will capture data points such as the time of day and whether they’ve donated on the website before – anything that might influence their donation. If someone consistently gives £10, for example, it might prompt them to give £15. They have the free will to choose what they give, but it’s a more personalised experience and that has helped us raise more money.
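To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a suggested donation amount could be derived from past giving. It is not Fundraise Up’s actual logic: the data used, the averaging and the uplift factor are all assumptions made for the sake of the example.

```python
# Purely illustrative sketch of a personalised donation prompt.
# This is NOT Fundraise Up's actual logic; the data points, rules and
# 1.5x uplift below are assumptions for the sake of the example.

def suggest_amount(previous_gifts: list[float], default: float = 10.0) -> float:
    """Suggest a donation amount based on a donor's past gifts, if any."""
    if not previous_gifts:
        # No history (for example, no cookie consent): fall back to a standard ask.
        return default

    usual_gift = sum(previous_gifts) / len(previous_gifts)
    # Nudge slightly above the donor's usual gift; they remain free to
    # give whatever amount they choose.
    return round(usual_gift * 1.5, 2)


print(suggest_amount([10.0, 10.0, 10.0]))  # 15.0 – matches the £10 -> £15 example
print(suggest_amount([]))                  # 10.0 – default ask when there is no history
```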

What are some key things we should not do with AI?

Becca A key point is not to use AI to make decisions without any human oversight. The point of the policy and the values is to amplify human interactions, not replace them.

Another big no is uploading sensitive data into non-approved applications – if you suspect there has been a data breach, report it to data.protection@salvationarmy.org.uk.

Equally important is not generating content that contains bias, discrimination or inaccuracies. All our AI usage depends on human oversight – we should not be solely reliant on AI.

Emily On the creative side of things, we don’t want entirely AI-generated content. We suggest using AI as a starting point for drafts, but in our work as The Salvation Army we are all about valuing our humanity. For fundraising appeals, for example, we would always want to use a real person to illustrate a case study or story. If it were a sensitive subject, such as domestic violence, we would use a model, not an AI-generated image.

We also wouldn’t want, for example, an AI-generated picture of a Salvation Army officer – there is a reputational risk in that scenario, but it’s also not a very values-led thing to do.

How can people watch out for biases or misinformation creeping into content?

Becca Cross-checking and fact-checking. When you’re using AI to search for things, for example, it picks up lots of things on the internet, including deliberately incorrect April Fools’ content, which AI can mistake for facts. Use trusted sources and look out for stereotypes or exclusionary language in any content produced. Ask colleagues for a sense-check and generally be cautious with anything that sounds overly confident or lacks a citation from a reliable source. These are quite standard content creation practices, but they are more essential than ever.

Emily Bias can be harder to spot. If you asked an AI tool for an image of the head of The Salvation Army’s Digital Section, it would generate a white, middle-aged man, rather than me. It’s about not taking things at face value and using your judgement. That’s also why we say only use AI as a starting point for content creation, for example to help order your thoughts, as what it produces is not the finished product. It’s important to use AI critically and not overuse it. You can tell, for example, when someone uses AI to apply for a job – the CVs and covering letters are nearly identical, with no personality or humanity.

The policy includes labelling content generated using AI. Why is that important?

Becca It’s about transparency and trust. We want people to trust us and see that we’re transparent in everything we do.

Emily Even when we use AI in our donation tools, there is a webpage that is very clear and transparent about what AI we’re using, how we’re using it and how people can opt out.

How should people go about implementing AI in their work?

Emily We encourage people to go to the AI Working Group and say, ‘I’ve got this idea or need, what tool would you recommend?’ A lot of our platforms and software now have AI embedded, so the AI Working Group can find something that meets your needs. Our goal is to answer two questions: Why do you want to use AI? How can you do it in the safest, most ethical way?

Why is Microsoft Copilot our preferred AI tool?

Becca Microsoft Copilot is integrated into our existing Microsoft package, which comes with enterprise-level security, whereas a more everyday tool such as ChatGPT doesn’t – any information uploaded within Copilot is effectively ring-fenced within our organisation. From a security perspective, that was a key factor.

Emily We have a partnership with Microsoft, too, which provides us with support, expertise and funding to help with infrastructure and implementation.

How is the Army considering the environmental impact of using AI?

Becca We are working with Research and Development to commission a research piece into this, so that we can measure how and where AI is being used within the Army and gauge its environmental impact. That work will take time, but it’s a key concern and consideration.

What training is on the way for staff?

Emily There are training materials and courses in development between the Digital Section, IT, William Booth College and data protection teams. We know this is new to a lot of people, so we want to provide basic training as well as material to help upskill personnel in certain areas to meet their requirements. It’s coming down the line – watch this space!

  • For more information about the AI policy, using AI in your work, or if you’re unsure whether your use of AI is appropriate, email aitech@salvationarmy.org.uk.

Discover more

Captain Mark Scoulding (Watford) talks about the Artificial Intelligence Guardrails developed by the Moral and Social Issues Council.

Captain Mark Scoulding considers how we might use artificial intelligence in mission and ministry.

As artificial intelligence shapes Google results, Ivan Radford considers how and what we search for.

Simon Hope considers the potential impact of artificial intelligence.