
Blog: AI in social care

15 July 2025

In this blog, Stephanie Griffith, our Innovation Manager, shares insights from her recent experience at the AI in Social Care Summit at Oxford University. 

I recently had the pleasure of being invited to the AI in Social Care Summit at Oxford University. It celebrated the achievements of the Oxford Project, a year-long collaboration between care providers, technology developers, researchers, care workers, and people who draw on care and support. The day was filled with bold conversations about AI, ethics, and the future of social care.

What struck me first about the work was how firmly it was grounded in the values we all try to uphold in social care. The work is hosted within the Institute for Ethics in AI and takes human rights as its guiding principle. Coproduction was strong, which meant open, direct and stimulating discussions, embraced with the confidence of a truly inclusive conversation.

The central question under discussion was about how we could harness AI's potential while protecting the fundamental values of good social care. 

Some very clear challenges surfaced:

Bias

AI is biased because it relies on the data it has available. One person gave the example of a princess: if you ask an AI to generate a picture of a princess, what you get may not reflect the world we actually live in outside of the internet.

How, then, could this be used to help develop personalised responses to people? 

One answer is to improve the data that gets fed into generative AI. Generative AI uses the data it has to predict what comes next; it has been compared to a high-functioning word processor. By feeding it more representative data, we can help it generate more representative insights and mitigate bias. This is where the idea of ‘training’ AI comes in.

Data protection

The more we can use data that reflects the diversity of people’s lives to train our AI systems, the more we can diminish bias. But we need to make sure people give their full and ongoing consent for their data to be shared: once we’ve put data into an AI system, it will have a lasting impact. It isn’t as easy as deleting a file or a web page. This is equally true for AI we might use to carry out admin tasks or for technology that helps keep people safe.

Trust is key to getting the data right, and that trust can only be built through genuine coproduction and human-centred relationships nurtured by people.

People are key

We heard from social care workers who had concerns about their work and skills being eroded. Social care workers, people accessing care and support, and experts alike acknowledged that social care needs more people, and that human interactions and decision-making will always be essential. This should be our priority. As one social care worker noted: “[AI] should support us to be better human beings at our jobs, making the people we provide care for happier, better, and healthier.”

Regulation

The point was raised more than once: in the highly regulated world of social care, the use of AI remains unregulated. While regulation can sometimes be seen as a barrier to innovation, it’s needed to define safe spaces so that people can confidently make the best use of AI.

What’s next

The event marked the launch of co-produced guidance and a call to action, both developed over the past year. They focus on the role of AI in supporting the "fundamental values of care, including human rights, independence, choice and control, dignity, equality, and wellbeing."

The message was clear: AI should serve and support our vision of good social care, not the other way around.

Reflections on the day

Some of the concerns people raised are similar to those we’ve been hearing during our work to assess the digital potential of social care in Wales. They may be more general concerns about the use of technology in social care, amplified by the recent rise in the use of generative AI. 

The summit was a day of inspiration and insight, filled with passionate discussions on how AI can enhance social care. The commitment to human connection, co-production, and collaboration was evident throughout. As we move forward, it’s crucial that we proceed thoughtfully, focusing on ethical and responsible development as we explore the implementation of AI. This echoes the messages we heard at the Digging Deeper conference hosted by Welsh Government in February.

AI and social care in Wales

We’ve published a new AI guide for the social care sector in Wales. The guide has been funded through the AI Commission for Health and Social Care in Wales.

Cardiff University is also exploring the sustainable impact of AI in the field through its Centre for Social Care and Artificial Intelligence Learning (SCALE) programme. You can find out more about this work here: Centre for Social Care and Artificial Intelligence Learning (SCALE)

In the fast-paced world of generative AI, it’s crucial that we collaborate with our partners so that we can connect the information and support that’s available for people working in social care. Through this work, we’ll continue to seek out opportunities to bring learning and evidence to Wales.

NB. I used CoPilot to create the first draft of this blog!