Can AI enhance the humanness of care? The place of ethics and the unfolding of dignity.

This past week I spent a couple of days in Oxford with colleagues from around the UK exploring the responsible and ethical use of AI in social care. It was a tremendous opportunity to meet folks I had been chatting to virtually for over a year as we all struggled to draw up a set of statements which would describe the critical role and importance of an ethical framework for the use of AI.

I’ve described in an earlier blog the work which led to the publication on Thursday of a Guidance document on the responsible use of AI in social care. The Guidance states that:

‘The ‘responsible use of (generative) AI in social care’ means that the use of AI systems in the care or related to the care of people supports and does not undermine, harm or unfairly breach fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.’

I found the Guidance document to be immensely helpful. It is presented as a series of “I” and “we” statements across twelve different domains, including choice and control, accessibility, data privacy, transparency, accountability and so on. I hope that the widespread use of this Guidance will help to ensure a growing ethical use of AI in social care across the United Kingdom and beyond.

The group also published a Call to Action paper which frames the ongoing work which needs to be undertaken by all stakeholders. Organisations and individuals are invited to sign the statement, and I would encourage you to read it.

At the event on Thursday I was personally honoured to chair a roundtable discussion on some of the main ethical questions and issues. These included an exploration of privacy, data usage and security, the challenge of addressing bias, and the necessity of equity and inclusion. Indeed, as I said, one of the challenges for us all is how we enable AI to have access to data which is truly representative of all who use care and support services and all who work in them. If you are not able to share your story, your voice will not be heard.

But perhaps the issue that kept coming back to my mind throughout the debates and deliberations is how AI can be used to enhance human presence, and what can be done to ensure that the increased use of AI does not lead to a reduction in human care, especially in our contemporary constrained economic circumstances. There is a very real fear that people will lose their jobs, and that the continuity of human care will be diminished. There is an anxiety that the use of AI models and tools will become not only more convenient but irresistible – and what does that do for both the valuing of care and its very nature? Does it risk further embedding the myth that care is a series of technical transactions and inputs rather than, in essence, a dynamic relational exchange?

Indeed, I think there is a certain paradox at the heart of our digital age. As we welcome the rise of artificial intelligence into the intimate spaces of human care, we risk losing the very essence we seek to uphold: the authenticity of human connection. But I’m increasingly of a more positive view: that if we are both wise and bold, AI might serve not to diminish but to deepen the fabric of our shared humanity.

In recent years, the narrative around AI in social care has largely oscillated between two poles: the promise of efficiency and the fear of dehumanisation. Both are valid. But there is a third path – one that threads the ethical with the empathetic, the technical with the relational. It is here, I believe, that the true potential of AI resides.

To understand what this might look like, we must begin not with the technology but with the individual. A person receiving care is not a passive recipient but a bearer of stories, history, culture, and identity. AI, when framed within a human rights-based approach, can help us see the whole person – not just the task, the illness, or the need.

The work of the Oxford Institute for Ethics in AI is a beacon in this regard. Their commitment to embedding ethical principles into the architecture of AI speaks not only to technical excellence but to moral vision. Through initiatives like the Responsible Use of Generative AI in Social Care, and the work of its various groups – including care workers and those who use care and support services – together with the Tech Suppliers’ Pledge and the Ethical Principles Working Group, we are reminded that the design and deployment of AI must be participatory, transparent, and accountable. It must be rooted, as I’ve suggested elsewhere, in the PANEL principles: Participation, Accountability, Non-discrimination, Empowerment, and Legality.

What does this mean in practice? It means involving individuals who draw on care and support, their families, and frontline workers in the very conversations that shape these technologies. It means ensuring that AI is used not to replace human contact, but to create more time for it – to reduce administrative burdens, to surface patterns of wellbeing, to allow caregivers to be more present, more responsive, more human.

It also means vigilance. AI is not neutral. It reflects the values and assumptions of its creators. Without careful scrutiny, it can replicate bias, entrench inequality, and obscure the voices of those most marginalised. But with principled and ethical stewardship, AI can be a tool of liberation. It can offer us new ways to understand loneliness, to respond to distress, to design systems that are as compassionate as they are intelligent.

To embrace this potential is to reject the false dichotomy between care and code. It is to believe that technology, at its best, is a mirror held up to our deepest values. The challenge – and the invitation – is to ensure those values remain visible.

I would go even further and argue that the real potential of AI in social care is that it can and will enable an even greater and better humanisation of care. It can and will make us better at the art of care and support; it has the potential to let us discover new and better ways of being at our most human in the exchange of care and support.

Let us not be seduced by the shimmer of innovation, nor paralysed by fear. Let us be discerning, courageous, and most of all, relational. For in the end, AI should not distance us from each other. It should draw us nearer.

And in that spirit, I leave you with lines from the Austrian poet Rainer Maria Rilke:

I Am Much Too Alone in This World, Yet Not Alone

I am much too alone in this world, yet not alone
enough
to truly consecrate the hour.
I am much too small in this world, yet not small
enough
to be to you just object and thing,
dark and smart.
I want my free will and want it accompanying
the path which leads to action;
and want during times that beg questions,
where something is up,
to be among those in the know,
or else be alone.

I want to mirror your image to its fullest perfection,
never be blind or too old
to uphold your weighty wavering reflection.
I want to unfold.
Nowhere I wish to stay crooked, bent;
for there I would be dishonest, untrue.
I want my conscience to be
true before you;
want to describe myself like a picture I observed
for a long time, one close up,
like a new word I learned and embraced,
like the everyday jug,
like my mother’s face,
like a ship that carried me along
through the deadliest storm.

Quoted at https://poets.org/poem/i-am-much-too-alone-world-yet-not-alone

May our approach to AI, like good care, remain open – never blind – and always rooted in the unfolding of one another’s dignity.

Donald Macaskill

Photo by Neeqolah Creative Works on Unsplash