AI and social care: towards a human rights approach

On the 1st of February 2024, representatives of thirty organisations and individuals working in Adult Social Care met at the University of Oxford to discuss the benefits and risks of using ‘generative AI’ in social care. I was pleased to be part of the event organised by the University of Oxford Institute for Ethics in AI, the Digital Care Hub and Casson Consulting. 

I have written a bit over the years about AI, and most recently have reflected upon the potential of chatbots as well as the limitations of their use in social care. What is inescapable is that generative AI is already being used to a not insignificant extent within the care sector, especially in assessment and care planning.

There are numerous responses to the development and introduction of innovative technologies. Put simplistically, one response is to resist and prevent. The concerns expressed about the risks of AI removing what is uniquely human, of it supplanting identity, echo those aired in the early days of human transplant surgery; yet those procedures have become routine today. One position, therefore, is to resist, challenge and remove new technologies. However, resistance and bans rarely work. Another response is to welcome the new with unbridled evangelical enthusiasm and an absence of critique.

Both positions, I would suggest, are erroneous. My starting premise, therefore, is that faced with the inevitability of these developments and of further ‘progress’, we must develop frameworks and approaches that ensure technology serves the public good in an equitable, inclusive, and rights-based manner. In the most human of human industries, it is essential for social care to ask questions, raise concerns, balance risks, and adapt or adjust to accommodate changing cultural and societal technological mores. Concerns around privacy, the use of data, the centrality of individual choice and the advancing of the individual in person-led care and support can only be addressed through dialogue and mutual design and development.

For me, a critical starting point has to be a human rights-based approach to AI, and yet, certainly for its application in social care, there is a dearth of research and writing on what such an approach might look like. I offer one or two thoughts in this blog.

Many readers will be familiar with the PANEL principles, which are the bread and butter of many human rights dialogues and models. The acronym stands for Participation, Accountability, Non-discrimination, Empowerment and Legality:

  • Participation – People should be involved in decisions that affect their rights.
  • Accountability – There should be monitoring of how people’s rights are being affected, as well as remedies when things go wrong.
  • Non-Discrimination – Nobody should be treated unfairly because of their age, gender, ethnicity, disability, religion or belief, sexual orientation or gender identity. People who face the biggest barriers to realising their rights should be prioritised when it comes to taking action.
  • Empowerment – Everyone should understand their rights and be fully supported to take part in developing policy and practices which affect their lives.
  • Legality – Approaches should be grounded in the legal rights that are set out in domestic and/or international law.

So what might PANEL mean for AI and social care?

Participation – from the moment of design, through application and use, to evaluation and assessment, AI in a social care context must evidence the intrinsic role of the individual as a person, rather than just the individual as part of a collective. This is not without challenge, because it means designing and developing with the end user not as an optional consideration but in the driving seat of investment and priority. It means, for instance, that the development of time-saving care planning approaches using machine learning must take account of the individuality of the person rather than make generalist assumptions, however well founded and broadly based on harvested data those assumptions may be. As I have reflected before, the interaction and encounter between two people at the end of which a care plan or an assessment has been completed can and will be aided and assisted by AI (just as it has been by pens and iPads), but the moment the technology, the device, the model gets in the way of the particularity and uniqueness of that encounter is the moment false assumptions, stereotypes and prejudices are risked. There must be space for the unpredictable, the surprising, the uniqueness of the person to contradict the norm of others. That is just one example, but this process of participation has to be ongoing and continuous in all parts of AI development and application. The role of individuals as continuing co-designers and evaluators should be primary.

But participation in AI, whether in design or in application and review, necessitates an increased awareness and knowledge among those who use social care, an enhancement of the digital and technological skills of the workforce at all levels, and a robust engagement of all stakeholders. Participation cannot happen in a vacuum of ignorance, and it is rarely effective without prioritised resource.

Accountability – Some of the fear and reserve around AI and its application to social care is rooted as much in a lack of awareness of accountability as in any risk aversion. We all need to know, especially given the rapid speed at which AI systems and tools are developing, who is accountable for the use and application of AI, not least in the lives of those who may have clinical or health vulnerabilities and around whom there may be issues of capacity and consent. There have been understandable fears, centred in particular upon the human rights to privacy and autonomy, over the use of data. Data without clear codes of behaviour and conduct is dangerous; it is a digital story which can become a nightmare. If public distrust or concern is to be replaced by a positive adoption of new technologies as beneficial, then there must be clear oversight of the application of AI, and not least of the collection, storage, access and ownership of all personal data. Personally, I do not believe the use of AI to better enable consistency within an individual’s care and support pathway, between diverse organisations and agencies, is antithetical to individual citizen control and access. But citizens’ ownership of, and rights over, their personal data must be much clearer than they currently are. Human rights approaches can massively assist that assurance.

Non-Discrimination – One of the earliest critiques of AI, and especially of some of the very early versions of generative AI, was the apparently inherent systemic bias within the data being utilised. Human rights law and practice is very clear and consistent on the issues of individual identity and non-discriminatory practice. It would be antithetical to progressive social care for there to be built-in bias within any AI tools which served to limit the rights of individuals based on protected characteristics or other aspects of individual identity. Once again, the way to prevent this risk is robust evaluation of practice, open access to the data utilised, and human rights frameworks which interrogate practice in a non-discriminatory manner.

Empowerment – For a long time, those who have used social care supports and services, and those who work within provider organisations, have recognised that at the heart of all good and effective social care is the empowerment of the individual to take control of their own life, to be the director of their own actions, and the controller of their own independence. At its best, social care enables an individual to discover their unique identity, to flourish and to thrive in their humanity. It is not a one-size-fits-all approach but one that validates the person as who they are and enables them to achieve their potential.

There is an untapped potential in the use of AI within a social care and support context to further underpin the autonomy, control, choice, and empowerment of the person receiving support, but only if we enable such models and approaches to grow and develop. Critically, this will involve loosening the regulatory noose that sometimes exists around care services and which serves to stifle individual action and risk-taking, often in the name of safeguarding and protection, but frequently based on risk aversion and system protectionism. Again, there is a real potential for a human rights-based approach to AI which enables empowerment to occur rather than consolidating control, power, and resource in the hands of a minority (whoever they might be, but certainly including AI developers and system owners). The risk is that the opposite occurs, and that the use of AI results in a further limiting of human autonomy, and in models and tools which observe, monitor, control and assess without the direction or voice of the person being supported.

Legality – All human rights models, frameworks and approaches have a distinct and critical vein of legality running through them. The lack of legal protections and frameworks around emerging AI is a matter of concern. It has always been the case that it takes a few years for legislators to catch up with emerging science and technology, and often by the time that occurs some not insignificant harms and mistakes have been made. In social care, the use of AI must be undertaken utilising existing human rights protections. That is why ethical approaches to AI, whilst hugely valuable and underpinning a human rights approach, are on their own less than effective without robust legal and juridical protections. There is a real potential for the social care community not only to self-police the use of AI but to model its use for others – which is why the work of the Oxford group is so important. We should not be afraid of seeking to develop new legislation and of using existing laws in a robust manner. This may also mean being courageous enough, as an international social care and human rights community, to redraw concepts such as privacy for the new technological age of AI.

I hope some of these thoughts spark and continue a conversation. AI is here, and it is changing every second of every day. We dare not hide our heads in the sand; as a social care community of citizens who use care and support, frontline workers, and social care thinkers, we need to mould and influence that AI tomorrow for the betterment of all.

Donald Macaskill

Photo by Michael Dziedzic on Unsplash