Is the human race or our humanity at risk? AI and social care.

Dire warnings about the end of the world are normally the stuff of placard-bearing street preachers prophesying an Old Testament-style apocalypse, but in the last week the mainstream media has been carrying concerns about imminent extinction. The source of the threat is not intergalactic invasion or even ecological catastrophe but a very human development: Artificial Intelligence.

When I first wrote at length about AI and its role in the world of social care, there were even then some highly respected scientists who warned of its threats while also advocating its potential. Stephen Hawking told the BBC in 2014 that “The development of full artificial intelligence could spell the end of the human race.”

This past week a popular British newspaper was amongst many which reported:

“A dramatic statement signed by international experts says AI should be prioritised alongside other extinction risks such as nuclear war and pandemics.

Signatories include dozens of academics, senior bosses at companies including Google DeepMind, the co-founder of Skype, and Sam Altman, chief executive of ChatGPT-maker OpenAI.

Another signatory is Geoffrey Hinton, sometimes nicknamed the ‘Godfather of AI’, who recently resigned from his job at Google, saying that ‘bad actors’ will use new AI technologies to harm others and that the tools he helped to create could spell the end of humanity.”

As well as the prospect of human extinction, there are also some more immediate perceived challenges brought about by the increased use of and dependency upon AI, chief amongst them the impact on the workforce in certain industries. This was evidenced in the much-publicised news a few weeks ago that British Telecom would be laying off thousands of workers, and that a fifth of those job cuts would come in customer services as staff are replaced by technologies. BT Chief Executive Philip Jansen stated:

“Whenever you get new technologies you can get big changes,”

He added that “generative AI” tools such as ChatGPT – which can write essays, scripts and poems, and solve computer coding in a human-like way – “gives us confidence we can go even further”.

Mr Jansen said AI would make services faster, better and more seamless, adding that the changes would not mean customers will “feel like they are dealing with robots.”

Balancing the voices of pessimism are those like Stephen Marche who, writing in the Guardian, stated:

“In the field of artificial intelligence, doomerism is as natural as an echo. Every development in the field, or to be more precise every development that the public notices, immediately generates an apocalyptic reaction. The fear is natural enough; it comes partly from the lizard-brain part of us that resists whatever is new and strange, and partly from the movies, which have instructed us, for a century, that artificial intelligence will take the form of an angry god that wants to destroy all humanity.”

In Scotland this past week the Innovation Minister Richard Lochhead, addressing the Parliament, stated:

“These tools, known as “Generative AI”, will have an impact on jobs.

… for example, OpenAI claims that GPT-4 can achieve the same as a top 10% law student in bar exams.

… But it is important to not lose perspective on AI. Most experts do not believe AI will be able to supersede human intelligence without several new breakthroughs, and no one knows if or when those will happen. At the moment, talk of an impending “singularity” which means machines thinking for themselves without needing humans still involves a large dose of fiction. Essentially, for now at least, AI is just a tool.”

So, has any of this got anything to do with the world of social care and support? Is there threat or potential from AI, or is the reality something much more nuanced and complex? In the midst of massive global challenges around workforce and sustainable, affordable social care systems; in a world where more of us are living longer and seeking to achieve the benefits of positive ageing, is AI a trap or a panacea?

Certainly, there are some politicians whose imaginations seem to have been captured by the prospect of fiscal savings and workforce re-design on the back of AI and the Internet of Things. Most prominent amongst these new converts and evangelical prophets of a new tomorrow in social care and health has been Steve Barclay, the current UK Health Secretary. Barclay told the Daily Telegraph that, inspired by a recent trip to Japan, he believed the dawn of new social care possibility was coming soon.

“The health minister suggested robots and AI can help in better supporting patients and reducing demand on health and social care staff.

He said there was a need to consider and adopt other nations’ “innovative” approaches to health as the UK government attempts to cut NHS waiting times after Covid-19 and improve care for the elderly.

Mr Barclay said the Japanese were “world leaders in their use of tech” and that they have invested in a wide range of technologies, including robotics, “as a key way of getting care to patients,” adding “that is something we can learn from”.

Silver Wing uses several types of robots in its care homes. At its Shin-tomi nursing home in Tokyo, humanoid talking robots interact with patients, including those with dementia, and lead them through various recreational activities.

Other robots monitor patients as they sleep, alerting caregivers if the individual is agitated or attempting to rise out of bed. One robot also assists carers as they lift patients out of their beds and into wheelchairs, a physically demanding task.”

The article also details the extensive growth in the use of care bots across Japan:

“In the year to March 2017, £39.5 million was spent by the Japanese government to introduce robots into 5,000 facilities across the country. By 2018, an excess of £236 million had been spent on funding the research and development of such devices…

Japan has been investing in the development of elder care robots to help fill a projected shortfall of 380,000 specialised workers by 2025.”

All very appealing, but I think we need to go beneath the surface of the ‘care-bot’ and AI hyperbole, not least because we are operating with very different cultural and societal attitudes to ageing and the place of the older person in community and family.

There is an inescapable reality that in Scotland we are ageing as a whole population and that, with little inward migration (even from other parts of the United Kingdom), this trend is likely to continue. We will need to alter the way in which we deliver social care, and our dependence upon a human workforce is part of that reappraisal. I have long advocated that there is real potential in the use of in-home technology to enhance preventative care and support, promote self-management and increase independence. It is self-evident that technologies, including those utilising AI, will be central to this fostering and enabling of greater in-home independence. It is also a truism that we will increasingly see the use of robotics (care-bots) in residential and nursing home settings, although earlier hype needs to be balanced both by resource and investment reality and by demand. For instance, the much-vaunted ‘Pepper’ robot is no longer being manufactured.

The Scottish Care-driven ‘Care Technologists’ project is at the heart of embedding technological solutions within a human rights-based and person-led model of supported living and care home use.

As AI develops at an even faster pace, there will be some tasks in a care home or a supported individual’s home which can be both accelerated and automated. We are seeing some organisations investing not insignificant sums in introducing innovations which reduce time, remove duplication and introduce consistency. But in the most human of industries and sectors, such as social care, questions need to be asked about the boundaries and constraints that are required to ensure a human rights-based and citizen-controlled use of care technologies and AI.

So there are companies already utilising AI to write care plans for an individual by drawing on existing models and predictive responses and assimilating personal data. But is a care plan not something more than a tick-box, pro-forma exercise? Is it not also a plan for living and support which arises from the formation of a relationship that enables instinctive, intuitive, skill-based dialogue to occur? I have lost count of the number of assessors and clinical and care practitioners who have over the years told me that it is not so much what someone says and discloses in an assessment interview that matters, but their mannerism and mood, their engagement or otherwise in an encounter, that influences the exchange. The oft-quoted observation that we learn as much from what is not said, or from how someone expresses themselves, is perhaps not so easy for a machine to calculate algorithmically.

It is also true that many care settings are increasingly using care-bots to provide company and reduce isolation, and to act as tools of reminiscence for people who would otherwise be alone or distressed. But what are the limits of such usage? Growing old is not just about remembering the past but about re-shaping and re-forming a future, and is that not surely something we do best in human relational interaction and exchange?

And thirdly, a point I have made in numerous contexts: at those ultimate moments of living and loving, is a technological presence, a care-bot, what we want? Can a machine soothe distress or grant assurance, reduce fear, or bestow solace? And even if the answer to these questions is yes, the question remains whether, in a human society shaped by dignity, it should. Just because something is possible does not make it desirable.

That is why framing the use of AI in social care and health settings and relationships becomes a primary question of the moment. It is why we need to pay attention to work which seeks to frame AI within a human rights context, because there is more than enough evidence of bias and discrimination in its potential use, not least for the older citizen. A world-first study on ageism in AI, published by Monash University in Australia a couple of months ago, makes for worrying reading. As the lead author of the study, Dr Barbara Barbosa Neves, states:

“AI can perpetuate ageism and exacerbate existing social inequalities,”

“When implementing AI technologies in aged care, we must consider them as part of a suite of care services and not as isolated solutions.” And:

“The use of AI in aged care must be done with consideration of the potential impact of these technologies on well-being, autonomy, and dignity of older residents.”

A human rights-based approach to AI is critical to its success in social care, and it is wholly erroneous to equate a human rights-based approach with an ethical approach – the two are palpably not the same. I would commend the work of Scottish Care, VOX, the Alliance and other partners in this field, together with the publications ‘A Digital Cage is still a cage’ and the succinctly practical ‘If I Knew Then What I Know Now’. But we need more of these approaches and boundary-setting maps of our AI reality.

In all that I have read recently about technology and AI and the changes it might bring about for the workforce and those who receive care, one recent article has for me got to the heart of the potential of AI to change the dynamics of human relatedness, and in particular caregiving. Emily Kenway, writing in the Guardian a couple of Sundays ago in a movingly insightful piece, writes:

“… are we missing something about the potential impact of these technologies on caregiving?…

Patience, confidence, purpose – it seems that caregiving generates faculties many of us consider desirable. Perhaps caregivers know something under-recognised in discussions of care and tech: that care, like love, is multidimensional – the good and the difficult coexist.

Prof Shannon Vallor is concerned that the brave new world of care tech has overlooked this dimension of caregiving in its laser-like focus on alleviating hardships. Her work as a philosopher of technology, currently at the Edinburgh Futures Institute, is drawing our attention to the ways in which jettisoning care to the machines might mean we lose important capabilities.

There is a paradox at the heart of care tech. If Vallor is right, then caregiving is a crucial route through which we can help realise our humanity. The “benefits of being a caregiver scale”, and the growing body of evidence underpinning its development, suggest she might be. In this case, the technologies being developed on behalf of caregivers to free them from their “burden” may have an unexpected cost: the loss of important human capabilities. But experts are clear that technology can be vital for reducing caregivers’ load, too. Paradoxically, then, while tech may prevent us reaping the rewards of caregiving, it may also enable them.”

I have much less concern about global annihilation being the gift of AI, but I am more concerned about the loss of the dynamic of care and relatedness which might, by default and lack of planning, result from it. The human race may not be at risk from AI, but our humanness just might.

Donald Macaskill

Last Updated on 3rd June 2023 by donald.macaskill