The following is based on a talk which opened Scottish Care's inaugural Care Tech Assembly, held in Glasgow on 19th June 2025.
I want to share a few personal thoughts about technology within social care, and to do so under a human rights umbrella. And I want to do so from the perspective of someone who has been fascinated by tech and its potential to enhance human connection and humanity for an awfully long time.
And I suppose I am also making a bit of an assumption as I start these reflections that I am among people who believe that technology, when rightly held, can serve humanity rather than diminish it.
The Past: listening to the echoes
Let me take you back to a room in a care home in Glasgow’s west end, many years ago. I remember as a student sitting with Mary, a former schoolteacher, now living with dementia. She was holding a photo of her late husband. It was tattered at the corners from love and remembering.
There was no iPad. No voice assistant. No tech-enabled reminiscence therapy. Just two humans, sharing a moment.
Technology hadn’t yet found its way into her world – but care had. Deep, relational, person-led care.
That care home was just along the road from the school I went to and the school she had once taught in. We were on the edge of the University campus – the same University where years earlier I had walked with my classmates one afternoon to visit the Computing Dept. I’ve spoken before about that experience – about how our class was invited along one afternoon to see what was, at that time, reputed to be one of the world’s fastest computers.
It was a whirring series of metal cupboards which literally filled a room, with a standard screen and keyboard to operate them. This was well before the era of the ‘mouse’, so all instructions were laboriously typed in as complicated code.
What struck me then was the sense of sheer wonder with which the technicians and scientists viewed their work and its creation. It filled me with both a lifelong fascination for, and indeed a fear of, the power and potential of technology to change the world I inhabit for good and ill.
One of the men working on the computer at the time mentioned that by the time we reached his age – an elderly 40! – the computer we saw that day would be the size of a cigarette packet. We laughed then. Nearly 50 years later, after my flirtations with a BBC computer and an Amstrad, an enduring love affair with Apples from the earliest models, and loads of floppy discs and CD-ROMs, I am no longer laughing. The truth of a processor thousands of times faster and more powerful than the machine that filled that Glasgow room is in the laptop I carry every day and in the smartphone whose presence is, reluctantly, my essential life tool.
But sitting with Mary, just along from that room and maybe not much more than a decade later, we were to a large extent in a technology-free zone.
And I suppose if you had asked me then, I would not have even contemplated that tech could contribute so much to care, or could potentially enhance it. It wouldn’t have been in my worldview.
Then over time I, and others, would have expressed fears that technology let loose would replace that care – that somehow, warmth and connection would be coded out of our lives.
I reflected that in my own writing in 2017-18, when I wrote Tech Rights, in which I explored the interplay of human rights and the potential of machine learning, AI and the internet of things, and suggested that the future held as much danger as promise.
So, what of the present?
The Present: dancing with change
Today, we stand in a time of rapid digital transformation. In Scotland, digital social care records are becoming the norm, not the exception. We’ve seen technology bridge gaps, not widen them – when done well.
Care homes have used iPads to reunite families over FaceTime. Sensors now help monitor health in the background, offering dignity and safety without intrusion. People living in their own homes are using smart tech to remain independent for longer.
But the truth is, this progress wasn’t born from a boardroom or a policy paper. It was born from pragmatic intent and a ground-up desire to do things differently – and yes, let’s be honest, also to do things with an economy of time and cost. The current, and I suspect the future, design of tech will be responsive and reactive to circumstance and to need more than to planned intent and policy.
And when the history of tech in care is written the influence of the pandemic will be clear and transparent.
Our present was born from a pandemic – from necessity, from desperation, and yes sometimes even from love.
We had to learn quickly that technology must never be the master of care. It must be the servant of humanity – and especially of the most vulnerable.
What does the future hold?
It holds possibility. But only if we choose it wisely – and in some places that future is already happening. We are already witnessing circumstances where an older person’s voice activates not just a light, but a lifeline of connection and assurance. We can, and are beginning to, create a future where digital tools aren’t cold, but compassionate. Where artificial intelligence doesn’t make decisions for people, but with people. A future where every innovation asks not “What can this do?” but “Whom does this serve?”
But it is how we build that future, design that tomorrow, which is all important and which an event like today contributes so much to.
Many of you will know of the work of the Oxford Institute for Ethics in AI and the Digital Care Lab, and how, after over a year of creative collaboration and co-production, a framework for the responsible, ethical and rights-based use of AI in social care has been developed. That work is progressing, growing and becoming ever more influential, and I would commend it to you.
But the art – the essential requirement – will be this: how do we turn such frameworks and models into the automatic and instinctive actions of a system, and of stakeholders who, in straitened economic times and faced with the demands of immediacy, might be tempted to take shortcuts or go for the cheapest or easiest option, both of which usually risk the rights, autonomy, control and agency of the citizen?
The Oxford work and others have shown that the critical way to embed an ethical and human rights-based approach is through the democratising of design and the granting of control and agency to citizens. Too often, I fear, we pay lip service to this fundamental principle – the sense of individual citizen control over data. Even a discussion I had this week on the principle of revocability showed just how hard it is for systems which grow too big, too distant from the user – from the citizen whose story is the data – to be open to approaches where citizens can re-write, change, edit and remove their data.
The excuse often given against the radical individualising of control around AI – in tech in general as well as in care and support – is that it is too difficult to get to the individual level. That’s an excuse I heard so often during the blanket decision-making of the pandemic, and it is today tosh and nonsense.
A colleague recently told me about the work of Pol.is, and the more I have looked at it, the greater its potential for democratising decision-making and consent seems to me. Some of you might know of the Pol.is work which has been going on and developing in Taiwan.
Pol.is is a digital tool used in Taiwan to facilitate large-scale public deliberation. It was notably employed by the Taiwanese government as part of the vTaiwan and Join platforms to gather public input on complex policy issues.
‘How it works:
- Citizens respond to open-ended questions and vote on other people’s comments.
- Pol.is uses machine learning and data visualisation to group participants based on shared opinions, highlighting areas of consensus and disagreement.
- Unlike traditional polling, it avoids polarisation by encouraging constructive dialogue and surfacing common ground.’
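For the technically curious, the clustering idea in that description can be sketched in a few lines. This is a deliberately tiny toy – hypothetical names and votes, and a minimal k-means grouping in place of the more sophisticated dimensionality-reduction and clustering pipeline Pol.is actually uses:

```python
from statistics import mean

# Rows = participants, columns = comments; +1 agree, -1 disagree, 0 pass.
# Entirely made-up data for illustration.
votes = {
    "ann":  [ 1,  1, -1,  1],
    "bob":  [ 1,  1, -1, -1],
    "cara": [-1,  1,  1,  1],
    "dev":  [-1,  1,  1, -1],
}

def distance(a, b):
    """Squared Euclidean distance between two vote vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster(votes, seeds, rounds=10):
    """Very small k-means: repeatedly assign each participant to the
    nearest centroid, then move each centroid to its group's mean."""
    centroids = [list(votes[s]) for s in seeds]
    groups = [[] for _ in centroids]
    for _ in range(rounds):
        groups = [[] for _ in centroids]
        for name, v in votes.items():
            best = min(range(len(centroids)),
                       key=lambda i: distance(v, centroids[i]))
            groups[best].append(name)
        for i, members in enumerate(groups):
            if members:
                centroids[i] = [mean(votes[m][j] for m in members)
                                for j in range(len(centroids[i]))]
    return groups

# Two opinion groups emerge from the voting pattern.
groups = cluster(votes, seeds=["ann", "cara"])
print("opinion groups:", groups)

# Consensus: comments every participant agrees on, across all groups.
n_comments = len(next(iter(votes.values())))
consensus = [j for j in range(n_comments)
             if all(votes[p][j] == 1 for p in votes)]
print("consensus comments:", consensus)
```

The consensus step is the point: the tool surfaces what the groups agree on, not just where they divide.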
It is a tool which has already helped to bring democracy and decision-making to very small and local community levels – its adaptation and use in settings such as residential care and community groups has, I think, huge potential, not least in its approach to inclusivity and to achieving consensus and agreement.
Just imagine how such a tool, adapted to enable individual participation and decision-making, could be used in a social care context around data control, management and use.
When I wrote my extended thought piece Tech Rights, I was very clear that the future developments of AI, machine learning, the Internet of Things and robotics could and should be enabled only through a robust and ethical human rights framework – everything I have seen in the intervening 8 years convinces me even more of the validity of that assessment.
Human rights have to be the baseline, not just bolted on; we need person-led tech, not just person-centred design.
And we continually need to re-design how we implement rights in technical practice. I think it is the task of all of us, no matter where we are in the pathway of design and development, or in use and implementation in care environments, to consider our human rights and how they bear on the use of tech and digital.
Here is my latest musing: a Human Rights Framework based on the acronym H.U.M.A.N.I.T.Y.
H.U.M.A.N.I.T.Y. Framework
H – Human Dignity
- AI must enhance, not replace, relational care.
- Systems should respect inherent human worth, prioritising the individual over efficiency.
- Insist on the truth that care is a human act, not a mechanical function.
U – Understanding
- AI must be developed with contextual awareness of care realities.
- Systems should reflect the lived experience of people receiving and giving care.
- Understand the cultural, emotional, and social nuances which are vital to ethical care.
M – Moral Responsibility
- Those designing and deploying AI must act with ethical integrity.
- Care providers have a duty to ensure AI is used in ways that align with care ethics and human rights.
- There needs to be a stress on values-led leadership and responsibility in the care sector.
A – Autonomy
- Individuals have the right to make informed choices about how AI affects their care.
- AI systems should promote control and consent, not paternalism.
- Autonomy is essential for citizenship and empowerment in care.
N – Non-Discrimination
- AI must be designed and tested to eliminate bias and promote equity.
- It should support inclusivity, particularly for those often excluded: older adults, disabled people, ethnic minorities.
- There should be a built-in commitment to social justice and fairness.
I – Integrity
- Use of AI must be transparent, honest, and accountable.
- Integrity means being able to explain and justify AI decisions, especially when they affect people’s lives.
- Create a sense of moral coherence in digital and care governance.
T – Trust
- Build trust through co-design, openness, and clear communication.
- Trust is sustained through relationships, not just systems.
- Trust is the glue of good care and support – and it must be protected in tech use.
Y – You-Centred
- AI in care and support must be person-led, not system-led.
- It must serve the individual’s rights, values, and story – not just operational efficiency.
- Care and support are affirmed as a deeply personal, relational, and human experience.
Last year, I visited another care home. There, a woman named Ishbel had started using a voice assistant. She said, “I call her Alexa, but I treat her like she’s my lassie. I tell her goodnight, and sometimes, she tells me the weather. But mostly, she makes me feel I’m still part of the world.”
That’s the heart of it.
We’re not building devices. We’re building belonging.
So, as we look ahead, let us walk forward not with fear of what we might lose, but with hope for what we can gain.
Let us shape a future where digital doesn’t dim the light of care, but reflects it—brilliantly, boldly, and beautifully.
Let us – together – hold hands with tomorrow.
Donald Macaskill