As the calendar turns and we stand at the threshold of a new year, I find myself drawn to reflection, not least on the significant changes and developments in social care over the last year and on what this new one might bring. For me, the most significant of these has undoubtedly been in the arena of AI in social care. I am reflective of the journey so far, the companions who have shaped my thinking, and the ethical crossroads that lie ahead for social care in Scotland.
The rapid advance of artificial intelligence in our sector is not just a technical story; it is, at its heart, a human one. And as we look forward, I believe it is essential to ground our ambitions in the enduring values of dignity, autonomy, and justice. I offer this extended thought piece as a contribution to that ambition.
My first instinct is gratitude. The conversations and collaborations of the past year, especially with the Institute for Ethics in AI at Oxford, the Digital Care Hub, and Casson Consulting, have given our sector a shared language for what ‘responsible’ AI should mean in social care. Their Summit and White Paper articulated what dignity, autonomy and equality should look like in the age of automation. I am equally grateful for the practical clarity offered by the AI & Digital Regulations Service (AIDRS), which brings together NICE, MHRA, CQC and HRA in England to help health and social care organisations navigate regulation. And I acknowledge the UN human‑rights experts who have insisted, I think quite rightly, that the procurement and deployment of AI must be grounded in robust human‑rights due diligence.
All this work has served to remind me that human rights and ethics are not an afterthought, but the very foundation of innovation.
Why a human‑rights approach to AI in social care is non‑negotiable
AI is already here in adult social care. We see systems that transcribe and draft notes and care assessments, support language translation, monitor movement to reduce falls, and triage requests for support. These tools hold promise, but they also reach into the most intimate spaces of life: our homes, our relationships, our routines. That depth of reach demands a depth of ethics. Oxford’s work in our field is clear: responsible use means AI must support, and never undermine, human rights, independence, choice and control, dignity, equality and wellbeing. UN guidance adds the necessary governance spine: carry out human‑rights due diligence (HRDD) across the AI lifecycle, engage those most affected, and ensure access to remedy.
Regulatory currents are shifting underneath us. The EU’s Artificial Intelligence Act is now law, with staged obligations and a risk‑based structure that bans certain “unacceptable‑risk” practices (for example, systems that exploit vulnerabilities or enable unlawful mass surveillance), sets transparency duties for limited‑risk systems, and imposes stringent requirements on high‑risk uses. Even for Scottish providers, this matters: EU rules apply whenever AI systems or their outputs are placed on the EU market or used in the EU, an extraterritorial pull that many supply chains and software subscriptions already trigger.
In the UK, there is as yet no single AI Act. Instead, a regulator‑led approach asks sector regulators to enforce five core principles (safety, transparency, fairness, accountability, redress) under existing law. For health and social care, AIDRS and NHS information‑governance guidance have become the de facto playbooks: proportionate assurance, evidence of safety, clear data rights, and ethical oversight. In short: the EU sets a high, codified floor; the UK expects context‑based governance through existing duties. Scotland must live comfortably with both. I would particularly commend here the blog written by John Warchus, which unpacks some of this hinterland.
PANEL: how I turn “rights” into decisions
In a previous article I argued that this is an ethical and rights space that benefits from the implementation of the well‑recognised PANEL human rights framework. Nothing that has happened since has changed that perspective, and I would commend the continued use of the PANEL principles (Participation, Accountability, Non‑discrimination, Empowerment, Legality) as a way of moving from slogans to practice. Let me illustrate some of this:
Participation
People who draw on care, unpaid carers and frontline staff should shape AI before it arrives. Participation is not a survey at the end; it is co‑design at the start and dialogue throughout. The Oxford collaboration modelled this: bringing people with lived experience into the room and defining “responsible” in the language of care, not merely in the grammar of compliance. In my view, participation must include real choices about trade‑offs, say, between night‑time safety and the sanctity of the home, because these are moral choices, not merely technical settings.
Mrs M lives with frailty and prefers to sleep with her door ajar. Her care home trials a thermal‑imaging night monitor to reduce falls. In a participation session, residents and families ask: Can we disable the device in private moments? Where is the data processed? Who can review it? The home agrees to local (on‑premise) processing, automatic masking of faces, visible “privacy on” indicators, and a resident‑controlled off‑switch. The result is not the absence of technology; it is what Jenna Joseph calls consented technology.
Accountability
We need an accountability map for every deployment: who is the provider/deployer and who is the vendor/importer; who signs off risks; who investigates incidents; how residents and staff can escalate concerns; what goes into the public learning log. The EU AI Act gives practical scaffolding, namely role definitions, risk management and post‑market monitoring, all of which I would contend Scottish providers can adopt to make accountability legible even in a UK regime that leans on existing law.
A local authority pilots an LLM that drafts care assessments from worker dictation. After several weeks, managers notice the model confidently inserts “no risk of self‑neglect” where the worker had simply paused. An incident is logged; the authority’s accountability map lets a resident, a worker and the vendor meet to replay the transcript and model output. The authority introduces a “yellow banner” policy: any AI‑generated statement about risk must be explicitly confirmed by the social worker, or it is struck out. This example builds on some of the excellent ethical considerations espoused by Rees and Edmonson.
Non‑discrimination
Bias audits must be routine. Predictive models trained on skewed data can channel more surveillance toward some groups than others, or depress service offers without anyone intending harm. NHS ethics work emphasises countering inequalities; UN guidance insists on HRDD with those most at risk. In social care, we should publish fairness metrics, not hide them in procurement files.
A homecare provider uses an AI scheduler. Complaints rise from women working school hours who get fewer paid visits. A fairness audit shows the model equates “availability” with a 12‑hour window and penalises constrained patterns. The provider resets the objective function to value continuity of relationship and paid breaks, not only distance and minutes. Complaints fall; retention rises.
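To make the idea of a routine fairness check more concrete, here is a minimal sketch in Python of the kind of comparison such an audit might publish. The field names, figures and review threshold are hypothetical illustrations rather than a prescribed method; a real audit would use the provider’s own records and the protected characteristics agreed with staff and people who draw on care.

```python
# Illustrative fairness check on scheduler allocations.
# Data, group labels and the 0.8 threshold are hypothetical.

from collections import defaultdict

# Hypothetical weekly allocations: (worker group, paid minutes)
allocations = [
    ("school-hours availability", 900),
    ("school-hours availability", 840),
    ("full availability", 1500),
    ("full availability", 1440),
]

by_group = defaultdict(list)
for group, minutes in allocations:
    by_group[group].append(minutes)

averages = {group: sum(m) / len(m) for group, m in by_group.items()}
best_served = max(averages.values())

print("Average paid minutes per week, by group:")
for group, avg in averages.items():
    ratio = avg / best_served
    flag = "REVIEW" if ratio < 0.8 else "ok"  # illustrative disparity threshold
    print(f"  {group}: {avg:.0f} minutes ({ratio:.0%} of best-served group) [{flag}]")
```

The point is not the arithmetic, which is deliberately simple, but the discipline: the same comparison run on a schedule, with the results published rather than filed away.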
Empowerment
Human rights require capability. We must make explanations understandable, options visible, and opting‑out feasible. Generative tools should lower barriers for those who do not speak English as a first language, or who want draft letters in accessible formats, without locking people into systems they cannot question. UK information‑governance guidance is clear: professionals remain decision‑makers, and individuals retain rights over their data.
Amira, recently arrived in Scotland, struggles to draft a support request. A co‑produced LLM assistant helps her write in Arabic and produces a plain‑English summary for her key worker. The service’s policy, shaped with users, states that the assistant never submits letters; Amira reads, edits and approves every word. Empowerment is the difference between a tool that writes for me and one that lets me say what I want to say.
Legality
We must separate lawful bases for direct care from those for training or improving systems. We need Data Protection Impact Assessments (DPIAs), contracts that say what data can (or cannot) be used for, and documentation that shows how we manage risk. For high‑risk uses, the EU AI Act expects a full risk‑management system and technical documentation; UK guidance expects the same good governance, anchored in GDPR and equality law.
A supplier proposes to “improve accuracy” by training on de‑identified care plans. The provider’s DPIA flags re‑identification risk for small island communities. The contract is amended: no secondary use without explicit, revocable permission from the individual; only synthetic or federated approaches may be used for model refinement; independent re‑identification testing is required before release.
The human rights we must actively protect (and how AI touches each)
When I talk about a human‑rights approach, I mean specific rights, not vague goodwill.
- Dignity: The first test is always whether the person’s sense of self is enhanced or eroded. Ambient sensors and “smart” cameras risk reducing people, especially older adults, to data points. Dignity is preserved when surveillance is not the default; when there are off‑switches; when people choose how technology shows up in their home.
- Privacy: Home should remain a sanctuary. Data minimisation, local processing, strict retention limits and audit trails are the ethics that make sanctuary possible. The EU AI Act’s bans and transparency duties give guardrails, but we must reach for the higher standard of trustworthiness experienced by the person.
- Autonomy and Informed Consent: Consent for direct care is not consent for model training. People should be able to say “yes” to one and “no” to the other. Explanations must be understandable, and meaningful alternatives must exist. UK IG guidance is explicit about these distinctions in care contexts.
- Equality and Non‑discrimination: Age, disability, race, language and socio‑economic status can interact with AI in harmful ways. UN experts call for HRDD with at‑risk groups; NHS programmes focus on countering inequalities. Make fairness testing public and fix disparities quickly.
- Participation and Voice: People affected by AI must have a say. Co‑production is a right, not a courtesy. Oxford’s co‑produced guidance shows how to do this well in social care.
- Accountability and Remedy: Grievance routes should be accessible and non‑retaliatory, with independent escalation. Document AI incidents (privacy breach, discriminatory output, harm to autonomy/dignity) and publish learning. AIDRS provides pathways for when regulatory notification is required.
- Workforce rights and professional judgement: AI must augment care and support work, not deskill workers or micro‑surveil them. Scheduling and productivity analytics need limits and worker voice. Oxford’s governance work and sector commentary both underline this.
Scotland’s digital and policy context: making rights operational
Scotland’s Data Strategy for Health and Social Care emphasises data ethics, interoperability and “better use of data” in preventative care, with work underway toward an AI policy framework. This is precisely where PANEL and HRDD must be baked in, not as an afterthought but as the organising principle for design, procurement and evaluation. The Scottish AI Alliance has moved the public sector toward greater AI transparency, and that same ambition should explicitly extend to adult social care.
Scotland’s continuing debate on social care reform and the incorporation of international human rights is a reminder that law matters when resources are thin and systems are stressed. As the Health and Social Care Alliance have clearly articulated, AI must never become a technical shortcut for structural under‑investment. Efficiency that erodes relationships or displaces human judgement is not progress; it is a restatement of the very problems we claim to solve.
On a Hebridean island, a care provider is offered a “smart home” package: video monitoring, voice analysis, predictive alerts. The team ask residents to join a design circle. The answer that emerges is: “No, unless it is privacy‑first, entirely local, and unless our crofts feel like crofts, not clinics.” The vendor re‑engineers the solution: devices process on‑device; cameras are replaced by door sensors and pressure mats; and a human call‑back is guaranteed within 10 minutes of any alert. Rights shape the technology, not the other way round.
How EU and UK legislation will impact social care in Scotland
Let me put this plainly and practically.
EU AI Act: what Scottish providers must understand, accepting the comment above about the distinction between best practice and legal requirement given our non‑membership of the EU at present.
- Risk‑based duties: The Act bans certain uses outright (e.g., systems exploiting vulnerabilities, certain emotion‑inference practices and untargeted biometric scraping), requires transparency for limited‑risk tools (like chatbots), and imposes heavy governance on high‑risk systems (risk management, data governance, technical documentation, post‑market monitoring). In social care, high‑risk categories often include systems that materially influence access to services or safety, or that manage workers in consequential ways.
- Who is on the hook? The Act distinguishes providers (those who develop or place a system on the market) from deployers (those who use it), as well as importers and distributors. A Scottish care organisation using a vendor’s tool is typically a deployer but may inherit provider‑like obligations when customising or substantially modifying a system. Contracts must make these roles explicit.
- Extraterritorial pull: If your system or its outputs are used in the EU (perhaps because you serve people funded from an EU jurisdiction, or because your vendor markets the same configured product into the EU), you can be brought under the Act’s scope. Care technology is global; procurement must assume cross‑border obligations.
A Scottish care home adopts a falls‑prediction platform from a vendor whose product is also sold in the Netherlands. The vendor classifies the system as high‑risk under the EU Act and asks the home to contribute to real‑world performance monitoring. The home agrees on conditions: the vendor provides the conformity assessment and post‑market monitoring plan; the home captures outcome data only with resident consent and publishes a lay summary locally. The result: the home benefits from better safety while remaining on the right side of both EU and UK expectations.
UK approach: what anchors apply in Scotland
- Principles via existing law: The UK expects regulators to apply five core principles through current frameworks rather than creating a single AI statute. For social care, this means leaning on UK GDPR, equality law, and sector IG rules, plus clear, proportionate assurance. AIDRS acts as the navigational aid across NICE, MHRA, CQC and HRA.
- Health/social care specifics: NHS information‑governance guidance clarifies consent, data minimisation, and the continuing role of professionals as decision‑makers. The NHS AI Ethics initiatives and MHRA’s “AI Airlock” are developing practical safety and evidence expectations, useful anchors even when tools are used in social care rather than the NHS.
- So what for Scotland? Health and social care are, as we know, devolved, but UK regulators and guidance still shape market behaviour. Scottish providers should adopt the highest applicable standard: use EU AI Act role definitions and documentation discipline, follow UK IG rules on data and consent, and align with Scotland’s Data Strategy commitments on ethics and transparency. This “belt and braces” approach reduces risk, builds public trust, and future‑proofs practice.
Practice patterns I commend:
- Co‑produce from the outset: Assemble standing forums of people who draw on care, unpaid carers and staff to review proposals, test interfaces, and agree consent flows. Treat this as governance, not engagement.
- Publish an AI register: What systems do we use? Why? What data do they process? What is the lawful basis? Who is accountable? How can people complain? (If you only have a procurement file, you do not have accountability.) A sketch of what one register entry might look like follows this list.
- Bias and equity routines: Run fairness tests at go‑live and on a schedule; commission independent audits for consequential systems; correct disparities and publish fixes.
- Privacy‑by‑default architecture: Prefer local processing; minimise retention; create human‑readable logs so that residents and families can see what happened and why.
- Explainability for humans: Build short, accessible explanations; train staff to challenge the model and to document overrides; avoid any fully automated adverse decision.
- Contracts that protect rights: Require vendors to disclose training data sources, known limitations, bias mitigations, and incident response plans; prohibit secondary use without explicit permission; align role allocations with EU AI Act definitions.
- Remedy pathways: Offer simple reporting for residents and staff; classify incidents (privacy, bias, harm to dignity/autonomy); share learning in public. Use AIDRS to judge when to notify regulators.
- Worker voice and wellbeing: Set boundaries on algorithmic monitoring; involve staff and unions in tool selection; invest in digital and ethical literacy; design for augmentation, not substitution.
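For illustration only, and assuming hypothetical field names rather than any prescribed schema, a single entry in such an AI register might capture something along the following lines.

```python
# Purely illustrative sketch of one entry in a published AI register.
# The fields and the example values are hypothetical, not a standard.

from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    system_name: str        # what the tool is
    purpose: str            # why we use it
    data_processed: str     # what personal data it touches, and where
    lawful_basis: str       # UK GDPR basis and any consent arrangements
    accountable_owner: str  # named role who signs off risks and incidents
    complaints_route: str   # how residents, families and staff can object

example = AIRegisterEntry(
    system_name="Night-time falls monitor (hypothetical)",
    purpose="Reduce falls while preserving privacy at night",
    data_processed="Movement data only, processed locally; no video retained",
    lawful_basis="Set out in the DPIA; resident consent sought and revocable",
    accountable_owner="Registered manager",
    complaints_route="Any staff member, or the published feedback form",
)

print(example)
```

The format matters far less than the content: a spreadsheet or a page on the organisation’s website would serve just as well, provided the answers to those questions are written down and public.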
Closing reflection: the humanness of care
I hope you have found the above useful as we start to navigate AI in social care in 2026 from a human rights and ethical perspective. There is a lot more to say in this conversation. But essentially social care is a human practice and art before it is a service. AI will be judged not by how clever it seems, but by whether it helps people live the lives they choose, surrounded by relationships that dignify and sustain them. The frameworks are now at hand: the Oxford values‑led guidance, the UN’s HRDD expectations, the EU Act’s structure, and the UK’s principle‑based oversight. Our task in Scotland is to make them lived reality: to ensure that, in our homes and communities, technology becomes a tool for care, not a technique of control.
Donald Macaskill
Photo by Luke Jones on Unsplash

