Medical accountability in the digital health era: the pros and the pitfalls


Integration of technologies into clinical practice has introduced new, hitherto unknown challenges. Photograph used for representational purposes only. | Photo Credit: Getty Images

The doctor was trusted and dominant. The patient was uninformed, grateful and believed in fate. It was presumed that good intentions, training and facilities led to good results. Amicability was mistaken for quality control. If something went wrong, patients often would not know, and even if they did, would not complain. This was in the mid-1970s. We were accountable only to our conscience.

In 2025, in the Digital Health (DH) era, there are no presumptions. The complexity of modern healthcare increases opportunities for error. A superspecialist is, after all, only a mouse click away. Today’s doctor deals with quality control, audits, protocols, regulations, guidelines, frequent inspections and Artificial Intelligence (AI).

The well-informed patient is a consumer: a negotiator buying a product, calling the shots, playing an important role in changing the system. Continuous Quality Improvement techniques used in monitoring industrial processes are now used in tracking patient care. Hospitals, like factories, want the end product, cured patients, to meet Six Sigma standards. Should Deming’s philosophy, ‘customers who use a product should have a say in its design’, be applied to healthcare?

In a play, actors recite lines written for them. In a musical, singers cannot choose their own tunes. The hackneyed phrase “clinical judgement” is giving way to standardisation of care, to achieve consistency and predictability. Will I be held accountable if a standard, approved, evidence-based algorithm is not implemented? Care received may not always adhere to scientific evidence. A doctor may follow the results of one study, disregard the findings of a second, and be unaware of a third. Doing the right thing and doing it right are today’s buzzwords.

Errors and liability

Medical accountability in the DH era should not be viewed as displacement from the pedestal one has been sitting on for centuries. Integration of technologies into clinical practice has introduced new, hitherto unknown challenges. Technology-enabled radical transformation outpaces legal and regulatory frameworks. Premature regulation could, however, stifle innovation and competitiveness. When a digital tool contributes to a medical error, quantifying shared responsibility among clinicians, technology providers and healthcare institutions would depend on contextual interpretation. A clinician may not even understand how a system translates input data into output decisions, and cannot exercise direct control over the recommendations a system generates. Malpractice claims are judged against “customary medical practice”, which is still evolving in the DH era. Moral culpability is different from legal liability.

DH tools empower patients by providing real-time access to their health data. This facilitates better self-management, shifting some responsibility to patients. Adverse outcomes may arise from user error (e.g., misuse of wearables). Clear documentation and education are essential to mitigate liability risks for healthcare providers and manufacturers.

The legislative and regulatory framework in India, with reference to liability when using technology in healthcare, has significant gaps. Frequent addenda are necessary to clarify real-world concerns. Version 2.0 of the Telemedicine Practice Guidelines, initially notified in March 2020, is a step in the right direction. We must be future-ready. Defective equipment and medical devices are subject to laws governing product liability. The European Union’s revised Product Liability Directive (PLD) expands accountability to include software and AI components embedded in medical devices. Manufacturers are liable for harm caused by defective updates and evolving algorithms.

Medical accountability in AI-assisted healthcare

Lack of clarity in AI accountability could delay its adoption. Some AI systems take inputs and generate outputs without disclosing underlying measurements or reasoning: the black-box problem. AI systems per se can contribute to unexpected adverse outcomes. To prevent healthcare practitioners from being unfairly held accountable, implementation of standardised policies is essential. Appropriate legislation is necessary to allow apportionment of damages. Are existing anti-discrimination and human rights laws sufficient to address algorithmic bias, where an AI algorithm produces a poor outcome in a historically disadvantaged group to which the patient belongs? Would the clinician’s defence lawyer be able to prove this in the first place? The law is generally interpreted contextually. Perceptions vary among patients, clinicians and the legal system, and at different times.

Fear of accountability galvanises us to recognise critical gaps between current and desired results, and to take ownership of closing those gaps. Being accountable for one’s behaviour is part of growing up. Interpretation of the law differs depending on many variables. Pinpointing accountability in the DH era is a grey area, unlikely to be resolved soon. DH is not mathematics. It will never ever be black or white. It will always be various shades of grey. Ultimately, we will resort to the centuries-old judicial clichés caveat emptor (let the buyer beware) and res ipsa loquitur (the thing speaks for itself).

(Dr. K. Ganapathy is a distinguished professor at the Tamil Nadu Dr. MGR Medical University and past president of the Neurological Society of India and the Telemedicine Society of India. [email protected])


