Should AI inform - or replace - a clinician’s clinical decisions?
- 16 August, 2018 09:59
People are using AI-powered tools every day, without even knowing it.
Think of your smartphone’s autocorrect feature (error-prone as it is), or your email provider’s spam filter.
But should people balk when they realise AI is being used not only to help diagnose a condition, but also to inform, or even replace, a clinician’s clinical decision?
Ungureanu’s paper identifies the complexity of the subject, looks at current trends and paints a possible future.
AI doesn’t have awareness, empathy, feelings and emotions – and this is what differentiates a health practitioner from a machine, a system, an algorithm
Ungureanu defines artificial intelligence as the use of software to mirror human cognitive functions, with possible outcomes including similar or better abstract reasoning capabilities and the merging of software with bio-hardware to exceed human abilities.
He points out that the implications of AI as a decision maker can be significant, with many ramifications, including legal and ethical ones.
However, the implications of not allowing AI to take decisions can be equally significant, potentially resulting in loss of life and harming health outcomes, he states.
Ungureanu’s paper notes the widespread use of AI outside healthcare, from optimising energy flows in renewable energy production, to cyber forensics, to tackling issues in complex sectors such as engineering and construction.
The defence sector is in more advanced stages of AI adoption, he states.
While humans can cope with multiple attacks, they cannot cope with thousands. Once again, this comes down to AI being used to gather, capture, analyse and synthesise information and, if required, act on it under the supervision of a human operator, he writes.
But how about transposing these advantages to healthcare?
Ungureanu points out AI is already being used in this sector.
AI is present in devices that correct errors in captured data and perform repetitive tasks, with the added differentiator that it learns and adapts, improving the quality of its outcomes; it is also assisting surgeries and alerting clinicians to anomalies.
It can be found, he says, in manipulating vast amounts of data and making sense of it, so that “a human can interpret the information and take actions”.
AI-powered robots that can support daily living activities, or interact with and be a companion to those in need, are already improving health outcomes.
From optimisation of resources to personalised healthcare, AI is here and is here for the long run, says Ungureanu.
But there are challenges ahead.
“Like all innovative technologies that have the potential to be a significant disruptor, deriving benefits from AI is a journey that covers process automation to insights and intelligence and requires healthcare organisations to overcome barriers such as quality of data or workforce transformation.”
As AI reduces the time clinicians need to sift through vast amounts of data and discern useful from useless information, it can only lead to more face-to-face time with patients, fewer wasted resources and an optimised healthcare system, he states.
But could - or should - AI replace or inform a clinician’s clinical decisions?
Based on his research, he concludes: “It can, and it should inform; it can and it should support; it can enable, it can prompt, it can flag flaws, but it can’t and it shouldn’t replace a clinician’s decision.”
AI is not a consciousness, he stresses.
“It doesn’t have awareness, it has no empathy, no feelings and no emotions – and this is what differentiates a health practitioner from a machine, a system, an algorithm.
“It has not gone through the ‘test of time’ to be verified and validated,” he adds.
AI is built on developers’ code and algorithms, which carry biases that are not necessarily relevant to the care setting in which it operates.
“To have an AI conclude, take a decision or take an action for or on behalf of a patient, it would imply that the patient has taken the decision,” he states.
He therefore calls for further study and development of testing and validation protocols for narrow and general AI.
Just as Alan Turing developed the ‘Imitation Game’ to evaluate whether machines had attained intelligence, we require statistically sound methods and protocols to validate the algorithms, the AI, that will hold someone’s life in its hands, he states.
“Until general AI has awareness, consciousness and suffers consequences for its actions, I conclude that no AI should replace a clinician’s decision,” he writes.
“And if this is not scary enough, the day when we trust an AI to save someone’s life…would this lead to weaponising AIs?”
“The question of ‘then what?’ rises from the thought that natural intelligence could one day be the less advanced intelligence.”