Artificial intelligence can be a beneficial scientific instrument, yet it may also threaten a doctor’s vital function.
I gained a comparable kind of intelligence throughout medical school, when my brain was inundated with a billion factoids that wove themselves into a tapestry of knowledge. Buried in my unconscious mind were the details of nephrology, ready when a patient arrived with a failing kidney, and the physiology of a failing heart, ready when a patient’s lungs filled with fluid.
Computerised artificial intelligence, however, has different uses in medicine. The foundation of AI is more accurate pattern recognition: identifying diseases such as diabetes or heart disease from a retinal examination before symptoms appear, or even determining the type of mutations in a glioma (one of the worst kinds of brain tumour) while surgery is still being performed.
Recent studies have also examined pre-cancerous stem cells in the blood, among other variables that AI can monitor to aid diagnosis and treatment.
Yet AI will never replace my years of experience-based clinical judgement or my empathy for my patients, and it poses a growing threat to both.
The line separating a computer from a doctor is readily crossed.
One only needs to look at a study recently published in Nature Biomedical Engineering, which examined intraoperative video surveillance, to see that the line between doctor and computer is too readily blurred in ways that could intimidate surgeons or even jeopardise their abilities.
What about the risk of malpractice? If you are a radiologist, dermatologist, or surgeon who, drawing on years of clinical judgement and experience, chooses to disagree with the AI feed and you are later shown to be incorrect, what stops your patient from suing you and citing the AI’s suggestions as support?
This may deter clinicians from defying AI recommendations for therapy, even when their judgement suggests they should.
Remember that in clinical care AI can only provide generic answers; it cannot understand the specifics of your situation or your history.
ChatGPT, the popular new AI chatbot, responds to users’ queries and is already being used by patients for medical guidance.
I worry about how patients will use AI guidance.
This development has Kohane giddy, but it worries me a great deal. He is correct that my ability to spend face-to-face time with patients in the office is constrained, particularly by the documentation requirements of electronic health records. But the solution is most definitely not post-visit consultations with artificial intelligence, which could easily provide information that harms rather than benefits a patient.
In the same New England Journal of Medicine article, another AI expert, Dr. Maia Hightower, the chief digital and technology officer at University of Chicago Medicine, drew attention to the expanding use of AI as an administrative tool in the murky interface between medical professionals, patients, and insurance companies.
“Thus, we frequently use bots or automation to transport information from the health system to the insurance business and back in order to engage with payers and our insurance companies,” explained Hightower. “We are aware that insurance companies frequently employ AI algorithms to determine whether or not to cover a specific prescription or test when requesting prior permission for operations. And as a provider company, we don’t always act transparently in certain situations.”
This bothers me greatly as a practising internist, because I can foresee a time when personalised medicine is supplanted by algorithms and insurance coverage battles grow even worse than they already are. What is to stop insurance companies from replacing me with a less expensive, more reliable AI robot that practises only the science of medicine?
At most, AI will resemble a commercial jet’s autopilot. The technology can aid in flight, but passengers still prefer a real pilot in the cockpit, ready to take over in an emergency.