According to a new study, artificial intelligence can rival doctors in assessing eye problems.
The clinical knowledge and reasoning ability of this fast-improving technology is approaching the level of specialist ophthalmologists, say scientists at the University of Cambridge. GPT-4, a large AI language model, was tested against doctors at different stages of their careers, including junior non-specialist doctors as well as trainee and expert ophthalmologists.
Each was presented with a series of 87 patient scenarios involving a specific eye problem and asked to make a diagnosis or give treatment advice by choosing from four options. GPT-4 scored "significantly better" on the test than the junior non-specialist doctors, who are comparable to general practitioners (GPs) in their specialist knowledge of ophthalmology.
The results, published in the journal PLOS Digital Health, also showed that GPT-4 achieved scores similar to trainee and expert ophthalmologists, although the best-performing doctors scored higher. The Cambridge team says large language models are unlikely to replace healthcare professionals, but could instead improve healthcare as part of the clinical workflow.
The researchers believe that large language models such as GPT-4 could be useful for offering eye advice, diagnosis and management suggestions in "well-controlled contexts," such as patient triage or settings where access to specialist healthcare professionals is limited. Dr Arun Thirunavukarasu, one of the study's authors, said: "We could realistically deploy AI to triage patients with eye problems and decide which cases are emergencies that need to be seen by a specialist immediately, which can be seen by a GP, and which need no treatment at all.
"The models could follow clear algorithms already in use, and we found that GPT-4 is as good as expert clinicians at processing eye symptoms and signs to answer more complicated questions. With further development, large language models could also advise GPs struggling to get prompt advice from ophthalmologists. People in the UK are waiting longer than ever for eye care.
"Significant volumes of clinical text are needed to further refine and scale up these models, and work is already underway around the world to facilitate this." The team says its study is superior to previous ones because it compared the AI's abilities with those of practising doctors, rather than with sets of examination results.
Dr Thirunavukarasu, now an academic doctor at Oxford University Hospitals NHS Foundation Trust, said: "Doctors aren't revising for exams throughout their careers. We wanted to see how the AI fared when pitted against the on-the-spot knowledge and abilities of practising doctors, to provide a fair comparison."
He added: "We also want to characterise the capabilities and limitations of commercially available models, as patients may already be using them, rather than the internet, to seek advice."
The test included questions about a range of eye health problems (including extreme sensitivity to light, decreased vision, injury, itching and eye pain) taken from a textbook used to assess trainee ophthalmologists. The textbook is not freely available on the internet, so it is unlikely that its content was included in GPT-4's training datasets.
Dr Thirunavukarasu said: "Even accounting for the future use of AI, I believe doctors will remain in charge of patient care. The most important thing is to empower patients to decide whether they want computer systems involved or not. That will be an individual decision for each patient to make." GPT-4 and GPT-3.5, or "Generative Pre-trained Transformers," are trained on datasets containing billions of words from articles, books and other internet sources.
GPT-4 powers the online chatbot ChatGPT to provide "tailor-made" answers to human queries. ChatGPT has recently attracted attention in medicine for achieving passing scores on medical school exams and for producing more accurate and empathetic responses to patient queries than human doctors.
The researchers noted that the field of large AI language models is evolving "very rapidly" and that more sophisticated models have been released since the study, which may be even closer to the level of expert ophthalmologists.