5-Year Impact Factor: 0.9
Volume 34, 12 Issues, 2024
Letter to the Editor, February 2024

ChatGPT and the Future of Medical Education: Correspondence

By Hinpetch Daungsupawong1, Viroj Wiwanitkit2

Affiliations

  1. Private Academic and Editorial Consultant, Phonhong, Vientiane, Laos
  2. Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, India
doi: 10.29271/jcpsp.2024.02.244

Sir,

We read with interest the article “ChatGPT and the Future of Medical Education: Opportunities and Challenges.”1 This paper investigated the possible impact of artificial intelligence (AI) language models such as ChatGPT on medical education. These models can simulate conversations and provide medical students and professionals with interactive learning opportunities. While there are prospects for progress, several obstacles must be overcome. One major opportunity is ChatGPT's capacity to simulate realistic patient encounters, allowing trainees to practice clinical decision-making and communication skills in a safe and controlled environment. This can improve training, especially for scenarios that are difficult to recreate in typical educational settings. ChatGPT can also provide tailored feedback and assistance, addressing individual learning needs and encouraging self-directed learning.

With its capacity to share knowledge and provide assistance, ChatGPT can help bridge the educational divide and reach a broader audience, including students in remote locations or with limited resources. However, several issues must be addressed. One significant problem is the requirement for accurate and dependable medical information. ChatGPT and other AI language models learn from massive volumes of data, including online sources that may contain errors or outdated information. It is critical to ensure that the model delivers evidence-based, up-to-date medical knowledge in order to avoid the spread of inaccurate or harmful content. Another issue is that AI language models lack emotional intelligence and human connection.

Medicine entails more than knowledge alone; it also requires understanding, compassion, and the ability to establish rapport with patients. ChatGPT may struggle to imitate these components of patient interaction, which are critical for providing comprehensive healthcare. To preserve the human touch in medical education, it is essential to combine AI models with instruction in interpersonal skills and humanistic methods. Ethical considerations are also essential. When employing AI language models to simulate patient encounters, patient privacy and confidentiality must be preserved. Maintaining patient trust and ensuring the ethical use of AI in medical education require robust data security measures and adherence to privacy legislation.

Finally, ChatGPT and other AI language models have the potential to transform medical education by enabling interactive learning experiences and increasing access to high-quality instructional resources. However, challenges concerning accuracy, emotional intelligence, and ethics must be properly addressed. By seizing these opportunities and tackling these problems, AI language models can contribute to the future of medical education, strengthening the knowledge and skills of healthcare workers while preserving the essential human elements of patient care.

A large training set and contemporary techniques are necessary to reduce bias and errors in chatbots,2,3 because relying solely on a huge data source can itself introduce problems. The use of chatbots also raises ethical considerations, as they may have unforeseen or unwanted repercussions. As AI language models advance, ethical controls and constraints must be imposed to prevent the spread of harmful ideas and incorrect information.

COMPETING INTEREST:
The authors did not declare any conflict of interest.

AUTHORS’ CONTRIBUTION:
HD: Generated the idea, wrote, analysed, and approved the manuscript.
VW: Generated the idea, supervised, and approved the draft.
All authors approved the final version of the manuscript for publication.

REFERENCES

  1. Shaheen A, Azam F, Amir M. ChatGPT and the future of medical education: Opportunities and challenges. J Coll Physicians Surg Pak 2023; 33(10):1207. doi: 10.29271/jcpsp.2023.10.1207.
  2. Kleebayoon A, Wiwanitkit V. Artificial Intelligence, chatbots, plagiarism and basic honesty: Comment. Cell Mol Bioeng 2023; 16(2):173-4. doi: 10.1007/s12195-023-00759-x.
  3. Kleebayoon A, Wiwanitkit V. Comment on "How may Chat-GPT impact medical teaching?" Rev Assoc Med Bras (1992) 2023; 69(8):e20230593. doi: 10.1590/1806-9282.20230593.

Authors' Reply Section

By Fahad Azam

Affiliations

  1. Department of Pharmacology, Shifa College of Dentistry, Shifa College of Medicine, Shifa Tameer-e-Millat University, Islamabad, Pakistan


AUTHORS’ REPLY

Sir,

The thoughtful insights by the authors add depth to the discourse on the potential impact of artificial intelligence (AI) language models, particularly ChatGPT, on medical education. We fully acknowledge and appreciate the highlighted opportunities and challenges regarding the integration of ChatGPT into medical training. The emphasis on the potential for ChatGPT to simulate genuine patient encounters and provide tailored feedback aligns with our aim to enhance interactive learning experiences for medical students and professionals. Indeed, replicating challenging clinical scenarios in a safe and controlled environment can significantly augment the training outcomes.

The concerns regarding the accuracy, reliability, and validity of medical information provided by AI language models are crucial. Different fields of medicine, such as diagnostic radiology, are witnessing remarkable progress in the application of AI, augmented reality, and virtual reality simulation systems in clinical practice.1 However, we agree that meticulous, ongoing validation and verification of the training dataset are vital to ensure that AI disseminates evidence-based, up-to-date medical knowledge and does not spread inaccurate or harmful information. Additionally, issues related to data security, patient safety, ethical implications, and the potential for bias in AI algorithms need careful consideration.

Furthermore, the emphasis on the human aspects of medicine, including understanding, compassion, and the physician-patient relationship, is highly valid. AI language models like ChatGPT may fall short in replicating these essential elements of healthcare.2 We concur that integrating interpersonal skills and human touch into medical education is imperative to complement the strengths of AI models and maintain humanistic values in patient interactions.

We wholeheartedly agree that the evolution of AI language models must be accompanied by stringent ethics controls and guidelines. Preserving patient privacy and confidentiality, as well as adhering to robust data security measures, encryption, access controls, and regular vulnerability assessments are paramount in the integration of AI language models in healthcare organisations.3



 

We would like to add that comprehensive ethical guidelines on humanistic, legal, algorithm, and informational perspectives are required to effectively address these distinct challenges.4 Given the prevalent use and overreliance of today’s learners on AI models, integrating these ethical perspectives into all educational curricula is a fundamental requirement of contemporary education.

In summary, we appreciate the insightful response and the depth it adds to the ongoing conversation about the integration of AI, particularly ChatGPT, in medical education. The added perspectives will undoubtedly contribute to refining the implementation, responsible use and ethical considerations associated with this transformative technology.
 

REFERENCES

  1. von Ende E, Ryan S, Crain MA, Makary MS. Artificial intelligence, augmented reality, and virtual reality advances and applications in interventional radiology. Diagnostics (Basel) 2023; 13(5):892. doi: 10.3390/diagnostics13050892.
  2. Coghlan S. Robots and the possibility of humanistic care. Int J Soc Robot 2022; 14(10):2095-108. doi: 10.1007/s12369-021-00804-7.
  3. El Emam K, Jonker E, Arbuckle L, Malin B. A systematic review of re-identification attacks on health data. PLoS One 2011; 6(12):e28071. doi: 10.1371/journal.pone.0028071.
  4. Wang C, Liu S, Yang H, Guo J, Wu Y, Liu J. Ethical considerations of using ChatGPT in health care. J Med Internet Res 2023; 25:e48009. doi: 10.2196/48009.