Journal of the College of Physicians and Surgeons Pakistan
ISSN: 1022-386X (PRINT)
ISSN: 1681-7168 (ONLINE)
doi: 10.29271/jcpsp.2025.12.1626

ABSTRACT
Artificial intelligence (AI) tools have been integrated into medical research and writing at a rapid pace since ChatGPT was launched in November 2022. This development has created unprecedented opportunities for efficiency and accessibility in research and writing. This viewpoint examines the potential benefits and risks associated with the adoption of AI tools, particularly among medical students, early-career researchers, and healthcare professionals. The authors argue that uncritical use of AI can lead to superficial learning and compromise the development of essential critical thinking skills. Effective use of AI tools requires background knowledge, as illustrated by examples in research question generation, literature review, data interpretation in clinical trials, and manuscript preparation. The authors emphasise the value of traditional skills, such as critical analysis, in-depth reading, and independent literature searching, in the medical profession and suggest strategies for the ethical and effective integration of AI tools into research workflows, with a focus on building a strong foundation of knowledge before relying on these tools. This viewpoint also offers recommendations for educators and senior researchers in guiding the next generation of medical professionals. There is a need for collaboration and dialogue among all key stakeholders to ensure that AI tools enhance, rather than diminish, the quality and integrity of medical research and education.
Key Words: Artificial intelligence, Medical writing, Medical research.
Since the public launch of ChatGPT in November 2022, the integration of artificial intelligence (AI) tools into medical research and writing has increased. This development has the potential to revolutionise the traditional approaches to conducting medical research and writing scientific papers. These tools can be used for brainstorming ideas, conducting literature reviews, performing data analysis, creating outlines, generating content, and serving as personalised tutors.1 Some of the benefits of AI integration include increased efficiency in information processing, improved accessibility to vast amounts of medical knowledge, and the ability to handle complex data analysis tasks rapidly. For example, AI-powered literature review tools can quickly scan thousands of papers, identifying relevant studies and summarising key findings.2 AI can significantly enhance the clarity, style, and coherence of scientific writing, assisting researchers who are non-native English speakers in effectively conveying their research.3
This widespread availability and rapid adoption of AI tools offer unique opportunities to enhance efficiency and accessibility in medical research and writing.4 However, novice researchers and students who lack an in-depth understanding of their field and subject may rely excessively on AI outputs. This can limit learning and hinder the development of the critical thinking skills necessary for medical research. Experts are also concerned about a surge in fake scientific articles generated by current AI language models; despite appearing sophisticated, these papers may contain semantic inaccuracies detectable by expert readers.5
This viewpoint discusses the potential risks associated with unplanned AI integration in medical research and writing, particularly by students, early-career professionals, and young researchers. The authors aim to highlight the importance of using this technological innovation with a strong subject knowledge, development of fundamental skills, and critical thinking abilities. Some specific examples are presented to highlight the importance of adequate background knowledge as an essential prerequisite for the effective use of AI tools. There is a possibility that excessive dependence on AI tools might result in low-quality information or incorrect or fabricated data, which is often referred to as AI hallucinations.6 These hallucinations occur when AI generates content that appears plausible but is inaccurate, misleading, or entirely untrue. For example, an AI-powered literature review tool may produce citations that do not exist or misrepresent research findings, potentially leading researchers to rely on faulty evidence.
Tech-Savvy Generation and AI Accessibility:
The widespread adoption of AI tools in medical research and writing has been facilitated by their increasing accessibility and the tech-savvy nature of the younger generation.7 Many of these tools are freely available, offer a freemium version, or can be easily integrated into existing research platforms, making them readily accessible to students and early-career researchers.
The comfort level of younger generations with technology has led to the enthusiastic adoption of these tools. This widespread use reflects the potential of AI to transform medical education and research practices.
Risk of Superficial Learning and the Role of Background Knowledge:
While there are many benefits of AI tools, we are also concerned about their potential to promote superficial learning when these tools are used uncritically. The ease with which AI can generate content or analyse data is very tempting and may lead to an over-reliance on these tools.8 This can potentially reduce the emphasis on reading, critical appraisal, and independent literature search skills.
The following specific examples, related to different steps of medical research and writing, demonstrate how over-reliance on AI, particularly among early-career professionals and students, can stifle creativity and lead to the blind acceptance of AI-generated output due to a lack of background subject knowledge and research skills.
Generating Research Questions and Ideas:
A novice researcher without background knowledge and extensive reading on the topic may lack the expertise to identify which questions are truly novel, feasible, or ethically appropriate to pursue. This could lead to research questions that are already well-studied, impractical to investigate, or misaligned with current ethical standards in medical research. The mere ability of a tech-savvy researcher to generate ten research questions with AI, in the absence of background knowledge, is of little use and may even be counterproductive.
AI-Powered Literature Review:
While AI tools can quickly identify and summarise existing studies, a researcher without background knowledge may fail to critically evaluate the quality and relevance of the included studies and may blindly rely on the AI-generated results. This could produce a literature review that overlooks significant research gaps and includes outdated or non-credible sources, or even fake citations.9
Data Interpretation:
AI tools can create results, graphs, and other outputs. However, a researcher without a background understanding of biostatistics and clinical trial design may misinterpret these findings. They may not be able to differentiate clinically relevant results from statistically significant results. They may not recognise confounding variables, biases, or limitations inherent in the study. This can result in incorrect conclusions being drawn from the data, potentially affecting patient care, and leading to flawed recommendations in clinical practice.
Assistance with Manuscript Writing:
An early-career researcher relying solely on the AI-generated output may not be able to develop scientific writing skills. In addition, there is a possibility that the researcher may miss essential elements of scientific writing, such as the accurate description of methodology, the appropriate discussion of results in the context of existing literature, and the identification of study limitations. AI tools may also fail to adhere to specific journal guidelines without human oversight. Although the manuscript may appear well-written, it can be scientifically weak, lacking the critical insights and in-depth analysis required in academic publishing. In addition, directly copying and pasting AI-generated text can result in plagiarism,10 potentially leading to manuscript rejection and a loss of credibility for the researcher.
These examples highlight that using AI tools without strong background knowledge and skills can lead to a superficial understanding and misuse of AI output, potentially compromising the depth and quality of research and learning.
Importance of Skills and Knowledge in the Medical Profession:
Despite recent AI advances, traditional skills such as reading, writing, retaining important information, critical analysis, and verbal expression remain of prime importance for healthcare professionals. These skills form the foundation of effective patient care, professional communication, and the advancement of medical knowledge. Developing such important skills requires substantial effort, time, and dedication on the part of learners. It may involve an extensive literature review, in-depth reading, critical appraisal of evidence, conducting fieldwork, data analysis, interpretation of results, and writing a manuscript. All these activities contribute to a deeper and more nuanced understanding of the subject, as well as to personal and professional growth, which cannot be replicated through reliance on AI tools alone.
Guidance for Early-Career Professionals and Students:
While highlighting the potential risks, it is also important to acknowledge the value of large language models and other AI tools in medical research and writing. When used ethically and critically with human oversight, these tools can significantly enhance the process of research writing.
For early-career researchers and students, it is important to develop a solid foundation of knowledge before heavily relying on AI tools. These tools should be considered and used as research assistants that augment human intelligence rather than replace it.
The following strategies are suggested for the effective and ethical use of AI tools in research and learning:
The ongoing rapid integration of AI tools in research and writing presents great opportunities as well as significant challenges. These tools offer young researchers and students remarkable capabilities in data analysis, information synthesis, and content generation. However, their uncritical use risks promoting superficial learning and compromising the development of essential research skills.
It is important for early-career researchers and students to first develop a knowledge base (by reading and critical appraisal of literature) and learn the essential research skills (research methodology and basic biostatistics) before integrating AI tools in their research workflow. In addition, they must use these tools in an ethical and transparent manner as a research assistant instead of an academic crutch.
Educationists, faculty members, and senior researchers also have a responsibility to guide the next generation of researchers in the ethical and appropriate use of AI tools. This will ensure that they develop the critical analysis skills and deeper understanding necessary for excellence in medical practice and research. All key stakeholders, including faculty members, medical educationists, researchers, students, and administrators, should engage in an open and ongoing dialogue about the ethical and effective use of AI in medical research.
COMPETING INTEREST:
The authors declared no conflict of interest.
AUTHORS’ CONTRIBUTION:
FAR, FF: Conceived the idea, performed the literature review, wrote the first draft, approved the manuscript, and took responsibility for the content.
Both authors approved the final version of the manuscript to be published.
REFERENCES