5-Year Impact Factor: 0.9
Volume 35, 12 Issues, 2025
Viewpoint | December 2025

AI Tools in Medical Research and Writing: Balancing Innovation with Critical Thinking among Young Researchers and Students

By Farooq Azam Rathore1, Fareeha Farooq2

Affiliations

  1. Department of Rehabilitation Medicine, Quetta Institute of Medical Sciences, Quetta, Pakistan
  2. Department of Biochemistry, Quetta Institute of Medical Sciences, Quetta, Pakistan
doi: 10.29271/jcpsp.2025.12.1626

ABSTRACT
Artificial intelligence (AI) tools have been integrated into medical research and writing at a rapid pace since ChatGPT was launched in November 2022. This development has created unprecedented opportunities for efficiency and accessibility in research and writing. This viewpoint examines the potential benefits and risks associated with the adoption of AI tools, particularly among medical students and early-career healthcare researchers. The authors argue that uncritical use of AI can lead to superficial learning and compromise the development of essential critical thinking skills. Effective use of AI tools requires background knowledge, as illustrated by examples in research question generation, literature review, data interpretation in clinical trials, and manuscript preparation. The authors emphasise the value of traditional skills, such as critical analysis, in-depth reading, and independent literature search, in the medical professions, and suggest strategies for the ethical and effective integration of AI tools into research workflows, with a focus on building a strong foundation of knowledge before relying on these tools. This viewpoint offers recommendations for educators and senior researchers in guiding the next generation of medical professionals. Collaboration and dialogue among all key stakeholders are needed to ensure that AI tools enhance, rather than diminish, the quality and integrity of medical research and education.

Key Words: Artificial intelligence, Medical writing, Medical research.

Since the public launch of ChatGPT in November 2022, the integration of artificial intelligence (AI) tools into medical research and writing has increased. This development has the potential to revolutionise the traditional approaches to conducting medical research and writing scientific papers. These tools can be used for brainstorming ideas, conducting literature reviews, performing data analysis, creating outlines, generating content, and serving as personalised tutors.1 Some of the benefits of AI integration include increased efficiency in information processing, improved accessibility to vast amounts of medical knowledge, and the ability to handle complex data analysis tasks rapidly. For example, AI-powered literature review tools can quickly scan thousands of papers, identifying relevant studies and summarising key findings.2 AI can significantly enhance the clarity, style, and coherence of scientific writing, assisting researchers who are non-native English speakers in effectively conveying their research.3

This widespread availability and rapid adoption of AI tools offer unique opportunities to enhance efficiency and accessibility in medical research and writing.4 However, novice researchers and students who lack an in-depth understanding of their field and subject may rely excessively on AI outputs. This can limit learning and impede the development of the critical thinking skills necessary for medical research. Experts are also concerned about a surge in fake scientific articles generated by current AI language models. Despite appearing sophisticated, these papers may contain semantic inaccuracies detectable by expert readers.5

This viewpoint discusses the potential risks associated with unplanned AI integration in medical research and writing, particularly by students, early-career professionals, and young researchers. The authors aim to highlight the importance of pairing this technological innovation with strong subject knowledge, fundamental skills, and critical thinking abilities. Specific examples are presented to show that adequate background knowledge is an essential prerequisite for the effective use of AI tools. Excessive dependence on AI tools might yield low-quality information or incorrect or fabricated data, often referred to as AI hallucinations.6 These hallucinations occur when AI generates content that appears plausible but is inaccurate, misleading, or entirely untrue. For example, an AI-powered literature review tool may produce citations that do not exist or misrepresent research findings, potentially leading researchers to rely on faulty evidence.

Tech-Savvy Generation and AI Accessibility:

The widespread adoption of AI tools in medical research and writing has been facilitated by their increasing accessibility and the tech-savvy nature of the younger generation.7 Many of these tools are either freely available, have a freemium version, or can be easily integrated into existing research platforms. This makes them easily accessible to students and early-career researchers.

The comfort level of younger generations with technology has led to the enthusiastic adoption of these tools. This widespread use reflects the potential of AI to transform medical education and research practices.

Risk of Superficial Learning and the Role of Background Knowledge:

While there are many benefits of AI tools, we are also concerned about their potential to promote superficial learning when these tools are used uncritically. The ease with which AI can generate content or analyse data is very tempting and may lead to an over-reliance on these tools.8 This can potentially reduce the emphasis on reading, critical appraisal, and independent literature search skills.

The following are specific examples related to different steps of medical research and writing. They demonstrate how overreliance on AI, particularly among early-career professionals and students, can stifle creativity and lead to blind acceptance of AI-generated output due to a lack of background subject knowledge and research skills.

Generating Research Questions and Ideas:

A novice researcher without background knowledge and extensive reading on the topic might not have the expertise to identify which questions are truly novel, feasible, or ethically appropriate to pursue. This could lead to pursuing research questions that are already well-studied, impractical to investigate, or not aligned with current ethical standards in medical research. The mere ability to prompt an AI tool to generate ten research questions is of little use to a tech-savvy researcher who lacks the background knowledge to evaluate them, and may even be counterproductive.

AI-Powered Literature Review:

While AI tools can quickly identify and summarise existing studies, a researcher without background knowledge may fail to critically evaluate the quality and relevance of the included studies and may blindly rely on the AI-generated results. This could produce a literature review that overlooks significant research gaps and includes outdated or non-credible sources, or even fake citations.9

Data Interpretation:

AI tools can generate results, graphs, and other outputs. However, a researcher without a background understanding of biostatistics and clinical trial design may misinterpret these findings. They may not be able to differentiate clinically relevant results from merely statistically significant ones, and may not recognise confounding variables, biases, or limitations inherent in the study. This can result in incorrect conclusions being drawn from the data, potentially affecting patient care and leading to flawed recommendations in clinical practice.

Assistance with Manuscript Writing:

An early-career researcher relying solely on the AI-generated output may not be able to develop scientific writing skills. In addition, there is a possibility that the researcher may miss essential elements of scientific writing, such as the accurate description of methodology, the appropriate discussion of results in the context of existing literature, and the identification of study limitations. AI tools may also fail to adhere to specific journal guidelines without human oversight. Although the manuscript may appear well-written, it can be scientifically weak, lacking the critical insights and in-depth analysis required in academic publishing. In addition, directly copying and pasting AI-generated text can result in plagiarism,10 potentially leading to manuscript rejection and a loss of credibility for the researcher.

These examples highlight that using AI tools without strong background knowledge and skills can lead to superficial understanding and misuse of AI output, potentially compromising the depth and quality of research and learning.

Importance of Skills and Knowledge in the Medical Profession:

Despite recent AI advances, traditional skills such as reading, writing, retaining important information, critical analysis, and verbal expression remain of prime importance for healthcare professionals. These skills form the foundation of effective patient care, professional communication, and the advancement of medical knowledge. Developing them requires substantial effort, time, and dedication on the part of learners, and may involve extensive literature review, in-depth reading, critical appraisal of evidence, fieldwork, data analysis, interpretation of results, and manuscript writing. All these activities contribute to a deeper and more nuanced understanding of the subject, as well as personal and professional growth, that cannot be replicated through reliance on AI tools alone.

Guidance for Early-Career Professionals and Students:

While highlighting the potential risks, it is also important to acknowledge the value of large language models and other AI tools in medical research and writing. When used ethically and critically with human oversight, these tools can significantly enhance the process of research writing.

For early-career researchers and students, it is important to develop a solid foundation of knowledge before heavily relying on AI tools. These tools should be considered and used as research assistants that augment human intelligence rather than replace it.

 The following strategies are suggested for the effective and ethical use of AI tools in research and learning:

  1. Develop a strong background subject knowledge through traditional study methods before incorporating AI tools in the research and writing workflow.
  2. Use AI-generated content as a starting point or for brainstorming ideas for further investigation. This content should not be treated as a final product.
  3. Always critically evaluate AI outputs by cross-referencing them with reputable sources and one's own knowledge.11
  4. Be transparent about the use of AI tools in research and writing. Authors and researchers should ideally keep a record of the use of AI tools in their research project and declare it as per the publishers’ guidelines or instructions of the journal whenever required.12
  5. Stay informed about the latest developments in AI ethics and best practices in the use of AI tools for research and writing.13 Always acknowledge and declare the use of AI tools as per the instructions of the journal.
  6. Ensure that data privacy and security concerns are adequately addressed when utilising AI tools, particularly in handling sensitive research data or patient information.

The ongoing rapid integration of AI tools in research and writing provides great opportunities as well as significant challenges. These tools offer remarkable capabilities in data analysis, information synthesis, and content generation for young researchers and students. However, their blind use risks promoting superficial learning and compromising the development of essential research skills.

It is important for early-career researchers and students to first develop a knowledge base (through reading and critical appraisal of the literature) and learn essential research skills (research methodology and basic biostatistics) before integrating AI tools into their research workflow. In addition, they must use these tools ethically and transparently, as research assistants rather than academic crutches.

Educationists, faculty members, and senior researchers also have a responsibility to guide the next generation of researchers in the ethical and appropriate use of AI tools. This will ensure that they develop the critical analysis skills and deeper understanding necessary for excellence in medical practice and research. All important stakeholders, including faculty members, medical educationists, researchers, students, and administrators, should engage in an open and ongoing dialogue about the ethical and effective use of AI in medical research.

COMPETING INTEREST:
The authors declared no conflict of interest.

AUTHORS’ CONTRIBUTION:
FAR, FF: Conceived the idea, performed the literature review, wrote the first draft, approved the manuscript, and took responsibility for the content.
Both authors approved the final version of the manuscript to be published. 

REFERENCES

  1. Gola A, Das A, Gumataj AB, Amirdhavarshini S, Venkatachalam J. Artificial intelligence and its role in medical research. Curr Med Issues 2024; 22(2):97-101. doi: 10.4103/cmi.cmi_147_23.
  2. Jin Q, Leaman R, Lu Z. PubMed and beyond: Biomedical literature search in the age of artificial intelligence. EBioMedicine 2024; 100:104988. doi: 10.1016/j.ebiom.2024.104988.
  3. Giglio AD, Costa MUPD. The use of artificial intelligence to improve the scientific writing of non-native English speakers. Rev Assoc Med Bras (1992) 2023; 69(9):e20230560. doi: 10.1590/1806-9282.20230560.
  4. Ruksakulpiwat S, Kumar A, Ajibade A. Using ChatGPT in medical research: Current status and future directions. J Multidiscip Healthc 2023; 16:1513-20. doi: 10.2147/JMDH.S413470.
  5. Majovsky M, Cerny M, Kasal M, Komarc M, Netuka D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. J Med Internet Res 2023; 25:e46924. doi: 10.2196/46924.
  6. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 2023; 15(2):e35179. doi: 10.7759/cureus.35179.
  7. Campos Zabala FJ. How younger generations see the future of AI. In: Grow your business with AI. Berkeley, CA: Apress; 2023. doi: 10.1007/978-1-4842-9669-1_22.
  8. Chan CKY, Lee KKW. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and millennial generation teachers? Smart Learn Environ 2023; 10:60. doi: 10.1186/s40561-023-00269-3.
  9. Kadi G, Aslaner MA. Exploring ChatGPT's abilities in medical article writing and peer review. Croat Med J 2024; 65(2):93-100. doi: 10.3325/cmj.2024.65.93.
  10. Howard J, Cheung HC. Artificial intelligence in medical writing. AsiaIntervention 2024; 10(1):12-4. doi: 10.4244/AIJ-E-23-00005.
  11. Kitamura FC. ChatGPT is shaping the future of medical writing but still requires human judgment. Radiology 2023; 307(2):e230171. doi: 10.1148/radiol.230171.
  12. Ibrahim H, Liu X, Denniston AK. Reporting guidelines for artificial intelligence in healthcare research. Clin Exp Ophthalmol 2021; 49(5):470-6. doi: 10.1111/ceo.13943.
  13. Wiwanitkit S, Wiwanitkit V. Artificial intelligence, academic publishing, scientific writing, peer review, and ethics. Braz J Cardiovasc Surg 2024; 39(4):e20230377. doi: 10.21470/1678-9741-2023-0377.