Review Article

Application of artificial intelligence and ChatGPT in medical writing: a narrative review

Ashkan Fakharifar1, Zahra Beizavi2, Alireza Pouramini3, Sara Haseli4

1Guilan University of Medical Sciences, Rasht, Guilan, Iran; 2Department of Radiology, University of Florida College of Medicine, Jacksonville, FL, USA; 3Medical Mycology and Bacteriology Research Center, Kerman University of Medical Sciences, Kerman, Iran; 4Department of Radiology, UW/FHCC, Seattle, WA, USA

Contributions: (I) Conception and design: All authors; (II) Administrative support: All authors; (III) Provision of study materials or patients: All authors; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: All authors; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Zahra Beizavi, MD. Department of Radiology, University of Florida, 655 W Eighth St., Jacksonville, FL 32209, USA. Email: zahra.beizavi@gmail.com.

Background and Objective: The integration of artificial intelligence (AI) and large language models (LLMs) into medical writing has transformed scientific communication. The present review examines the applications, benefits, and constraints of AI tools, including Chat Generative Pre-trained Transformer (ChatGPT) and other LLMs, in medical writing.

Methods: A narrative review was performed by searching the PubMed/MEDLINE, Web of Science, Scopus, and Google Scholar databases from January 2022 to October 2024, restricted to English-language publications. The search strategy centered on AI language models and their applications in medical writing, with specific emphasis on applications, implementation strategies, and ethical issues.

Key Content and Findings: The review outlines significant uses of AI in medical writing, including automated document production, literature synthesis, and content creation. ChatGPT is effective at generating blocks of text and increasing writing efficiency but requires human input to guarantee accuracy. Comparative studies of AI language models revealed discrepancies in performance, with higher quantitative accuracy for ChatGPT-4 than for Microsoft Bing and Claude 2. Several issues remain significant, including, but not limited to, accuracy, bias, and ethics. In 2024, the ICMJE issued guidance on AI use that places a premium on transparency and accountability. Major medical journals have adopted varying policies, with an emphasis on confidentiality, disclosure requirements, and preservation of authorship integrity in AI-assisted work.

Conclusions: Medical writing with AI offers both substantial promise and substantial challenges. AI technology can maximize efficiency and productivity in scientific communication, but its use requires careful attention to accuracy, ethics, and practice. The 2024 ICMJE guidelines set out key requirements for responsible AI use, including, but not limited to, transparency and accountability. Success in this new environment will depend on an integrated approach that exploits AI’s potential while preserving human expertise and integrity in scientific practice. Progress will require ongoing evaluation and large validation studies to safeguard the validity of the medical literature.

Keywords: Medical writing; Chat Generative Pre-trained Transformer (ChatGPT); large language models (LLMs); ethics; International Committee of Medical Journal Editors guidelines (ICMJE guidelines)


Received: 21 September 2024; Accepted: 14 April 2025; Published online: 26 May 2025.

doi: 10.21037/jmai-24-342


Introduction

Artificial intelligence (AI) is quickly changing many fields, including medical writing (1). Effective medical writing is essential for the accurate dissemination of knowledge, facilitating informed decision-making and enhancing communication within the medical and scientific communities (2). A growing volume of medical research is published every day, creating a need for concise, accessible, and scientifically accurate information and prompting the development of innovative methods to improve the speed and precision of medical writing (3).

Large language models (LLMs) have the potential to transform medical writing. Numerous LLMs provide distinctive functionalities to support medical writing tasks, including Chat Generative Pre-trained Transformer (ChatGPT) (specifically GPT-4 and GPT-4o), Claude, Microsoft’s Copilot, and Gemini AI (4,5). These advancements provide medical writers with enhanced tools to generate higher quality writing (6).

The use of LLMs in medical writing raises significant concerns about authorship, transparency, and accountability. In response to these concerns, the International Committee of Medical Journal Editors (ICMJE) released revised rules in January 2024, specifically pertaining to the utilization of AI-assisted technology in manuscript preparation (7,8).

This review article examines the application of ChatGPT and other LLMs in medical writing by elucidating the background of these technologies, their advantages, limitations, and ethical considerations. The specific objectives of the study are:

  • To investigate the prospective utilization of LLMs in medical writing.
  • To conduct a critical analysis of the merits and demerits of LLMs.
  • To evaluate the ethical concerns of LLMs in medical writing.

This review delivers a balanced and evidence-based understanding of the influence of LLMs on the future of medical writing by concentrating on three critical areas. It aims to provide practical techniques for medical writers, researchers, and healthcare professionals regarding the ethical and effective application of AI tools to enhance communication and augment healthcare knowledge. We present this article in accordance with the Narrative Review reporting checklist (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-342/rc).


Methods

This review utilized a narrative methodology to assess the function of AI tools in medical writing. A narrative review was performed across multiple databases, with details of the search strategy outlined in Table 1, covering January 2022 to October 2024, restricted to English-language publications, and using the following keywords: (“Artificial Intelligence” OR “ChatGPT” OR “GPT-4” OR “Claude” OR “Gemini” OR “Copilot”) AND (“medical writing” OR “scientific writing” OR “academic writing”).

Table 1

The search strategy summary

Items Specification
Date of search January 1, 2022 to October 30, 2024
Databases and other sources searched PubMed/MEDLINE, Scopus, Web of Science, Google Scholar
Search terms used (including MeSH and free text search terms and filters) (“Artificial Intelligence” OR “ChatGPT” OR “GPT-4” OR “Claude” OR “Gemini” OR “Copilot”) AND (“medical writing” OR “scientific writing” OR “academic writing”)
Timeframe January 2022 to October 2024
Inclusion and exclusion criteria Inclusion criteria:
     ⬥ Studies examining AI applications in medical writing
     ⬥ Peer-reviewed articles
     ⬥ English-language publications
     ⬥ Original research, systematic reviews, and meta-analyses
Exclusion criteria:
     ⬥ Non-English studies
Selection process All authors were involved in:
     ⬥ Defining the search questions
     ⬥ Conducting the literature search
     ⬥ Full-text review
     ⬥ Final selection
AI, artificial intelligence.

English-language studies examining AI applications in medical writing, including peer-reviewed original research, systematic reviews, and meta-analyses, were reviewed with respect to writing applications, implementation strategies, quality metrics, and ethical considerations of using AI in medical writing.
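For readers who wish to reproduce or adapt the PubMed/MEDLINE arm of this search, the sketch below runs the query and date window from Table 1 through Biopython’s Entrez interface. This is a minimal sketch, assuming Biopython is installed; the e-mail address and retmax value are placeholders, and the other databases (Scopus, Web of Science, Google Scholar) were searched through their own interfaces.

```python
# Minimal sketch: reproduce the PubMed arm of the Table 1 search with Biopython.
# Assumptions: `pip install biopython`; replace the placeholder e-mail (NCBI
# requires a valid address) and adjust retmax as needed.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder

QUERY = (
    '("Artificial Intelligence" OR "ChatGPT" OR "GPT-4" OR "Claude" OR '
    '"Gemini" OR "Copilot") AND '
    '("medical writing" OR "scientific writing" OR "academic writing")'
)

# esearch returns matching PubMed IDs; mindate/maxdate restrict by publication date.
handle = Entrez.esearch(
    db="pubmed",
    term=QUERY,
    datetype="pdat",
    mindate="2022/01/01",
    maxdate="2024/10/30",
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

print(f'{record["Count"]} records found')
print(record["IdList"][:10])  # first ten PubMed IDs, ready for title/abstract screening
```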


Review of literature

AI algorithms can be used for data analysis and interpretation in medical writing (9). Ensuring that AI systems used for medical data analysis are explainable and reliable is essential (10). AI can also be used for automated literature review and citation generation (11,12). The application of AI tools can save time for medical practitioners and enhance the accuracy and efficiency of medical writing (13). However, reliance on online grammar checkers and automated citation generators may not always be effective in identifying and providing feedback on particular grammatical forms, phrasing, and issues of style (14).

Improving language translation, localization, and communication

Another area where AI can improve medical writing is language translation and localization (15). Explainable artificial intelligence (XAI) is also important in medical writing. XAI aims to improve our understanding of why a given output has been produced by an AI system and can be used in automated medical report generation (16).

Medical writing plays a vital role in healthcare, involving the production of documents that convey scientific data to different groups such as healthcare providers, regulatory bodies, and patients (17).

With the AI revolution and the emergence of ChatGPT, academic writing, and consequently writing in the medical field, has undergone changes with both positive and negative aspects (18). ChatGPT can enhance medical scientific writing, but it will not replace human authors entirely (19). Although it aids in summarizing articles, generating drafts, translating content, and locating academic papers, the accuracy and reliability of the generated text must be validated and monitored (19). The use of ChatGPT in scientific writing should be regulated and carefully monitored due to ethical considerations (20). Research on the use and impact of ChatGPT in medical care has increased recently, demonstrating the need for additional studies evaluating ChatGPT’s efficacy and ethical considerations (19).

Introduction and fundamentals of ChatGPT

ChatGPT, an AI chatbot, was publicly released in November 2022 (21,22). ChatGPT is a sophisticated chatbot developed on OpenAI’s GPT technology, utilizing deep learning algorithms to generate human-like output in response to text prompts (23). It can potentially transform scientific writing in medical research (11). It can generate automated drafts, summarize articles, and translate content into multiple languages, all contributing towards easier and less cumbersome academic writing (11). Nevertheless, it is important to note that generated content must undergo thorough review and testing by medical and healthcare professionals to ensure accuracy, dependability, and avoidance of bias and misinformation (24). In addition, frequent retraining of the models is important to keep them current with changing medical knowledge (25).

ChatGPT employs deep learning algorithms to provide quick, immediate responses to queries (26). Its uses include adaptive learning programs, where it can adjust content difficulty according to student performance (27). OpenAI continues to enhance ChatGPT’s performance through reinforcement techniques, natural language processing (NLP), and machine learning (ML), providing rapid and precise responses to a wide range of requests related to writing, information provision, composition of messages, text editing, and problem-solving (11).

LLM-based AI, specifically ChatGPT, can create new content from scratch with a smooth conversational flow, adeptly respond to questions, and produce varied outputs such as poems, fan fiction, and stories for children (28), capabilities that have recently sparked much debate (29). These tools have transformed academic research writing, supporting researchers in content generation and idea enhancement. One article raises ethical concerns about the use of AI technologies such as ChatGPT in creating written works for college assignments, emphasizing the need for responsible decision-making as AI becomes more commonplace in academia (30). In addition, a review article offers guidance on maximizing the advantages and reducing the perils of using ChatGPT in academic writing, emphasizing the importance of upholding human judgment throughout the writing process and using ChatGPT as an assistive tool rather than a complete substitute for human authors (31).

Applications of ChatGPT in medical writing

Benefits of AI and ChatGPT in medical writing

Time-efficient: ChatGPT has the potential to revolutionize medical writing, offering a quick and time-efficient process (32). This language model is designed to extract relevant information from various sources, aiding in literature searches by quickly identifying key findings and synthesizing them into a cohesive summary (32). Furthermore, ChatGPT can generate a rough draft for medical writers while saving significant time and effort in the writing process (33). By streamlining these tasks, ChatGPT empowers medical professionals to focus more on analyzing and interpreting data, ultimately advancing scientific knowledge and improving healthcare outcomes (33).

Automated manuscript generation

ChatGPT, built on the GPT framework, is capable of producing manuscript sections such as the introduction, methods, and results (28). The precision and pertinence of the generated content are currently under scrutiny (34). A recent study employed ChatGPT to create two scholarly articles in the field of sports and exercise medicine, analyzing the advantages and drawbacks of using AI in manuscript creation (28). This study highlights that while AI can generate high-quality papers, concerns remain about evaluating their quality and originality; additionally, the bibliographies generated for the two essays were inaccurate (28).

Enhancing literature searches and data extraction

ChatGPT’s primary strength is its prompt comprehension of data and its ability to connect evidence to draw conclusions more swiftly (11). Humans have limitations including the inability to read extensive literature and find meaningful connections between seemingly unrelated information (11). This challenging issue can be mitigated to some extent. Enhancing AI model training with trusted, precise, and contextually appropriate data collections—rather than merely large datasets—and incorporating ongoing, credible user feedback at a basic level can help with curtailing extreme data shifts and the decay of information relevancy over time, as well as AI hallucinations (35). Nonetheless, re-training AI systems and the prompt detection of their declining performance present difficulties like catastrophic forgetting and convergence failures, which are particularly problematic in sensitive fields like healthcare and scientific publication (35). While ChatGPT and similar AI technologies are valuable tools, they should not be the sole resource for producing research proposals in medical or scientific contexts. Researchers must accept accountability for any inaccuracies produced by ChatGPT when used in scientific documentation (35).
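One practical safeguard against the hallucinated references discussed above (35) is to screen every AI-suggested citation against a bibliographic registry before it enters a manuscript. The following is a minimal sketch, assuming the requests package and the public Crossref REST API; the DOI shown is a placeholder, and passing this screen does not replace manual verification of authors, titles, and claims.

```python
# Minimal sketch: screen AI-suggested DOIs against the public Crossref REST API.
# Assumptions: `pip install requests`; the DOI below is a placeholder.
from typing import Optional

import requests

CROSSREF = "https://api.crossref.org/works/"

def doi_is_registered(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI (a first, crude screen
    for fabricated references; it does not confirm the citation is correct)."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    return resp.status_code == 200

def registered_title(doi: str) -> Optional[str]:
    """Fetch the registered title so it can be compared with the cited title."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

if __name__ == "__main__":
    doi = "10.1000/placeholder-doi"  # hypothetical DOI; substitute the one under review
    print(doi_is_registered(doi), registered_title(doi))
```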

Generating high-quality medical content

Producing high-quality medical content requires careful use of AI tools like ChatGPT (36). While these tools can aid in drafting research proposals and other academic content, users must retain responsibility for verifying the accuracy of AI-generated content (35). Additionally, ethical concerns such as bias, misinformation, privacy, transparency, and plagiarism must be addressed when using AI in medical writing (37,38). Transparency regarding AI’s involvement in content creation is also essential to maintaining accountability (39).

Generative language models like ChatGPT can be powerful tools in medical writing. These models support the creation of various documents, such as clinical trial summaries, personalized patient reports, and textbook contributions. Although AI enhances efficiency, it is crucial to remain alert to potential ethical issues and guarantee the output’s quality (25,37). These tools have the potential to enhance workflow efficiency and productivity in the medical field, but caution must be exercised to address ethical dilemmas and ensure high-quality output (32).

Automating the creation of standard documents reduces human error and the need for clinical staff and medical writers to verify the accurate entry of information into extensive patient databases (40). This shortens the initial stages of clinical trials, improves the selection of study participants, and enhances quality control throughout the process (40).

Summary

The incorporation of ChatGPT into medical writing spans manuscript creation, literature reviews, and data extraction. Although the technology is proficient at generating manuscript segments and synthesizing material, existing evidence reveals ongoing difficulties with accuracy, citation reliability, and content originality. Human supervision and verification remain crucial elements of the writing process.

The use and impact of other AI-language models in scientific writing

AI tools like Google Gemini, Claude, and Microsoft Bing are increasingly recognized for their contributions to medical and scientific writing, though comparative data remains limited.

  • ChatGPT-4: approaches the passing grade for quantitative accuracy, with a score of -5 (4).
  • Microsoft Bing: performs better than the remaining models but falls short of ChatGPT-4, with a score of -21 (4).
  • Claude 2: scores -75, indicating substantial room for improvement (4).
  • Google Gemini: studies on bias perception have shown contradictory findings; while ChatGPT-4 lessens bias, Gemini marginally increases it (5).

Literature reviews, text composition, editing, citation management, and visual creation all make extensive use of AI models. These models hold tremendous potential for enhancing the productivity and quality of writing, but careful integration and continuous improvement are needed to overcome their weaknesses (13,41).

Summary

A comparative review of current AI language models reveals varying capabilities across medical writing use cases. ChatGPT-4 shows higher quantitative accuracy, while other models have specific strengths and weaknesses. Successful use of these tools requires careful analysis of their comparative advantages and inherent weaknesses in medical writing scenarios.

Challenges and limitations of using AI and ChatGPT

ChatGPT is free at present, but its availability is restricted in certain countries and regions, such as Hong Kong, China, and Russia (42). Present-day limitations and challenges include:

Accuracy, bias, and consistency

AI in medical writing faces several significant challenges. First, the use of AI and ChatGPT carries an elevated risk of medical errors because of their propensity to generate erroneous responses (43); AI-generated content must therefore be validated thoroughly (44). Second, AI training datasets may contain intrinsic biases, resulting in biased outputs that compromise the accuracy and impartiality required in medical writing (45,46).

Third, the replies generated by these tools frequently demonstrate inadequacy and inconsistency, which is particularly troubling in the medical field, where precision and comprehensiveness are paramount (46).

Ethical considerations and challenges

The ethical concerns of using AI in medical writing extend beyond mere technical limitations. Plagiarism has emerged as a prominent issue, especially in light of confirmed instances of considerable duplication in AI-generated journalism (48). The utilization of AI tools in article creation raises additional ethical concerns (49).

Ensuring the accuracy and reliability of AI-generated content

Content verification: distinguishing between content generated by AI and that authored by humans, particularly in academic abstracts (50).

Detection methods: AI Detector, GPT Detector, and GPTZero are tools for identifying AI-generated text, with confidence ratings and classifications (47).

The scope of quality assurance: although tools such as iThenticate and Turnitin assist in content verification, they cannot match the meticulousness of a human editor’s scrutiny (48).

Potential manipulation: concerns exist that authors may modify AI-generated content to evade detection (49).
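To illustrate the kind of statistical signal that such detectors draw on, the sketch below computes perplexity under a small open reference model (GPT-2) using the transformers library. This is a crude, illustrative heuristic only, assuming torch and transformers are installed; it is not the implementation of AI Detector, GPT Detector, or GPTZero, and low perplexity alone is far too weak a signal for any editorial decision.

```python
# Illustrative heuristic only: perplexity of a passage under GPT-2.
# Machine-generated text often (not always) scores lower, i.e., is more
# "predictable" to a language model. Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return exp(cross-entropy) of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

sample = "Large language models can assist medical writers with first drafts."
print(f"Perplexity: {perplexity(sample):.1f}")
```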

Summary

The application of AI, particularly ChatGPT, in medical writing encounters substantial problems related to accuracy, bias, and consistency, necessitating comprehensive assessment of AI-generated material. Intrinsic biases in AI training datasets may undermine the accuracy of medical documentation. Ethical considerations, especially plagiarism, are major concerns, while distinguishing between AI-generated and human-authored work remains challenging despite current detection tools, requiring comprehensive human editorial oversight.

Safeguarding scientific integrity

Recent studies have revealed systematic manipulation of scientific literature by paper mills employing advanced techniques of fraud, plagiarism, and data fabrication (50). The incorporation of AI capabilities has heightened these worries, as such technologies facilitate more advanced fraudulent activities that are progressively difficult to identify (50).

Paper mills have significantly influenced scientific publications by:

  • Systematic infiltration across several journal platforms (50).
  • Violation of research integrity through falsified data and altered imagery (50).
  • Disproportionate impact on biomedical literature (51).

ICMJE guidelines:

  • The International Committee of Medical Journal Editors (ICMJE) has addressed these difficulties with detailed guidelines issued in January 2024, focusing on both paper mill operations and the use of AI in medical writing (7).

These guidelines establish:

  • Authors must disclose and be transparent about the use of AI or AI-assisted technology in writing manuscripts.
  • Authors bear full responsibility for the entire content of their work, including any portions generated or influenced by AI tools.
  • Authors must ensure and verify the accuracy of all content in their manuscripts, whether AI-assisted or not.
  • Authors must attribute and cite AI tools appropriately when used in manuscript preparation.

These guidelines seek to uphold scientific integrity while recognizing the changing function of AI in medical writing (7). The ICMJE asserts that although AI tools can aid in manuscript preparation, the final accountability for content accuracy and integrity is with the human authors (50,52).

Current journal positions on AI use in medical writing and peer review

In accordance with the previously discussed ICMJE recommendations, authors must explicitly disclose any AI assistance utilized in manuscript preparation or data generation (7).

Major medical journals have adopted varying but generally cautious positions:

  • JAMA Network requires authors to disclose AI tool usage during submission and prohibits AI tools as authors (53).
  • The Lancet maintains strict policies against using generative AI for scientific review of papers (54).
  • NEJM follows ICMJE guidelines while developing specialized approaches through their NEJM AI initiative (55).

Journal policies on AI in peer review reflect three key concerns:

  • Confidentiality protection:
    • Reviewers are prohibited from uploading manuscripts to AI platforms that cannot guarantee confidentiality (54).
    • JAMA explicitly forbids entering any portion of manuscripts or reviewer comments into chatbots or language models (53).
  • Transparency requirements:
    • Reviewers must disclose any AI tool usage and methodology to journal editors (54).
    • Single-blind review processes require documentation of AI assistance in formal reviews (53).
  • Quality assurance:
    • Journals emphasize human oversight and accountability for review content (56).
    • AI tools are permitted only as supplements to, not replacements for, expert review (57).


Regulatory framework

Major research institutions have established clear boundaries:

  • NIH has banned generative AI use in peer review processes (58).
  • Australian Research Council forbids the use of AI in review processes (58).
  • Publishers like Elsevier limit AI use to in-house or licensed technologies (59).

The medical publishing landscape is continuously evolving with AI integration:

  • Specialized AI detection tool development for manuscript screening (60).
  • Use of standardized disclosure protocols (60).
  • Journal-specific AI policies have been created to balance innovation with scientific integrity (60).

This systematic approach by medical journals is indicative of a cautious balance between leveraging AI’s potential benefits while sustaining scientific rigor and ethical standards in medical publishing.

Summary

The integrity of the scientific literature is threatened by the deceptive practices of paper mills, which have been further enabled by advances in AI. In 2024, the ICMJE emphasized the importance of disclosing AI use and of author accountability. Leading medical journals enforce varying rules on confidentiality, transparency, and quality assurance, while research institutions have set strict rules on the use of AI during peer review to maintain scientific integrity.

Guidelines for the application of AI in medical writing

How to use AI in medical writing:

  • Utilize AI tools for draft generation with human supervision.
  • Conduct thorough validation of AI-generated material.
  • Guarantee transparent acknowledgment of AI involvement.
  • Uphold responsibility for content precision.

How not to use AI in medical writing:

  • Do not rely on AI-generated content without verification and expert review.
  • Do not accept AI-generated citations or references without manual checking.
  • Do not use AI for final content decisions or scientific interpretations.
  • Do not submit AI-generated content without proper disclosure.
  • Do not allow AI to make unsupervised decisions about data analysis or conclusions.

These guidelines align with the latest ICMJE recommendations and underscore the importance of human expertise in medical writing while appropriately leveraging AI capabilities; a minimal sketch of supervised draft generation follows.
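To make the first recommendation, draft generation under human supervision, concrete, the following is a minimal sketch using the OpenAI Python SDK (v1.x). The model name, prompt wording, and the [REF] placeholder convention are illustrative assumptions rather than part of the cited guidance; the essential points are that the output is treated as a draft, every claim is verified by a human author, and the tool’s use is disclosed.

```python
# Minimal sketch: AI-assisted draft generation with explicit human supervision.
# Assumptions: `pip install openai` (SDK v1.x), OPENAI_API_KEY set in the
# environment, and "gpt-4o" as an illustrative model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You assist a medical writer. Produce a clearly labelled DRAFT only; "
    "do not invent citations, data, or results."
)
USER = (
    "Draft a 150-word background paragraph on large language models in medical "
    "writing. Mark every statement that needs a reference with [REF]."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever your licence covers
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": USER},
    ],
    temperature=0.3,
)

draft = response.choices[0].message.content

# Human supervision happens here: the authors revise the text, resolve every
# [REF] placeholder against verified sources, and disclose the AI assistance
# in the manuscript per ICMJE guidance before submission.
print(draft)
```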


Discussion

Benefits of the review

  • A comprehensive examination of ChatGPT and diverse AI applications in medical writing via systematic database searches.
  • Incorporation of recent publications through 2024, integrating the latest advancements in the field.
  • Incorporation of the ICMJE’s 2024 guidelines on the application of AI in medical writing.
  • Analysis of technical and ethical considerations in the deployment of AI.

Study limitations

  • Many studies have examined previous versions of AI models, potentially limiting their relevance in the current context.
  • Lack of comprehensive long-term follow-up data on the outcomes of AI implementation.
  • Variability in methodological approaches across studies evaluating AI effectiveness.
  • Numerous studies focused on English-language content, thus limiting their global relevance.
  • The limitation of data regarding alternative AI tools like Gemini, Copilot, and Claude hinders a comprehensive comparative analysis with ChatGPT.
  • Inconsistent techniques for evaluating quality among the included studies.
  • Absence of standardized metrics for assessing AI-generated content.
  • There is a lack of adequate validation studies for AI detection tools.
  • Insufficient comprehensive data on the cost-effectiveness of AI implementation.

Future possible research

  • Comprehensive and diverse datasets are essential for the thorough validation of AI applications.
  • Standardized evaluation protocols are crucial.
  • Longitudinal studies are crucial for assessing long-term effects.
  • Further investigation into bias detection and mitigation strategies is necessary.
  • Additional research is required to evaluate the medical writing capabilities of alternative AI tools, such as Claude, Gemini, and Copilot, in comparison to ChatGPT.

Conclusions

The use of AI software, including ChatGPT, in medical writing marks a significant milestone in scientific communication. As this article illustrates, while AI software brings great benefits in terms of productivity, draft creation, and publication output, its use must be approached carefully, taking into account a range of considerations. The technology’s capacity to increase productivity has to be weighed against issues of accuracy, bias, and ethics.

The 2024 ICMJE guidelines spell out the primary protocols to be followed when using AI in medical writing, including transparency, accountability, and proper attribution. Leading medical journals have adopted comparable guidelines to protect scientific integrity while embracing technological progress. Substantial challenges remain in verifying accuracy, avoiding bias, and establishing standardized methods of evaluation.

Effective application of AI to medical writing requires a balanced strategy that combines technological advances with human oversight. Researchers and medical writers should implement sound validation processes, maintain full transparency about how and when AI is used, and take responsibility for the accuracy of the content. Progress in this direction will be strengthened by standardized assessment metrics, large-scale validation studies, and longitudinal studies of the long-term impact on the quality of scientific communication.

The ongoing development of AI technologies in medical writing presents both challenges and opportunities. Although these technologies open new paths to enhancing efficiency and accessibility in scientific communication, their use must be governed by strict ethical principles and professional standards. The medical writing community must remain vigilant to ensure that technological advances improve, rather than undermine, the integrity and quality of scientific manuscripts.


Acknowledgments

None.


Footnote

Reporting Checklist: The authors have completed the Narrative Review reporting checklist. Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-342/rc

Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-342/prf

Funding: None.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-342/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Meşe I, Kuzan B, Kuzan TY. ChatGPT in medical writing: enhancing healthcare communication through artificial intelligence and human expertise. Anatol Curr Med J 2024;6:97-104. [Crossref]
  2. Weidmann AE. Artificial intelligence in academic writing and clinical pharmacy education: consequences and opportunities. Int J Clin Pharm 2024;46:751-4. [Crossref] [PubMed]
  3. Wang J, Liao Y, Liu S, et al. The impact of using ChatGPT on academic writing among medical undergraduates. Ann Med 2024;56:2426760. [Crossref] [PubMed]
  4. Lozić E, Štular B. ChatGPT v Bard v Bing v Claude 2 v Aria v human-expert. How good are AI chatbots at scientific writing? arXiv:2309.08636. Published online October 16, 2023.
  5. Towne BP. Exploring the Impact of Artificial Intelligence-Mediated Communication on Bias and Information Loss in Non-academic and Academic Writing Contexts. Available online: https://doi.org/10.31219/osf.io/6kyg5_v3
  6. Scientific Writing, Reviewing, and Editing for Open-access TESOL Journals: The Role of ChatGPT. Int J TESOL Stud 2023;5:87-91.
  7. ICMJE | News & Editorials. Available online: https://www.icmje.org/news-and-editorials/
  8. Alser M, Waisberg E. Concerns with the Usage of ChatGPT in Academia and Medicine: A Viewpoint. Am J Med Open 2023;9:100036. [Crossref] [PubMed]
  9. Segarra-Saavedra J, Cristòfol FJ, Martínez-Sala AM. Artificial intelligence (AI) applied to informative documentation and journalistic sports writing. The case of BeSoccer. Doxa Comun 2019; [Crossref]
  10. Holzinger A, Kieseberg P, Weippl E, et al. Current Advances, Trends and Challenges of Machine Learning and Knowledge Extraction: From Machine Learning to Explainable AI. In: Holzinger A, Kieseberg P, Tjoa AM, et al., eds. Machine Learning and Knowledge Extraction. Springer International Publishing; 2018:1-8.
  11. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Crit Care 2023;27:75. [Crossref] [PubMed]
  12. Hutson M. Could AI help you to write your next paper? Nature 2022;611:192-3. [Crossref] [PubMed]
  13. Lee PY, Salim H, Abdullah A, et al. Use of ChatGPT in medical research and scientific writing. Malays Fam Physician 2023;18:58. [Crossref] [PubMed]
  14. Long RW. Online grammar checkers versus self-editing: An investigation of error correction rates and writing quality. Journal of Nusantara Studies 2022;7:441-58. (JONUS). [Crossref]
  15. Huh S. Emergence of the metaverse and ChatGPT in journal publishing after the COVID-19 pandemic. Sci Ed. 2023;10:1-4. [Crossref]
  16. Amann J, Blasimme A, Vayena E, et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 2020;20:310. [Crossref] [PubMed]
  17. Sharma S. How to become a competent medical writer? Perspect Clin Res 2010;1:33-7. [Crossref] [PubMed]
  18. Li M, Zhang Y, Sun Y, et al. AI-based ChatGPT Impact on Medical Writing and Publication. Advanced Ultrasound in Diagnosis and Therapy 2023;7:188-92. [Crossref]
  19. Huang J, Tan M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am J Cancer Res 2023;13:1148-54. [PubMed]
  20. Hu G. Challenges for enforcing editorial policies on AI-generated papers. Account Res 2024;31:978-80. [Crossref] [PubMed]
  21. Fatani B. ChatGPT for Future Medical and Dental Research. Cureus 2023;15:e37285. [Crossref] [PubMed]
  22. Huh S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking a parasitology examination?: a descriptive study. J Educ Eval Health Prof 2023;20:1. [PubMed]
  23. Lund BD, Wang T. Chatting about ChatGPT: how may AI and GPT impact academia and libraries? Libr Hi Tech News 2023;40:26-9. [Crossref]
  24. Fröhling L, Zubiaga A. Feature-based detection of automated language models: tackling GPT-2, GPT-3 and Grover. PeerJ Comput Sci 2021;7:e443. [Crossref] [PubMed]
  25. Doyal AS, Sender D, Nanda M, et al. ChatGPT and Artificial Intelligence in Medical Writing: Concerns and Ethical Considerations. Cureus 2023;15:e43292. [PubMed]
  26. Fraiwan M, Khasawneh N. A Review of ChatGPT Applications in Education, Marketing, Software Engineering, and Healthcare: Benefits, Drawbacks, and Research Directions. Published online April 29, 2023. DOI:10.48550/arXiv.2305.00237.
  27. Opara E, Mfon-Ette Theresa A, Tolorunleke C. ChatGPT for teaching, learning and research: Prospects and challenges. Global Academic Journal of Humanities and Social Sciences 2023;5:33-40. [Crossref]
  28. Anderson N, Belavy DL, Perle SM, et al. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in Sports & Exercise Medicine manuscript generation. BMJ Open Sport Exerc Med 2023;9:e001568. [Crossref] [PubMed]
  29. Ghosh A, Maini Jindal N, Gupta VK, et al. Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?. Cureus 2023;15:e47329. [Crossref] [PubMed]
  30. Lin Z. Why and how to embrace AI such as ChatGPT in your academic life. R Soc Open Sci 2023;10:230658. [Crossref] [PubMed]
  31. Mondal H, Mondal S. ChatGPT in academic writing: Maximizing its benefits and minimizing the risks. Indian J Ophthalmol 2023;71:3600-6. [Crossref] [PubMed]
  32. Biswas S. ChatGPT and the Future of Medical Writing. Radiology 2023;307:e223312. [Crossref] [PubMed]
  33. Arif TB, Munaf U, Ul-Haque I. The future of medical education and research: Is ChatGPT a blessing or blight in disguise? Med Educ Online 2023;28:2181052. [Crossref] [PubMed]
  34. Sedaghat S. Early applications of ChatGPT in medical practice, education and research. Clin Med (Lond) 2023;23:278-9. [Crossref] [PubMed]
  35. Athaluri SA, Manthena SV, Kesapragada VSRKM, et al. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus 2023;15:e37432. [Crossref] [PubMed]
  36. Shen Y, Heacock L, Elias J, et al. ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology 2023;307:e230163. [Crossref] [PubMed]
  37. Chavez MR, Butler TS, Rekawek P, et al. Chat Generative Pre-trained Transformer: why we should embrace this technology. Am J Obstet Gynecol 2023;228:706-11. [Crossref] [PubMed]
  38. Flanagin A, Bibbins-Domingo K, Berkwits M, et al. Nonhuman “Authors” and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA 2023;329:637-9. [Crossref] [PubMed]
  39. Rennie D, Flanagin A. Authorship! Authorship! Guests, ghosts, grafters, and the two-sided coin. JAMA 1994;271:469-71. [Crossref] [PubMed]
  40. Costa S. Embracing a new friendship: Artificial intelligence and medical writers. Med Writ 2019;28:14-7.
  41. Liu H, Azam M, Bin Naeem S, et al. An overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. Health Info Libr J 2023;40:440-6. [Crossref] [PubMed]
  42. Bao H, Sun M, Teplitskiy M. Where there’s a will there’s a way: ChatGPT is used more for science in countries where it is prohibited. Published online June 27, 2024. doi: 10.1162/qss_a_00368.
  43. Mojadeddi ZM, Rosenberg J. The impact of AI and ChatGPT on research reporting. N Z Med J 2023;136:60-4. [PubMed]
  44. Bhattacharyya M, Miller VM, Bhattacharyya D, et al. High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus 2023;15:e39238. [Crossref] [PubMed]
  45. Homolak J. Opportunities and risks of ChatGPT in medicine, science, and academic publishing: a modern Promethean dilemma. Croat Med J 2023;64:1-3. [Crossref] [PubMed]
  46. Garg RK, Urs VL, Agarwal AA, et al. Exploring the role of ChatGPT in patient care (diagnosis and treatment) and medical research: A systematic review. Health Promot Perspect 2023;13:183-91. [Crossref] [PubMed]
  47. Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integr 2023;19:17. [Crossref]
  48. Memon AR. Similarity and Plagiarism in Scholarly Journal Submissions: Bringing Clarity to the Concept for Authors, Reviewers and Editors. J Korean Med Sci 2020;35:e217. [Crossref] [PubMed]
  49. CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism. Futurism. Available online: https://futurism.com/cnet-ai-plagiarism
  50. Miao J, Thongprayoon C, Suppadungsuk S, et al. Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review. Clin Pract 2023;14:89-105. [Crossref] [PubMed]
  51. Sharma H, Ruikar M. Artificial intelligence at the pen’s edge: Exploring the ethical quagmires in using artificial intelligence models like ChatGPT for assisted writing in biomedical research. Perspect Clin Res 2024;15:108-15. [Crossref] [PubMed]
  52. Ahn S. The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions. Korean J Physiol Pharmacol 2024;28:393-401. [Crossref] [PubMed]
  53. Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for Authors, Peer Reviewers, and Editors on Use of AI, Language Models, and Chatbots. JAMA 2023;330:702-3. [Crossref] [PubMed]
  54. Cheng K, Sun Z, Liu X, et al. Generative artificial intelligence is infiltrating peer review process. Crit Care 2024;28:149. [Crossref] [PubMed]
  55. Fornalik M, Makuch M, Lemanska A, et al. Rise of the machines: trends and challenges of implementing AI in biomedical scientific writing. Explor Digit Health Technol 2024;2:235-48. [Crossref]
  56. Leung TI, de Azevedo Cardoso T, Mavragani A, et al. Best Practices for Using AI Tools as an Author, Peer Reviewer, or Editor. J Med Internet Res 2023;25:e51584. [Crossref] [PubMed]
  57. Fabiano N, Gupta A, Bhambra N, et al. How to optimize the systematic review process using AI tools. JCPP Adv 2024;4:e12234. [Crossref] [PubMed]
  58. Dergaa I, Ben Saad H, Glenn JM, et al. A thorough examination of ChatGPT-3.5 potential applications in medical writing: A preliminary study. Medicine (Baltimore) 2024;103:e39757. [Crossref] [PubMed]
  59. Tessler I, Wolfovitz A, Livneh N, et al. Advancing Medical Practice with Artificial Intelligence: ChatGPT in Healthcare. Isr Med Assoc J 2024;26:80-5. [PubMed]
  60. Fatima A, Shafique MA, Alam K, et al. ChatGPT in medicine: A cross-disciplinary systematic review of ChatGPT’s (artificial intelligence) role in research, clinical practice, education, and patient interaction. Medicine (Baltimore) 2024;103:e39250. [Crossref] [PubMed]
doi: 10.21037/jmai-24-342
Cite this article as: Fakharifar A, Beizavi Z, Pouramini A, Haseli S. Application of artificial intelligence and ChatGPT in medical writing: a narrative review. J Med Artif Intell 2025;8:52.
