Publicly available. Published by De Gruyter, November 30, 2023.

Rising adoption of artificial intelligence in scientific publishing: evaluating the role, risks, and ethical implications in paper drafting and review process

  • Anna Carobene, Andrea Padoan, Federico Cabitza, Giuseppe Banfi and Mario Plebani

Abstract

Background

In the rapidly evolving landscape of artificial intelligence (AI), scientific publishing is experiencing significant transformations. AI tools, while offering unparalleled efficiencies in paper drafting and peer review, also introduce notable ethical concerns.

Content

This study delineates AI’s dual role in scientific publishing: as a co-creator in the writing and review of scientific papers and as an ethical challenge. We first explore the potential of AI as an enhancer of efficiency, efficacy, and quality in creating scientific papers. A critical assessment follows, evaluating the risks vs. rewards for researchers, especially those early in their careers, emphasizing the need to maintain a balance between AI’s capabilities and fostering independent reasoning and creativity. Subsequently, we delve into the ethical dilemmas of AI’s involvement, particularly concerning originality, plagiarism, and preserving the genuine essence of scientific discourse. The evolving dynamics further highlight an overlooked aspect: the inadequate recognition of human reviewers in the academic community. With the increasing volume of scientific literature, tangible metrics and incentives for reviewers are proposed as essential to ensure a balanced academic environment.

Summary

AI’s incorporation in scientific publishing is promising yet comes with significant ethical and operational challenges. The role of human reviewers is accentuated as essential for ensuring authenticity in an AI-influenced environment.

Outlook

As the scientific community treads the path of AI integration, a balanced symbiosis between AI’s efficiency and human discernment is pivotal. Emphasizing human expertise, while exploiting artificial intelligence responsibly, will determine the trajectory of an ethically sound and efficient AI-augmented future in scientific publishing.

Background

In an era punctuated by the rapid advancements in artificial intelligence (AI), the academic and scientific realms are experiencing a paradigmatic shift in paper writing, research methodologies, and the publication process [1].

The ingress of AI tools, such as chatbots like ChatGPT and writing assistants, into academic disciplines propels a dialectic discourse on their benefits, ethical implications, and potential risks, particularly in terms of paper writing and the review process [2]. It creates a synergy where intelligence augmentation can offer a substantial impact on the efficiency, efficacy, and quality of scientific publications. The precision, speed, and data processing capabilities of AI have the potential to amplify human cognition and creativity, thereby fostering an enriched landscape of scientific inquiry and exploration [3, 4].

Firstly, this paper discusses the complexity of using AI in scientific publications and its role as a co-creator in writing and reviewing scientific articles. It examines the effectiveness and quality of AI-generated content, its capability to contribute ideas, and its collaborative nature in the refinement of papers. This exploration serves to illuminate the opportunities and challenges inherent in leveraging AI’s computational prowess in the creation of coherent, relevant, and accurate scientific narratives, as summarized in Table 1.

Table 1:

Advantages and disadvantages of artificial intelligence (AI) in scientific publishing.

Paper drafting

  Advantages of AI tools:
  1. Assists in organizing and articulating ideas.
  2. Reduces the time needed to produce high-quality content.
  3. Offers prompt-based support to facilitate idea inception and elaboration.
  4. Conducts preliminary research, summarizing articles and extracting key information from academic papers.
  5. Assists in optimizing the text, suggesting improvements in word choice.
  6. Helps in tailoring articles to fit the guidelines of various journals, increasing the chances of acceptance.

  Disadvantages of AI tools:
  1. Raises authorship attribution debates.
  2. Can generate incorrect or nonsensical information (hallucinations).
  3. Designed to please users, it can obscure errors and cause the spread of misinformation.
  4. Risks of intellectual property infringement and plagiarism.
  5. Reliance on AI risks increasing the volume of lower-quality scientific papers.
  6. Could lead to a collapse of the scientific literature system.
  7. Can suggest inaccurate or incorrect references.
  8. Could generate sentences that are logically and syntactically disjointed.

Data analysis and interpretation

  Advantages of AI tools:
  1. Can process large datasets quickly to identify patterns and trends.
  2. Enhances the accuracy of data interpretation by minimizing human error.
  3. Can predict outcomes based on historical data, aiding in hypothesis generation.

  Disadvantages of AI tools:
  1. AI interpretations are only as good as the data and algorithms they rely on, which may have inherent biases.
  2. Unnecessary analyses might result in an incorrect interpretation of the data.
  3. Excessive dependence on AI may lead to a decline in researchers’ data analysis skills.
  4. Misinterpretation of AI results without proper statistical context can lead to incorrect conclusions.

Peer review

  Advantages of AI tools:
  1. Enhances the efficiency of paper assessment.
  2. Assists in identifying ethical, integrity, or quality issues.
  3. Supports plagiarism detection and maintaining content integrity.

  Disadvantages of AI tools:
  1. The ease of publication might overshadow the critical peer-review process, leading to less rigorous scientific standards.
  2. May lead to automation bias and deskilling.
  3. Excessive dependence on AI could compromise the integrity of evaluations.

Ethical implications and plagiarism

  Advantages of AI tools:
  1. Provides tools for detecting plagiarism.
  2. Encourages ethical writing practices and transparent disclosure.

  Disadvantages of AI tools:
  1. Challenges in maintaining originality in AI-generated content.
  2. Risks of perpetuating biases present in training data.
  3. Potential legal issues regarding copyright infringement and GDPR non-compliance.

Education and professional development

  Advantages of AI tools:
  1. Could potentially enhance the education of young professionals.
  2. Enables quicker writing, increasing the output of written documents.

  Disadvantages of AI tools:
  1. Risks undermining traditional learning pathways.
  2. May cause intellectual property issues and reduce the responsibility of researchers.
  3. Introduces hallucinations, i.e., incorrect or nonsensical information.

Recognition of reviewer contributions

  Advantages of AI tools:
  1. Could support reviewers in maintaining scientific integrity.
  2. Can assist in managing the increased volume of paper submissions.

  Disadvantages of AI tools:
  1. Potential devaluation of human reviewer expertise.
  2. Increased time demand for reviewers due to complex AI-generated texts.
  3. Lack of proper recognition and rewarding systems for reviewers.

In this paper, we have chosen to illustrate the integration of AI in scientific writing and publishing through ChatGPT, principally due to its broad recognition and extensive usage in the scientific field. This choice ensures the discussion is accessible and relevant to the widest possible audience, as ChatGPT is one of the AI tools with which researchers are most likely to be familiar. Other AI platforms offer specialized functions that significantly benefit the research process: Elicit (https://elicit.com/) aids in formulating research questions, scite.ai (https://scite.ai/) assists in citation analysis, and ChatPDF (https://www.chatpdf.com/) facilitates document annotation. ChatGPT was nonetheless selected as the example for its pervasiveness and for the comprehensive, general applicability of its technology across various aspects of scientific work.

Secondly, we delve into the equilibrium between innovation and learning, evaluating the risks and rewards inherent in employing AI in scientific writing for researchers, particularly those at the early stages of their careers. We address how AI, while being a powerful tool for learning and innovation, must be balanced with the cultivation of critical thinking, creativity, and independent reasoning skills to avoid over-reliance and to preserve the essence of scientific inquiry.

Thirdly, the ethical ramifications of AI in scientific paper writing are explored, with a focus on ensuring originality, addressing plagiarism, and maintaining the authenticity of scientific discourse. This examination will underscore the importance of establishing ethical guidelines, transparency, and proper attribution to navigate the challenges and moral considerations posed by AI’s involvement in scientific writing.

This paper aims to contribute a comprehensive perspective to the ongoing discourse on the implications of AI in scientific writing. It strives to foster an understanding of AI not as a replacement but as a collaborator in the pursuit of knowledge, aiming to balance the scales between innovation and learning while addressing the pressing ethical considerations inherent in the integration of artificial intelligence into the realms of academic research and publication.

Light and shadow in the use of AI in scientific publishing: AI as a co-creator in writing and reviewing scientific articles

AI in paper composition

ChatGPT has been increasingly utilized as a resourceful tool for composing academic papers. A comprehensive search conducted in PubMed up to September 26, 2023, with no restrictions, revealed a substantial number of papers related to ChatGPT (1,313 publications). Refining the search to include the term ‘writing’ narrows the number to 192 publications. Additionally, a search for publications listing ChatGPT as a [Corporate Author] identified a total of five, illustrating the diverse range and involvement of ChatGPT in scientific literature up to the present date [5], [6], [7], [8], [9].
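The counts above were obtained through the PubMed web interface; equivalent searches can be reproduced programmatically through NCBI’s public E-utilities API. The following minimal sketch builds the query URLs (the hit counts will naturally differ from those reported as new papers are indexed; the helper function name is our own):

```python
from urllib.parse import urlencode

EUTILS_BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search_url(term: str, retmax: int = 0) -> str:
    """Build an E-utilities esearch URL for a PubMed query.

    retmax=0 asks only for the hit count; retmode=json requests a JSON
    response instead of the default XML.
    """
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS_BASE}?{urlencode(params)}"

# Queries mirroring the three searches described in the text:
url_all = build_pubmed_search_url("ChatGPT")
url_writing = build_pubmed_search_url("ChatGPT AND writing")
url_author = build_pubmed_search_url("ChatGPT[Corporate Author]")

# Fetching any of these with urllib.request.urlopen would return a JSON
# object whose esearchresult.count field holds the current hit count.
print(url_writing)
```

Filters such as publication dates can be added with the standard PubMed field tags in the `term` string, exactly as in the web interface.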

Interestingly, in our research we also found a ‘corrigendum’ in which the author notes that ChatGPT does not meet the journal’s authorship criteria per Elsevier’s Ethics Policies [10, 11], and has therefore been removed as an author but acknowledged for its substantial contribution [12, 13].

However, it is worth noting that in the academic realm, the increasing utility of AI-driven tools like ChatGPT in the composition of scientific papers has recently sparked substantial debate and concern regarding the attribution of authorship to such tools. Prestigious scientific journals, including Nature, JAMA and Science, have explicitly articulated their positions on this matter, elucidating that AI models like ChatGPT do not fulfil the requisites for authorship recognition. The Editor-in-Chief of Nature, Magdalena Skipper, has stated that attributing authorship implies a level of accountability for the content, a criterion that cannot be feasibly applied to large language models such as ChatGPT [14]. She advocates for the transparent documentation of the use of such models in the development of papers, preferably within the methods or acknowledgements sections of the paper, aligning the use of AI tools with ethical and transparent scientific writing practices.

Similarly, the Editor-in-Chief of the Science family of journals, Holden Thorp, has categorically stated that AI models will not be accredited as authors in their publications [15]. He emphasized that any use of AI-generated content must be appropriately cited; failing to do so could equate to plagiarism, reinforcing the imperative need for clear attribution and acknowledgment when integrating AI-driven tools in scientific writing.

Likewise, the JAMA Network has updated its instructions for authors to specify that AI and related technologies do not qualify for authorship. If utilized, authors must take accountability for content generated by such tools. The use of AI in content creation or paper assistance must be transparently reported, typically in the acknowledgment or methods section, detailing the tool’s name, version, and manufacturer [16].

These stances, along with the editorial position of Clinical Chemistry and Laboratory Medicine (CCLM) [17], underscore a broader consensus within the scientific community on the ethical implications and considerations inherent in the use of AI in academic research and publication. The clear delineation of contributions made by AI and the acknowledgment of such contributions reflect a commitment to maintaining the integrity and authenticity of scientific discourse in an era increasingly influenced by advancements in artificial intelligence.

This paper, exploring the efficacy and efficiency of AI in paper creation, has been developed with the invaluable assistance of ChatGPT. While all conceptual developments and intellectual contributions are solely our own, ChatGPT has played a crucial role in organizing ideas and articulating them effectively, thereby significantly reducing the time and effort required to produce high-quality content. The utilization of ChatGPT demonstrates the potential of advanced AI in academic research to facilitate the expression of human thoughts and ideas in scientific writing.

Peer review with AI

The peer review process, a cornerstone in maintaining the rigor and credibility of scientific publishing, involves meticulous scrutiny of the paper by experts in the respective field, assessing its validity, significance, and originality.

In this light, AI tools can be potent allies in enhancing the efficiency and efficacy of the peer review process. AI can assist reviewers by automating the screening of papers, flagging potential issues related to ethics, integrity, or quality, and facilitating a more focused and constructive review by human experts. AI may also assist editors and referees in avoiding the risk of plagiarism by highlighting similarities in both text and data. This aids in refining the content, improving the quality of the final publication, and in some cases, expediting the overall review cycle.

Transformers, that is, the artificial neural network architectures underlying these tools, exhibit advanced capabilities (and are likely to continue improving) in condensing extensive content, pinpointing central ideas and themes, and verifying a paper’s adherence to given guidelines or checklists, once these parameters are integrated into their prompts and context [18]. This makes them significant assets in supporting the review process in academic publishing.

However, the effectiveness of these AI models can paradoxically become a pitfall for users. It is well documented that reliance on efficient and effective systems for decision support or task performance can induce automation bias, ultimately leading to deskilling or the inhibition of skill development [19]. This is predominantly the case for tasks involving inquiry, information retrieval, judgment, evaluation, and the verification of correctness.

Consequently, excessive reliance on and the automation of crucial tasks using these models can affect the processes of reviewing and evaluating the quality of scientific papers. This overdependence risks impacting the rigor of review and selection processes, potentially compromising the thoroughness and integrity of evaluative activities due to the automation bias intrinsic to these sophisticated tools.

AI-generated prompts

In addition to aiding the writing and revision processes, AI can also offer support through generating structured prompts, facilitating the inception and elaboration of ideas.

In the field of AI, the term “prompt” denotes a user-provided input or seed text intended to invoke a specific type of response or output from the AI model [20]. Prompts serve as the initial stimuli ranging from specific questions, statements, or even incomplete sentences, and are instrumental in guiding AI models in generating coherent, contextually appropriate, and relevant responses. For example, a prompt might be structured as a question seeking information, clarification, or explanation on a specific topic, or it could be an incomplete sentence requiring the model to continue the text in a logically coherent manner.

The effective utilization of prompts is crucial in harnessing the capabilities of AI writing assistants, enabling users to tailor the AI’s responses to align with specific intents, themes, or subject matters. This enables the generation of content that is not only contextually coherent and relevant but also aligned with the user’s objectives and preferences in terms of style, tone, and content.

For instance, PromptBase is a notable platform in this domain: a marketplace of prompts designed to optimize human-machine interaction across various applications, including content generation and automated support, by leveraging state-of-the-art AI technologies to elicit coherent and contextually relevant text responses [21]. This platform exemplifies the advancements in AI-driven tools that can support researchers in developing structured, coherent, and high-quality scientific papers. A variety of specialized prompts designed to streamline various facets of the scientific writing and research process are available through PromptBase. These prompts cater to an array of requirements, from generating scientific articles to assisting with academic research paper rewrites. It is worth noting that these tools are offered at varying price points, ensuring affordability and accessibility for researchers (prices currently range between $3.99 and $5.99).

AI can assist in drafting initial content, providing a starting point for authors and reducing the time and effort required to produce text. By using ChatGPT, researchers can input a set of key points or an outline, and the tool can generate coherent, well-structured drafts based on these inputs, which can then be refined and edited by the human author. AI can provide instant feedback on various aspects of the text, such as grammar, style, tone, and coherence, aiding in the refinement of the paper. As an example, a simple AI tool such as Grammarly can analyse text and offer suggestions for improving grammar, punctuation, and style, helping authors polish their writing.
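The outline-to-draft workflow just described can be sketched as follows. The template wording and function name are purely illustrative choices, and the actual call to a chat model (whose API details vary by provider) is deliberately omitted:

```python
def build_drafting_prompt(section: str, key_points: list[str],
                          tone: str = "formal academic") -> str:
    """Assemble a structured drafting prompt from an outline.

    The resulting string would be sent as the user prompt to a chat
    model; the template here is one illustrative phrasing, not a
    prescribed standard.
    """
    bullet_list = "\n".join(f"- {point}" for point in key_points)
    return (
        f"Draft the '{section}' section of a scientific paper in a "
        f"{tone} tone, covering the following key points:\n{bullet_list}\n"
        "Keep the text coherent and well structured."
    )

prompt = build_drafting_prompt(
    "Discussion",
    ["AI speeds up drafting", "human oversight remains essential"],
)
# The model's returned draft is then refined and edited by the human author.
print(prompt)
```

The value of such a wrapper is consistency: every draft request carries the same structural constraints (section, tone, explicit key points), which makes the model’s output easier to compare and revise.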

AI tools can assist in conducting preliminary research, summarizing articles, and extracting key information from academic papers, which can be particularly useful in literature review sections. For instance, AI-powered tools like Iris.ai can read and understand the context of scientific papers and help in extracting relevant information, aiding researchers in gathering and assimilating knowledge efficiently [22]. During the revision process, AI can assist in optimizing the text, suggesting improvements in word choice, sentence structure, and overall flow. Tools like ProWritingAid can provide comprehensive feedback on various elements of writing, enabling authors to enhance the clarity, coherence, and impact of their text [23]. The spectrum of advantages and challenges across different facets of the publication process are summarized in Table 1.

Interestingly, other tools, such as WordTune, integrate similar rewriting and refinement capabilities. ChatGPT-4 and similar advanced models, equipped with a code-interpreter functionality, can accept uploaded files and execute custom statistical analysis scripts, enabling them to process and analyse datasets by applying various statistical methods as directed by the user.
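As an illustration of the kind of script such a code-interpreter feature might run, here is a self-contained Welch’s t-test on two small, invented samples, using only the Python standard library:

```python
import math
from statistics import mean, variance

def welch_t(sample_a: list[float], sample_b: list[float]) -> float:
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (mean(sample_a) - mean(sample_b)) / se

# Invented example data, e.g., a measurand in two patient groups:
group_a = [1.0, 2.0, 3.0, 4.0, 5.0]
group_b = [2.0, 4.0, 6.0, 8.0, 10.0]
t = welch_t(group_a, group_b)
print(f"t = {t:.3f}")  # negative, since group_a has the lower mean
```

Exactly as argued later in this paper, the statistic alone is not a conclusion: choosing the test, checking its assumptions, and interpreting the result in context remain the researcher’s responsibility, whether the script is written by hand or generated by an AI tool.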

Balancing innovation with learning: risks and benefits of using AI for (junior?) researchers in scientific writing

The use of AI tools for writing papers or grant proposals could also deeply influence the education of young researchers and young professionals in laboratory medicine [24]. The classical learning pathway includes long, specific training in constructing the different parts of a paper (Introduction, Materials and Methods, Results, and Discussion), traditionally supervised by one or more experts, who are fundamental for indicating length, style, correct wording, acronyms, and citations. In particular, this educational path is important for learning to build an accurate and solid connection between data, discussion, and conclusions, which is the most important skill for a young professional to acquire.

The use of AI tools upsets this traditional educational procedure, reducing the time needed to write and increasing the volume of written documents. The vulnerability of younger generations, increasingly dependent on AI assistance, poses a grave concern: it could lead to a harmful cycle in which they lack the skills to independently validate and critically evaluate AI-generated outcomes, undermining the very foundations of scientific competence and integrity. This innovation also induces further problems, e.g., intellectual property infringement, possible plagiarism, greater independence of researchers from colleagues and institutions, and a correspondingly undefined or less defined responsibility of teams. Moreover, the medium-to-high semantic ability of the new tools, combined with their low level of cognitive depth, could induce a general flattening of scientific debate: the risk of a huge increase in papers of lower scientific level than the current one is real, possibly inducing a collapse of the whole system of scientific literature, already affected by many problems of sustainability [25].
In light of the numerous unanswered questions raised by AI, the process of formulating a reform agreement for research assessment was started in January 2022, aligning with the goals of organizations such as CoARA (https://coara.eu/).

Additionally, the rise of open access and predatory journals intensifies these challenges [26]. With more possibilities for quick publication, especially when using AI-assisted writing, researchers face increased risks of compromising research credibility and quality.

Other issues arise from ChatGPT and similar generative AI systems based on large language models. AI models can return false information, also referred to as hallucinations [27]: transformers such as GPT models can generate incorrect or nonsensical output because they produce responses based on patterns learned from training data, without an understanding of underlying facts or truth. This shortcoming can result in misinformation if users rely solely on the generated content without verification. Although the rate of hallucinations has been observed to decrease as large language models evolve [28], this remains the main problem of using these systems for any purpose other than “producing plausible text”. Another important issue is their uneven understanding: these models may exhibit proficiency in complex topics yet struggle with seemingly simpler concepts [26], a phenomenon that Dell’Acqua et al. have recently called the “jagged AI frontier” [29, 30]. Additionally, the inherent inconsistencies in these systems can be misleading; they are engineered to generate user-pleasing responses, which can sometimes mask inaccuracies and lead users to unintentionally accept and propagate misunderstood or misrepresented information.

Transformers, such as those used in AI language models, can exhibit a degree of inconsistency by generating varying responses to the same input or request [31]. This phenomenon, noted by Wallach, creates significant concerns regarding the reliability and reproducibility of the information provided by such models [32]. Particularly in professional and academic environments, where consistency and accuracy of information are crucial, the variability in responses from these models can pose serious challenges, leading to potential complications in the interpretation and application of the generated content.

Given these considerations, it is imperative to exercise caution when relying solely on AI for data analysis, emphasizing the indispensable role of human oversight in the critical interpretation and validation of results to mitigate the risk of misinformed decisions based on potentially inconsistent AI-generated content.

Ethical implications of AI in scientific paper writing: ensuring originality, addressing plagiarism, and guaranteeing authenticity in the age of artificial intelligence

The integration of AI in content generation introduces intricate dilemmas surrounding authorship and contribution [33]. Distinguishing human intellectual input from AI-generated content is not only crucial but also complex, prompting critical deliberations about acknowledgment and the exact extent of AI’s participation. Intellectual property violation is therefore an important issue. When responding to similar queries from different users, AI models can generate identical or highly similar content [34]. This poses intellectual property concerns, particularly if multiple users utilize the generated content for publication or commercial purposes. The academic community is in urgent need of unequivocal guidelines and a unified consensus on the acknowledgment of AI’s contributions: whether it mandates authorship or an alternative form of recognition, and the methodology for attributing AI-generated work appropriately.

Responsibility attribution is indispensable when AI participates in content creation. The presence of inaccuracies, errors, or unforeseen consequences in AI-produced content demands a profound understanding of liability, raising questions on who should bear the responsibility, the human author or the AI developers, especially in instances of misinformation or flawed data interpretation. The establishment of robust accountability frameworks is paramount to resolve potential disagreements and uphold the sanctity of scientific endeavours.

Furthermore, AI’s role in scientific writing brings forth pronounced apprehensions about intellectual property rights. The complexity arises because AI lacks legal recognition and rights, rendering the ownership of AI-generated content ambiguous, potentially leading to disputes and obstructing scientific advancements. A refined legal framework is imperative to elucidate the ownership protocols of AI-created content and safeguard the rights of human authors, AI developers, and other stakeholders. In addition, AI models might have been trained on copyrighted content or documents containing personal data, resulting in potential legal issues. Users employing their output without proper verification and adaptation might therefore inadvertently commit offenses such as copyright infringement or General Data Protection Regulation (GDPR) non-compliance.

Finally, AI’s learning from existing data can perpetuate and incorporate inherent biases, causing skewed results, misrepresentation, and the reinforcement of harmful stereotypes in the generated content. The rectification of these biases and the assurance of equal and accurate representation of varied perspectives are vital to maintaining the principles of equality and impartiality in scientific investigation.

Plagiarism concerns and detection

In the realm of AI-generated content, ensuring originality and combating plagiarism is of paramount importance. AI tools, imbued with advanced algorithms, have demonstrated unprecedented capabilities in detecting plagiarism, meticulously comparing content against a vast spectrum of existing works. This feature is instrumental in preserving the integrity of academic and scientific discourse, enabling the identification of even subtly paraphrased content, and ensuring the true essence of original works is neither misrepresented nor diluted [35].

However, as we leverage AI’s capabilities, the nuances of maintaining originality in AI-generated content emerge as considerable challenges. AI models, trained on expansive datasets compiled from pre-existing texts, may inadvertently reflect the stylistic and structural elements of their training data, potentially leading to unintentional reproductions of existing works. Given AI’s inherent design to generate content based on learned information, distinguishing between inspiration and imitation becomes a complex endeavour, necessitating rigorous scrutiny.

AI models can exhibit and perpetuate biases present in their training data, raising significant ethical concerns [36]. Moreover, many academic journals and institutions emphasize the importance of originality and academic integrity, strictly requiring authors to produce content without the aid of AI-generated support. The use of such systems can be viewed as a breach of ethical guidelines, potentially resulting in issues related to plagiarism and the authenticity of the work. Authors are typically required to disclose the methods and tools used in producing their work, and the undisclosed use of language generation tools could raise questions about the validity and originality of the research [37].

The resemblance of AI-generated content to existing works can potentially lead to intricate disputes over copyright infringement. As we navigate through the evolving landscapes of legal and ethical frameworks surrounding AI-generated content, there is a pressing need for clear and cohesive guidelines to address the multifaceted aspects of intellectual property rights, fair use, and attribution in AI’s contribution to writing.

To mitigate these inherent challenges, the development and implementation of advanced and reliable plagiarism detection tools, specifically tailored to analyse AI-generated content, are imperative. Such tools must proficiently differentiate between general knowledge, coincidental similarities, and blatant plagiarism. Alongside technological solutions, the establishment of ethical writing practices, stringent review processes, and heightened awareness regarding the responsible use of AI are pivotal in minimizing the risks associated with plagiarism in AI-generated content [38, 39].
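Word-n-gram overlap is one of the simple lexical signals such detection tools build on (production systems combine many more, including semantic ones); the following minimal sketch, with hypothetical function names and invented example texts, illustrates the idea:

```python
def word_ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Set of lowercased word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams: 1.0 for identical texts,
    near 0.0 for unrelated ones."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "the model was trained on a large corpus of scientific text"
verbatim = "the model was trained on a large corpus of scientific text"
rephrased = "a large corpus of scientific text served to train the model"

print(jaccard_similarity(original, verbatim))   # 1.0
print(jaccard_similarity(original, rephrased))  # well below 1.0
```

The rephrased example scores far lower despite conveying the same idea, which illustrates why purely lexical overlap misses paraphrased or AI-rewritten plagiarism, and why the more sophisticated tools and the human scrutiny discussed here remain necessary.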

In addressing these multifaceted challenges, the emphasis on ethical and responsible use of AI becomes increasingly critical. The collaboration between researchers, writers, editors, and AI developers is essential to formulate best practices, ensuring that AI-generated content adheres to the highest standards of originality, authenticity, and ethical integrity. This encompasses transparent disclosure of AI’s role in the writing process, proper attribution, and due diligence in citing sources [40, 41].

When AI becomes a co-creator of scientific content, ensuring the authenticity and originality of papers emerges as a critical concern; therefore, rigorous exploration of methodologies to preserve the authenticity of such papers is needed. AI, drawing upon extensive datasets and learned patterns, has the capability to contribute significantly to the construction of scientific narratives; however, distinguishing the origin of contributions – whether human or artificial – becomes an essential endeavour. Establishing clear demarcations between human intellectual inputs and AI-generated content may be crucial to avoid the dilution of individual creativity and to maintain the essence of original thought within scientific discourse [42]. This involves developing robust frameworks and guidelines that facilitate transparent attribution of contributions, ensuring that the essence of human creativity is not overshadowed by the inputs of AI [43]. It also requires the implementation of systematic processes to scrutinize and validate the uniqueness and originality of AI-enhanced content, preventing inadvertent reproductions of existing works and preserving the integrity of scientific literature. Balancing the innovative contributions of AI with the imperatives of maintaining authentic and original scientific discourse is pivotal for the harmonious advancement of AI-integrated scientific writing. The pursuit of this balance will shape the evolution of scientific writing, impacting the way knowledge is constructed, disseminated, and perceived in the AI-augmented future of scientific research.

Advocating for enhanced recognition of reviewer contributions

Scientific literature classically rests on the voluntary contribution of scientists and experts who perform peer review of submitted papers. The integration of AI in scientific writing accentuates the pivotal role of the human reviewer in maintaining the integrity and originality of scientific discourse. The ability of AI tools to rephrase content brings unprecedented challenges in plagiarism detection, since discerning subtly altered text and verifying authenticity often require human expertise [44]. Only a seasoned and astute human reviewer can meticulously identify instances of plagiarism even when texts have been extensively rephrased, validating the authenticity and uniqueness of the content and ensuring the preservation of intellectual property. It is worth highlighting that this activity will demand ever more time, precisely when the availability of qualified professionals is shrinking and the time constraints they face are growing.
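The reason rephrased text eludes conventional overlap-based detectors can be sketched with a toy comparison (the strings below are invented, and real plagiarism checkers use far more sophisticated fingerprinting than this simple character-level ratio):

```python
from difflib import SequenceMatcher

def lexical_similarity(a: str, b: str) -> float:
    """Share of matching character runs between two texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

original = "Large language models can rephrase text while preserving its meaning."
verbatim = original
paraphrase = "Even when an AI tool rewords a passage entirely, the sense survives."

print(lexical_similarity(original, verbatim))    # 1.0: a verbatim copy is trivially flagged
print(lexical_similarity(original, paraphrase))  # far lower: the paraphrase slips past
```

A human reviewer, unlike the overlap score, recognizes that the second sentence restates the first, which is exactly the gap the passage above describes.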

The near-geometric growth in the number of journals, the proliferation of journal categories devoted to increasingly specific topics, and the rise in the number of papers (driven by the pressure to publish in order to obtain funding and to build and maintain the careers and reputations of scientists, institutions, and academies) have produced a shortage of available reviewers. As invitations to review multiply, acceptance rates decline. This problem could become an additional driver of the adoption of intelligent-augmentation programs, especially for certain types of papers, such as reviews and meta-analyses [45]. In this scenario, the role of the reviewer ascends to paramount importance, acting as the gatekeeper of scientific integrity [46]. However, the academic community often overlooks the invaluable contributions of reviewers. Typically, reviewers operate in anonymity, their critical contributions go uncompensated by journals, and there is no established index, akin to the H-index for publications, to quantify their contributions and impact on the academic community.

This lack of recognition and rewarding of the reviewer’s work underscores a crucial need for evolution and reform within the scientific community. It is imperative for the academic community to evolve and institutionalize mechanisms to recognize, value, and reward the contributions of reviewers in upholding the standards of scientific writing. There should be tangible acknowledgments and incentives, perhaps through the introduction of quantifiable metrics or indices reflecting their contributions to the enhancement of scientific literature’s quality and reliability.
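Purely as an illustration of what such a quantifiable metric could look like, one might imagine an index built by direct analogy with the H-index. The "R-index" name, the editor-assigned ratings, and the scoring rule below are all hypothetical; nothing of the kind is standardized today:

```python
def reviewer_index(review_ratings: list[int]) -> int:
    """Hypothetical 'R-index', constructed by analogy with the H-index:
    the largest r such that the reviewer has completed at least r reviews,
    each rated r or higher (e.g. on editor-assigned quality scores)."""
    ratings = sorted(review_ratings, reverse=True)
    r = 0
    for rank, score in enumerate(ratings, start=1):
        if score >= rank:
            r = rank
        else:
            break
    return r

# Invented rating histories for three reviewers with different track records:
print(reviewer_index([9, 7, 6, 2, 1]))  # 3
print(reviewer_index([2, 2, 2, 2]))     # 2
print(reviewer_index([]))               # 0
```

Like the H-index, such a measure would reward sustained, high-quality reviewing rather than sheer volume, which is the balance the proposal above calls for.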

Establishing such reforms would not only elevate the status of the reviewer within the academic community but also fortify the foundational values of authenticity and originality in scientific research. As we navigate through the complexities introduced by AI in scientific writing, enhancing the recognition and support for the human experts tasked with safeguarding the quality and integrity of scientific output becomes indispensable. Their discerning evaluations act as the final bastion against the potential dilution of scientific authenticity, ensuring the responsible and ethical integration of AI in the realm of scientific exploration. This evolution in recognizing and valuing the role of the reviewer is essential to maintain a balanced hybridization between human intellect and artificial intelligence, ultimately fostering a more robust, equitable, and enlightened scientific community in the AI-augmented future.

Summary and outlook

Emerging AI-driven technologies are increasingly accessible, suggesting their utility in a variety of future scenarios. Although these tools have demonstrated significant utility in scientific writing and in paper peer review, there is still insufficient consensus regarding their actual advantages and disadvantages (Table 1). As technological tools, they can be effectively employed in certain instances, such as grammar, punctuation, or language correction. In other cases, however, their use teeters on the edge of plagiarism and copyright infringement and, perhaps most importantly, might curtail the role of academia in educating the next generation of students and transferring skills to emerging professionals. As disruptive technologies, their adoption cannot be prevented; indeed, several AI-based writing tools have been commercialized for a considerable period and have gained extensive use among authors. Nevertheless, in recognition of the genuinely human capacity for creative problem-solving and critical thinking, indispensable in scientific endeavors, we must approach the integration of AI in study design with discernment, ensuring that it serves as an adjunct to, rather than a replacement for, the nuanced and innovative contributions of human intellect.

It is our hope that cumulative efforts can harness the positive potential of AI technologies while strongly discouraging their misuse or inappropriate application.


Corresponding author: Anna Carobene, Laboratory Medicine, IRCCS San Raffaele Scientific Institute, Via Olgettina, 60, 20132, Milan, Italy, Phone: +39 02 26432850, E-mail:

Acknowledgments

We acknowledge the support of ChatGPT-4 for its assistance in organizing and articulating ideas during the paper preparation. While the intellectual contributions and conceptual developments are entirely those of the authors, ChatGPT’s role in streamlining the writing process is duly recognized.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: Anna Carobene initiated the study’s concept, authored the preliminary draft, offered crucial insights, and played a pivotal role in shaping the final paper. Andrea Padoan collaborated in shaping the study’s concept, penned the initial draft, provided vital commentary, and had a hand in finalizing the paper. Federico Cabitza, an authority in human-computer interaction, rendered critical expertise and was instrumental in refining the final paper. Giuseppe Banfi, lending expertise in ethics, furnished critical insights and oversaw the research progression from start to finish. Mario Plebani, from an editor’s perspective, provided valuable feedback and played a supervisory role throughout the research. All authors have accepted responsibility for the entire content of this paper and approved its submission.

  4. Competing interests: The authors state no conflict of interest.

  5. Research funding: None declared.

References

1. Dwivedi, YK, Kshetri, N, Hughes, L, Slade, EL, Jeyaraj, A, Kar, AK, et al. So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. Int J Inf Manag 2023;71:102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642.

2. Nature. AI will transform science: now researchers must tame it. Nature 2023;621:658. https://doi.org/10.1038/d41586-023-02988-6.

3. Bianchini, S, Müller, M, Pelletier, P. Artificial intelligence in science: an emerging general method of invention. Res Pol 2022;51:104604. https://doi.org/10.1016/j.respol.2022.104604.

4. Snyder, C, Zaydman, MA, Chong, T, Baron, J, Chen, JH, Jackson, B. Generative artificial intelligence: more of the same or off the control chart? Clin Chem 2023;69:1101–6. https://doi.org/10.1093/clinchem/hvad129.

5. Woodnutt, S, Allen, C, Snowden, J, Flynn, M, Hall, S, Libberton, P, et al. Could artificial intelligence write mental health nursing care plans? J Psychiatr Ment Health Nurs 2023;1–8. https://doi.org/10.1111/jpm.12965.

6. Benichou, L. The role of using ChatGPT AI in writing medical scientific articles. J Stomatol Oral Maxillofac Surg 2023;124:101456. https://doi.org/10.1016/j.jormas.2023.101456.

7. Curtis, N, ChatGPT. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr Infect Dis J 2023;42:275. https://doi.org/10.1097/INF.0000000000003852.

8. King, MR, ChatGPT. A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cell Mol Bioeng 2023;16:1–2. https://doi.org/10.1007/s12195-022-00754-8.

9. ChatGPT Generative Pre-Trained Transformer, Zhavoronkov, A. Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience 2022;9:82. https://doi.org/10.18632/oncoscience.571.

10. Salvagno, M, Taccone, FS, Gerli, AG. Can artificial intelligence help for scientific writing? Crit Care 2023;27:75. https://doi.org/10.1186/s13054-023-04390-0.

11. Salvagno, M, Taccone, FS, Gerli, AG. Correction to: can artificial intelligence help for scientific writing? Crit Care 2023;27:99. https://doi.org/10.1186/s13054-023-04390-0.

12. O’Connor, S. Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ Pract 2023;66:103537. https://doi.org/10.1016/j.nepr.2022.103537.

13. O’Connor, S. Corrigendum to open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ Pract 2023;66:103572. https://doi.org/10.1016/j.nepr.2023.103572.

14. Stokel-Walker, C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023;613:620. https://doi.org/10.1038/d41586-023-00107-z.

15. Thorp, HH. ChatGPT is fun, but not an author. Science 2023;379:313. https://doi.org/10.1126/science.adg7879.

16. Flanagin, A, Bibbins-Domingo, K, Berkwits, M, Christiansen, SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA 2023;329:637. https://doi.org/10.1001/jama.2023.1344.

17. Plebani, M. ChatGPT: angel or demon? Critical thinking is still needed. Clin Chem Lab Med 2023;61:1131–2. https://doi.org/10.1515/cclm-2023-0387.

18. Luo, R, Sun, L, Xia, Y, Qin, T, Zhang, S, Poon, H, et al. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings Bioinf 2022;23:bbac409. https://doi.org/10.1093/bib/bbac409.

19. Parasuraman, R, Manzey, DH. Complacency and bias in human use of automation: an attentional integration. Hum Factors 2010;52:381. https://doi.org/10.1177/0018720810376055.

20. Thirunavukarasu, AJ, Ting, DSJ, Elangovan, K, Gutierrez, L, Tan, TF, Ting, DSW. Large language models in medicine. Nat Med 2023;29:1930. https://doi.org/10.1038/s41591-023-02448-8.

21. PromptBase. Prompt marketplace: Midjourney, ChatGPT, DALL·E, Stable Diffusion & more. https://promptbase.com/ [Accessed 5 Oct 2023].

22. Iris.AI. Your researcher workspace – leading AI for your research challenge. https://iris.ai/ [Accessed 5 Oct 2023].

23. ProWritingAid. Great writing made easy with ProWritingAid. https://prowritingaid.com/ [Accessed 5 Oct 2023].

24. Aubignat, M, Diab, E. Artificial intelligence and ChatGPT between worst enemy and best friend: the two faces of a revolution and its impact on science and medical schools. Rev Neurol 2023;179:520–2. https://doi.org/10.1016/j.neurol.2023.03.004.

25. Ariyaratne, S, Iyengar, KP, Nischal, N, Chitti, BN, Botchu, R. A comparison of ChatGPT-generated articles with human-written articles. Skeletal Radiol 2023;52:1755. https://doi.org/10.1007/s00256-023-04340-5.

26. Cukier, S, Helal, L, Rice, DB, Pupkaite, J, Ahmadzai, N, Wilson, M, et al. Checklists to detect potential predatory biomedical journals: a systematic review. BMC Med 2020;18:104. https://doi.org/10.1186/s12916-020-01566-1.

27. Lee, M. A mathematical investigation of hallucination and creativity in GPT models. Mathematics 2023;11:2320. https://doi.org/10.3390/math11102320.

28. Ali, R, Tang, OY, Connolly, ID, Fridley, JS, Shin, JH, Zadnik Sullivan, PL, et al. Performance of ChatGPT, GPT-4, and Google Bard on a neurosurgery oral boards preparation question bank. Neurosurgery 2023;93:1090–8. https://doi.org/10.1227/neu.0000000000002551.

29. Bender, EM, Gebru, T, McMillan-Major, A, Shmitchell, S. On the dangers of stochastic parrots. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21); 2021:610–23. https://doi.org/10.1145/3442188.3445922.

30. Dell’Acqua, F, McFowland, E, Mollick, ER, Lifshitz-Assaf, H, Kellogg, K, Rajendran, S, et al. Navigating the jagged technological frontier: field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013; 2023. https://doi.org/10.2139/ssrn.4573321.

31. Cadamuro, J, Cabitza, F, Debeljak, Z, De Bruyne, S, Frans, G, Perez, SM, et al. Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI). Clin Chem Lab Med 2023;61:1158–66. https://doi.org/10.1515/cclm-2023-0355.

32. Wallach, H. Computational social science ≠ computer science + social data. Commun ACM 2018;61:42. https://doi.org/10.1145/3132698.

33. Koo, M. The importance of proper use of ChatGPT in medical writing. Radiology 2023;307:e230312. https://doi.org/10.1148/radiol.230312.

34. Liu, Y, Ott, M, Goyal, N, Du, J, Joshi, M, Chen, D, et al. RoBERTa: a robustly optimized BERT pretraining approach. In: ICLR 2020 conference submissions; 2020. Available from: https://openreview.net/forum?id=SyxS0T4tvS.

35. Ramesh, RN, Maheshkumar, BL, Namrata, CM. A review on plagiarism detection tools. Int J Comput Appl 2015;125:16. https://doi.org/10.5120/ijca2015906113.

36. Gebru, T, Morgenstern, J, Vecchione, B, Wortman Vaughan, J, Wallach, H, et al. Datasheets for datasets. Commun ACM 2021;64:86–92. https://doi.org/10.1145/3458723.

37. Qasem, F. ChatGPT in scientific and academic research: future fears and reassurances. Libr Hi Tech News 2023;40:3. https://doi.org/10.1108/LHTN-03-2023-0043.

38. Park, J-Y. Could ChatGPT help you to write your next scientific paper? Concerns on research ethics related to usage of artificial intelligence tools. J Korean Assoc Oral Maxillofac Surg 2023;49:105. https://doi.org/10.5125/jkaoms.2023.49.3.105.

39. Kleebayoon, A, Wiwanitkit, V. Artificial intelligence chatbots, plagiarism and basic honesty: comment. Cell Mol Bioeng 2023;16:173. https://doi.org/10.1007/s12195-023-00759-x.

40. Gaggioli, A. Ethics: disclose use of AI in scientific manuscripts. Nature 2023;614:413. https://doi.org/10.1038/d41586-023-00381-x.

41. Nature. Tools such as ChatGPT threaten transparent science: here are our ground rules for their use. Nature 2023;613:612. https://doi.org/10.1038/d41586-023-00191-1.

42. Doyal, AS, Sender, D, Nanda, M, Serrano, RA. ChatGPT and artificial intelligence in medical writing: concerns and ethical considerations. Cureus 2023;15:e43292. https://doi.org/10.7759/cureus.43292.

43. Koga, S. The integration of large language models such as ChatGPT in scientific writing: harnessing potential and addressing pitfalls. Korean J Radiol 2023;24:924. https://doi.org/10.3348/kjr.2023.0738.

44. Else, H. Abstracts written by ChatGPT fool scientists. Nature 2023;613:423. https://doi.org/10.1038/d41586-023-00056-7.

45. Huang, J, Tan, M. The role of ChatGPT in scientific communication: writing better scientific review articles. Am J Cancer Res 2023;13:1148–54. https://europepmc.org/article/MED/37168339.

46. Zheng, H, Zhan, H. ChatGPT in scientific writing: a cautionary tale. Am J Med 2023;136:725. https://doi.org/10.1016/j.amjmed.2023.02.011.

Received: 2023-10-12
Accepted: 2023-11-13
Published Online: 2023-11-30
Published in Print: 2024-04-25

© 2023 Walter de Gruyter GmbH, Berlin/Boston
