Editorial Policy on the Use of Artificial Intelligence (AI)

1. Framework and Principles

Onco.News Journal recognizes that Artificial Intelligence (AI) and tools based on machine learning algorithms (e.g., ChatGPT, Gemini, Copilot, DeepL Write, Grammarly) are increasingly present in the processes of research, writing, review, and scientific editing.

The responsible use of these technologies must respect the principles of:

  • Scientific integrity – research must be transparent, traceable, and free from manipulation, fabrication, or falsification of data.
  • Transparency – the use of AI must be clearly declared.
  • Human responsibility – authors, reviewers, and editors are responsible for all scientific content.
  • Confidentiality – submitted articles must not be exposed to external systems without authorization.


This policy follows the recommendations of COPE (Committee on Publication Ethics) and ICMJE (International Committee of Medical Journal Editors).

2. Authors

2.1. Authorship

  • AI tools may not be listed as authors or co-authors of submitted articles.
  • Only individuals who meet the ICMJE authorship criteria (substantial contribution, drafting or critical revision, final approval, and responsibility for the work) can be considered authors.

2.2. Permitted use of AI

Authors may use AI in a limited and responsible manner for tasks such as:

  • writing support (grammatical improvement, clarity, style, or translation),
  • preliminary organization of the bibliography (with human validation),
  • synthesis of previously published literature.

2.3. Prohibited use of AI

AI must not be used to:

  • generate or manipulate research data,
  • create figures, tables, or results that simulate real data,
  • fabricate or invent references,
  • replace critical analysis and human interpretation.

2.4. Mandatory Declaration

Any use of AI must be explicitly declared in the article, preferably in the Methods section (when related to analysis) or in the Acknowledgments section (when related to writing/editing). 

Example of declaration:

"This article used the tool [name of AI, version] exclusively for [specific function, e.g.: support for language revision/translation]. All scientific content and interpretations are the responsibility of the authors." 

Omission of this information may be considered scientific misconduct.


3. Reviewers

3.1. Confidentiality

  • Articles submitted for peer review are confidential documents.
  • Reviewers may not input parts or the entirety of the article into external AI tools, as this violates confidentiality and copyright.


3.2. Permitted use of AI

Reviewers may, in a limited way, use AI to:

  • improve the clarity or translation of their own review, provided that no content from the article is entered into the tool.


3.3. Mandatory Declaration

When submitting the review, the reviewer must confirm (via checkbox in the editorial system) that:

  • they maintained the confidentiality of the article,
  • they did not use external AI to process or analyze the content of the article.


4. Editors

4.1. Permitted use of AI

The editorial team may use AI for support tasks, such as:

  • initial compliance screening (e.g., ethics checklists, reference analysis),
  • support in identifying potential reviewers (through open databases),
  • production of simplified summaries for scientific dissemination.


4.2. Prohibited use of AI

  • AI cannot replace human editorial judgment.
  • Decisions regarding acceptance, rejection, or revision of articles are always the responsibility of the editors.


4.3. Transparency

Any use of AI in internal editorial processes must be documented and, when relevant, communicated transparently.


5. Final Responsibility

The use of AI does not transfer or diminish the ethical and scientific responsibility of authors, reviewers, or editors.

  • All content generated or assisted by AI must be validated by humans before being submitted, published, or accepted.
  • Onco.News Journal reserves the right to:
    • request additional clarification regarding the use of AI,
    • correct, reject, or retract articles in cases of improper or omitted use of AI,
    • follow COPE protocols in cases of suspected scientific misconduct.

6. Recommended Best Practices

  • Full transparency: always declare any use of AI.
  • Human validation: never accept AI-generated results without critical verification.
  • Responsible use: AI should only complement, never replace, scientific authorship.
  • Confidentiality: articles and reviews must not be exposed to external platforms without consent.

This policy will be reviewed periodically to reflect international recommendations on scientific publishing ethics and the evolution of AI technologies.