Evolving policies on AI-generated figures at Nature, Science, and Cell: what's allowed, what's banned, and how to cite AI tools properly in 2026.
SciFig Team
Scientific Illustration Experts
You have spent three weeks on a set of experiments. The data is solid. The story is clear. Now you need figures, and you have heard that AI tools can produce publication-quality scientific illustrations in minutes. There is just one problem: you do not know whether your target journal will accept them, flag them, or penalize you for using them at all.
The fear of retraction over an AI policy violation is real. Journals have issued corrections and expressions of concern over undisclosed AI use, and the reputational stakes of getting this wrong are not trivial. But here is the thing: the policies are more navigable than the discourse around them suggests. The major publishers have staked out reasonably clear positions, and the core requirements (disclosure, accuracy, no fabrication) are consistent enough across the landscape that researchers can build a reliable mental model.
This SciFig guide covers the current stances of Nature Publishing Group, Science (AAAS), and Cell Press, identifies what is universally off-limits, and gives you practical templates for citing AI tools correctly in your manuscripts.
Note: Journal policies evolve. Always verify the current guidelines directly on the journal's author information pages before submitting. The positions described here reflect publicly available guidance as of early 2026.
Understanding the Distinction: AI-Generated vs. AI-Assisted
Before reviewing individual publisher policies, it helps to understand the distinction journals themselves draw most sharply: the difference between AI tools that generate imagery from scratch and those that assist with existing imagery.
AI-generated figures are illustrations produced by a generative model based on a prompt or description, where no pre-existing image was the direct input. A researcher types "show a CRISPR-Cas9 complex cutting a double-stranded DNA molecule" into a text-to-figure tool and receives a complete illustration. The AI created the visual content; it did not merely process something that already existed.
AI-assisted figures involve using AI tooling to enhance, reformat, clean up, or recompose an existing image. Examples include using AI-powered tools to upscale resolution, remove background noise from microscopy images, adjust white balance, or reorganize panel layouts. The content, meaning what the figure actually shows, originated with the researcher; the AI only helped with presentation or technical quality.
This distinction matters because journals scrutinize these categories differently. AI assistance for non-substantive technical improvement is broadly tolerated or accepted, often with no disclosure requirement beyond a methods note. AI generation of scientific content from scratch is where requirements around disclosure, accuracy, and appropriate use become more stringent.
A related distinction: data visualization (charts, graphs, plots) is a third category that falls mostly outside AI-figure concerns. If your bar graph is produced by R or Python from real experimental data, the fact that a language model helped you write the plotting code is generally immaterial. The scientific figure represents actual data, not AI-generated imagery.
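To make the data-visualization category concrete, here is a minimal sketch of a chart built by plotting code from researcher-supplied values. The group names, numbers, and filename are placeholders, not real data; the point is that the figure's content comes from the data, so AI help with the code itself raises no figure-policy issue.

```python
# Minimal sketch: a data-visualization figure produced by plotting code.
# The values below are placeholders standing in for real experimental data.
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt

conditions = ["Control", "Treatment A", "Treatment B"]  # hypothetical groups
means = [1.0, 2.4, 1.7]                                 # hypothetical measurements

fig, ax = plt.subplots()
ax.bar(conditions, means)
ax.set_ylabel("Relative expression (a.u.)")
ax.set_title("Figure 1. Expression by condition")
fig.savefig("figure1.png", dpi=300)  # journal-typical resolution
```

Whether a language model drafted this script or a human did, the resulting figure represents the supplied data, which is why journals treat it differently from generative imagery.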
The policies below focus primarily on conceptual and illustrative figures (diagrams, schematics, pathway illustrations, experimental design overviews) rather than data plots.
Nature Publishing Group Policies
Nature Publishing Group (NPG), which publishes Nature, Nature Medicine, Nature Methods, Nature Communications, and dozens of other journals, established AI content policies in early 2023 and has refined them since. Their position on AI-generated figures is permissive with conditions, not prohibitive.
The core requirements across NPG titles are:
1. Disclosure is mandatory. Any figure that was substantively produced using a generative AI tool must be disclosed. This disclosure belongs in the Methods section, not buried in an acknowledgments footnote. Nature-family journals expect a clear statement identifying the tool and describing how it was used.
2. Authors bear full responsibility for accuracy. NPG policy is explicit: using an AI tool does not transfer responsibility for a figure's scientific accuracy. If an AI-generated diagram misrepresents a biological mechanism, the authors are responsible. This means researchers must independently verify that AI-generated illustrations accurately reflect the underlying science, and be prepared to defend them in peer review.
3. AI cannot be listed as an author. This is consistent across all publishers. AI tools do not meet authorship criteria (they cannot take responsibility for the work, respond to correspondence, or provide consent). Listing an AI tool as an author is grounds for rejection.
4. Images cannot be manipulated to misrepresent data. This rule predates AI but applies equally to AI tools. Using generative AI to alter or enhance experimental images (microscopy, gels, histology) in ways that change what the data shows is prohibited.
Within these guardrails, NPG journals will accept AI-generated illustrative figures (pathway diagrams, conceptual schematics, experimental design illustrations) provided they are accurate, disclosed, and used appropriately for illustrative rather than evidential purposes. The policy is about transparency and accuracy, not about banning the technology.
Science (AAAS) Policies
Science magazine and its family of journals (Science Advances, Science Translational Medicine, Science Immunology) took a more cautious initial stance on AI-generated content, then shifted toward a disclosure-centered framework consistent with Nature's approach.
The AAAS position as of 2026:
Generative AI is permitted for illustrative figures with mandatory disclosure. Science now requires authors to include a statement in the methods or acknowledgments section describing any use of AI tools in figure preparation. The statement should name the specific tool and describe the nature of its use.
AI-generated text in manuscripts requires separate disclosure. Science is notably strict about AI assistance in manuscript writing: it must be disclosed and clearly labeled. This is a separate question from figures, but worth noting if you use AI tools for any component of a submission.
Experimental evidence figures may not use generative AI. This is the key restriction for Science authors: figures that are presented as direct evidence of experimental results (images of specimens, gels, cells, tissues, structural data) must represent what was actually observed. Generative AI may not be used to create, alter, or enhance these figures in ways that change their evidential content. This does not prevent AI-assisted noise reduction or resolution enhancement of real experimental images, provided such processing is disclosed and does not alter scientific conclusions.
The peer review standard still applies. Reviewers at Science are expected to evaluate whether figures, AI-generated or not, accurately represent the research being reported. Disclosed AI use does not insulate a figure from scientific critique.
For researchers targeting Science or its family journals, the practical takeaway is: use AI tools for conceptual and illustrative figures, disclose fully, and keep experimental evidence figures free from generative AI content.
Cell Press Policies
Cell Press, the Elsevier-owned publisher of Cell, Molecular Cell, Cell Reports, and related journals, operates under both Elsevier's broader AI policy framework and Cell Press-specific editorial guidelines.
The current Cell Press position reflects Elsevier's three-part framework:
Transparency. Authors must declare AI tool use in a dedicated section of the manuscript, typically following the acknowledgments. The declaration should specify the tool, the version or access date, and the purpose of use.
Responsibility. Authors remain fully accountable for the content of AI-generated or AI-assisted figures. Cell Press journals expect authors to be able to explain and defend any figure regardless of how it was produced.
No AI authorship. Consistent with industry norms, Cell Press journals do not permit AI tools to be listed as authors or co-authors.
Cell Press journals apply particular scrutiny to image integrity. The journals use automated image analysis tools to screen submissions for inappropriate manipulation. While these tools were developed with traditional image fraud in mind, they can also flag inconsistencies introduced by generative AI. Authors should ensure that AI-generated figures do not inadvertently create visual artifacts that trigger integrity screening.
For Cell-family journals specifically: biomedical and cellular illustration figures generated by AI are acceptable with full disclosure. The journals have published papers featuring AI-generated schematic figures in recent issues. The key is that the scientific figures must accurately represent the biology described in the paper, and their AI origin must be clearly stated.
Across Nature Publishing Group, Science, and Cell Press, the policies share a common structure with meaningful variations in emphasis.
Journal policy comparison

| Policy Element | Nature Publishing Group | Science (AAAS) | Cell Press |
|---|---|---|---|
| AI-generated illustrative figures | Permitted with disclosure | Permitted with disclosure | Permitted with disclosure |
| AI enhancement of experimental images | Prohibited if it alters conclusions | Prohibited if it alters scientific content | Prohibited; image integrity screening applied |
| Disclosure location | Methods section | Methods or Acknowledgments | Dedicated AI declaration section |
| AI authorship | Not permitted | Not permitted | Not permitted |
| Author responsibility for accuracy | Explicitly stated | Explicitly stated | Explicitly stated |
| Generative AI for manuscript text | Disclosure required | Disclosure required; stricter scrutiny | Disclosure required |
Beyond these three publishers, several other major journals have taken similar positions. PLOS ONE permits AI-generated figures with disclosure and accuracy attestation. The Lancet family requires disclosure and prohibits AI generation of clinical images presented as evidence. eLife has a permissive but disclosure-first policy.
The emerging industry norm is clear: disclosure and accuracy are the universal requirements. No major journal has adopted a blanket ban on AI-generated illustrative figures, though several have restricted AI generation specifically in the context of figures presented as direct experimental evidence. For a practical comparison of AI-native vs. traditional scientific illustration tools that meet these policy requirements, see our 2026 illustration tools guide.
Smaller society journals and specialty publications are less standardized. Some have explicit AI policies, many do not. When a policy is absent, the default expectation at most journals is that images represent what they purport to represent, which means using AI to create content presented as experimental evidence remains problematic regardless of whether a policy explicitly addresses it.
The Red Line: What Is Still Banned
Despite the permissive trend toward disclosure-based acceptance, several categories of AI use in figures remain universally prohibited across responsible publishing.
Fabrication of experimental evidence. Using a generative AI tool to create an image (a gel band, a microscopy field, a histological section) that is then presented as an actual experimental result is scientific fraud. It does not matter that the image was AI-generated rather than hand-drawn; the deception is identical. This is not a gray area. The retraction and potential misconduct consequences are severe.
Undisclosed AI use. Using an AI tool to generate figures and not disclosing it is a policy violation at every major publisher that has issued AI guidelines. Even if the figures themselves are scientifically accurate, failing to disclose violates the transparency requirements that underpin research integrity.
Misleading data visualization. Using AI (or any tool) to create visual representations that misrepresent statistical results, exaggerate effect sizes, or present data in ways that lead to incorrect conclusions is prohibited. This applies whether the distortion is intentional or inadvertent β researchers are responsible for ensuring their visualizations accurately represent their data.
Enhancement of experimental images that changes scientific conclusions. Noise reduction, contrast adjustment, and resolution enhancement of experimental images are acceptable when applied uniformly and disclosed. Using AI to selectively enhance specific features, remove inconvenient artifacts, or make ambiguous results appear clearer than they are crosses into manipulation.
Generation of figures for fraudulent purposes. Using AI to generate figures for fabricated studies, duplicate publications, or papers where the described experiments were not conducted is covered by existing misconduct frameworks and is treated identically to traditional fabrication.
The common thread: the prohibition is on deception and misrepresentation, not on AI technology itself. Journals have largely concluded that AI is a tool, and like any tool it can be used honestly or dishonestly. The policies reflect that distinction.
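The "uniform processing" standard mentioned above can be made concrete with a small sketch. This is a pure-Python stand-in for an image (a 2D list of grayscale values), not real image-processing code: the acceptable pattern applies one transformation to every pixel, whereas selective edits that touch only a chosen region are the pattern journals treat as manipulation.

```python
# Sketch of uniform image processing: one contrast factor applied to
# every pixel. Selectively editing only some pixels to change what the
# image appears to show is the prohibited pattern.

def adjust_contrast(image, factor, midpoint=128):
    """Apply a single contrast factor uniformly to the whole image,
    clamping results to the valid 0-255 grayscale range."""
    return [
        [max(0, min(255, round(midpoint + (px - midpoint) * factor)))
         for px in row]
        for row in image
    ]

image = [[100, 150], [120, 200]]       # toy 2x2 grayscale "image"
uniform = adjust_contrast(image, 1.5)  # same rule for every pixel
```

Disclosed, uniformly applied adjustments like this are broadly acceptable; the moment the transformation depends on which feature of the image it is touching, it stops being processing and becomes manipulation.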
How to Cite AI Tools in Your Paper
Getting the disclosure right is as important as using the tools honestly. Here are practical templates for the two most common locations: the Methods section and the Acknowledgments section.
Methods Section β General Template
AI-generated figures in this manuscript were produced using [Tool Name, version/access date]. Figures [X, Y] are AI-generated conceptual illustrations. All AI-generated figures were reviewed by the authors for scientific accuracy and are intended to illustrate [mechanism/process/experimental design], not to represent experimental data.
Methods Section β Specific Example
The schematic diagram in Figure 3 was generated using SciFig (scifig.ai, accessed January 2026), an AI-based scientific illustration platform. The figure was produced from a natural language description of the pathway architecture and was reviewed by all co-authors to confirm accuracy. The figure is illustrative and does not represent experimental data.
Acknowledgments Section β Short Form
AI tools: [Tool Name] was used for figure generation. [Tool Name] was not involved in study design, data collection, analysis, or interpretation, and does not qualify for authorship.
Acknowledgments Section β Elsevier/Cell Press Format
Declaration of generative AI and AI-assisted technologies in the writing process: During the preparation of this work, the authors used [Tool Name] for figure generation. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
A few practical notes on citations:
Never list the AI tool as an author. Use it in the methods or acknowledgments as a tool, not a contributor.
Include the version number or access date when available. AI tools update frequently and the version matters for reproducibility documentation. SciFig's export metadata includes the model version and generation timestamp for this purpose.
Be specific about which figures were AI-generated. "Figures 1A and 3C were AI-generated" is better than a vague general statement.
State what the figures represent. Clarifying that they are conceptual illustrations rather than experimental evidence removes ambiguity.
Check journal-specific format requirements. Some journals have standardized the AI disclosure statement format; use their template if one is provided.
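The practices above (name the tool, pin the version and access date, list the specific figures) are easy to keep consistent if the disclosure is assembled from structured fields rather than retyped per submission. Here is a small illustrative sketch; the tool name is hypothetical and the sentence pattern follows the general template earlier in this guide, not any journal's required wording.

```python
# Sketch: build a Methods-section AI disclosure from structured metadata
# so the tool name, version, and figure list stay consistent.

def disclosure_statement(tool, version, access_date, figures, purpose):
    """Assemble a disclosure line from explicit fields."""
    figure_list = ", ".join(figures)
    return (
        f"Figures {figure_list} were generated using {tool} "
        f"(version {version}, accessed {access_date}). "
        f"They are conceptual illustrations of {purpose} and do not "
        f"represent experimental data. All AI-generated figures were "
        f"reviewed by the authors for scientific accuracy."
    )

print(disclosure_statement(
    tool="ExampleFigTool",  # hypothetical tool name
    version="2.1",
    access_date="January 2026",
    figures=["1A", "3C"],
    purpose="the signaling pathway described in Results",
))
```

Keeping these fields in one place also makes it trivial to swap in a journal's own template later, since the same metadata slots into whatever sentence format the journal prescribes.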
Journal AI policies are actively evolving. The positions described in this guide reflect publicly available guidelines as of early 2026. Before submitting any manuscript, verify the current AI policy on the specific journal's author guidelines page; do not rely on this or any secondary source as a substitute for the journal's own documentation. When in doubt, contact the editorial office directly.