Alfred Nobel University Journal of Philology  

ARTIFICIAL INTELLIGENCE (AI) USE POLICY

Purpose

This policy gives clear, implementable rules for authors, reviewers, and editors about the responsible use and disclosure of artificial intelligence (AI) and AI-assisted technologies in manuscripts submitted to this journal. It is informed by recent international publisher and professional guidance (from the International Committee of Medical Journal Editors (ICMJE), the Committee on Publication Ethics (COPE), Springer Nature, Elsevier, the American Association for the Advancement of Science (Science), and others) and is intended to support transparency, research integrity, reproducibility, and protection of confidential peer-review processes. 

1. Scope and definitions

Scope. This policy applies to all submitted content (text, figures, tables, code, datasets, multimedia, supplementary files) where authors used AI or AI-assisted technologies at any stage of the research, analysis, or writing workflow. 

Definitions. For this policy, “AI tools” and “AI-assisted technologies” include large language models (LLMs), generative image or audio models, automated coding assistants, and domain-specific AI systems used to generate, transform, summarize, translate, analyze, or check content. Examples: ChatGPT (GPT-4/5), Gemini, Claude, image generators, automated statistical/analysis pipelines. 

2. Authorship and responsibility

No AI as author. AI tools must not be listed as authors or co-authors because they cannot take responsibility, declare conflicts of interest, or be accountable for work. Humans alone must meet the journal’s authorship criteria (conception, design, analysis, drafting/revising, final approval, accountability). 

Human accountability. Authors remain fully responsible for the integrity, accuracy, originality, and ethical compliance of all submitted material — including any content generated or substantially shaped by AI. 

3. Mandatory disclosure (authors)

Authors must declare any use of AI/AI-assistance at submission and include a short, specific statement in the manuscript (e.g., in the Methods or Acknowledgements section). Minimum required details:

Tool name(s) (include vendor/model and version where known). 

What it was used for (e.g., drafting text, editing language, generating figures, extracting data, statistical modeling, code generation). 

Scope and extent (e.g., “used to revise English phrasing of Results and Methods only” or “used to generate 2 supplementary figures; underlying data were curated by authors”). 

Verification steps authors took to confirm correctness, provenance, and lack of plagiarism (e.g., human verification of references, re-analysis of AI outputs, checks for fabricated citations). 

Suggested short template to include in manuscript (editable):

AI Statement: We used [Tool name, version] to [brief description of use]. All AI-generated text/figures/analyses were reviewed and edited by the authors, who take full responsibility for the content and confirm that no AI tool meets the criteria for authorship. 

4. Research methods, data and reproducibility

— Describe AI in Methods. If AI tools were used for data analysis, modeling, or generating results (not only for drafting text), authors must describe the methods, training data (if relevant and licensable), parameters, and versions to the extent possible so analyses are reproducible. If proprietary/closed tools were used, authors should provide enough methodological detail and, where possible, code/data to allow independent verification. 

— Data provenance and synthetic data. If synthetic data or AI-augmented data were used, authors must state how synthetic data were generated, any downstream use, and deposit scripts or metadata in a public repository if ethically and legally permissible. 

5. Text, figures, and multimedia

— Text. Use of AI to improve grammar, clarity, or translation is allowed if disclosed and if authors verify content. Generating novel scholarly arguments or results via AI is permissible only if authors verify and take responsibility for accuracy and provenance. 

— Figures & multimedia. Use of AI to generate images, figures, or multimedia requires explicit disclosure. Some journals require prior editor permission for AI-generated images or multimedia; if manipulated images could mislead, editors may reject or request source files. (See journal-specific rules; some publishers disallow unlabelled AI images.) 

6. AI use by peer reviewers and editors

— Peer reviewers. Reviewers must not upload unpublished manuscript material to third-party AI services without prior permission from authors and the editor, because that may breach confidentiality. If reviewers use AI tools to aid reviewing, they should declare this to the editor. 

— Editorial use. Editors may use AI for routine checks (formatting, similarity checks, triage) but must not rely on AI for final decisions about scientific validity. If editors use AI in decision-making or to assist with content, this usage should be governed by internal guidelines and preserve confidentiality. 

7. Plagiarism, fabricated references and hallucinations

— Prohibition and checks. Authors must ensure AI-assisted text does not contain unattributed material or fabricated references. The journal will screen submissions for plagiarism, fabricated citations, and AI hallucinatory content; discovered violations may lead to rejection, correction, or retraction. Authors must confirm that no part of the submission plagiarizes published work (including AI-sourced content). 

8. Confidentiality and privacy

— Privacy risks. Authors must not provide confidential or sensitive human data, identifiable personal information, or proprietary code to third-party AI services unless appropriate data-sharing agreements, consent, and institutional approvals are in place. Manuscripts containing restricted or identifiable data must not be processed through external AI services that may retain inputs. 

9. Enforcement, corrections, and sanctions

— Initial checks. Submissions will be reviewed at intake for required AI disclosures. Missing or incomplete disclosures will prompt editorial queries and may delay processing. 

— Misconduct. Undisclosed or deceptive use of AI that results in plagiarism, fabricated data, or breaches of confidentiality will be handled per the journal’s ethical misconduct procedures (corrections, retractions, institutional notification). COPE guidance and publisher policies will inform investigations. 

10. Periodic review

Because AI technology and community norms evolve rapidly, this policy will be reviewed and updated regularly (at minimum annually) to reflect new guidance from COPE, ICMJE, Springer Nature, Elsevier, and other major publishers.
