Current AI Disclosure Frameworks: By the Publisher... For the Publisher
The rapid rise of artificial intelligence (AI) in writing has forced scientific publishers to create new rules for how authors should disclose AI use. While these rules aim for transparency, a close look shows that they often focus more on protecting publishers' interests than on guiding and supporting authors. This creates a confusing, and sometimes unfair, situation for researchers trying to use AI honestly. In contrast, frameworks like the Human-in-the-Loop-O-Meter (HILOM 7.0) put the author's ethical responsibility and due diligence at the center.
Publisher Policies: Protecting Their Own Interests First
When generative AI tools like ChatGPT became popular, publishers moved quickly to issue policies. These policies largely converge on two core rules: AI cannot be an author, and humans remain responsible for what is published (COPE; ICMJE). Beyond these basics, however, the rules for how authors should disclose AI use vary widely, and the variation often reveals what matters most to the publisher.
Protecting Reputation and Integrity: Publishers' primary goal is to protect their brand and the trustworthiness of the papers they publish. They worry that unmanaged AI use could lead to fabricated data, plagiarism, or errors, all of which would damage their reputation. Their policies are therefore oriented toward preventing these harms.
Managing Legal Risks: Publishers also have legal concerns, especially about copyright and who owns AI-generated content. They want to make sure they aren't sued for publishing AI-created material that might infringe on someone else's work. This leads to rules that protect the publisher from legal problems.
Controlling the "AI Arms Race": Interestingly, while publishers set rules for authors' use of AI, they are investing heavily in AI themselves. They license large corpora of author-written articles to train AI models, and they develop their own AI tools for screening papers and finding reviewers. This creates a "publisher's paradox": publishers act as both the police of AI and a major user of it. Their policies may therefore also be designed to help them control the emerging market for AI in research.
The Author's Burden: Navigating Ambiguity and Stigma
Because publisher policies focus on their own needs, authors are often left in a difficult position:
Vague Rules: Many policies are unclear about when AI use needs to be disclosed. For example, some say AI can "improve language" but not "generate core content," but the line between these two is blurry. This leaves authors guessing what's allowed and what's not.
"Declaration Dilemma": Publishers often just ask for a simple statement like "AI was used." But this doesn't explain how AI was used or if the human author checked it carefully. This puts the burden of ethical judgment entirely on the author, without giving them clear guidance.
Fear of Stigma: Authors worry that disclosing AI use might make their work seem less original or less valuable. This fear can lead authors to hide their AI use, which works against the goal of transparency.
HILOM: Putting the Author First
In contrast to publisher-centric policies, the HILOM 7.0 framework is designed to empower authors. Its main goal is to help authors meet their own ethical standards and show their commitment to honesty.
Author-Driven Due Diligence: HILOM acts as a personal checklist. It guides authors through a careful examination of their AI use across seven dimensions (such as where the ideas originated, how much of the text the AI wrote, and how much the human edited and verified it), helping them understand their own process in depth.
Trust-Based System: HILOM is built on trust. It assumes authors will be honest about their AI use because no one can truly "get inside the mind of another" to perfectly check their creative process. Authors self-report their HILOM score, and readers are asked to trust their word.
Beyond Simple Disclosure: HILOM goes much further than just saying "AI was used." It helps authors create a detailed story of their human-AI collaboration, explaining how and why AI was used, and highlighting the human's unique intellectual work. This is called "Radical Disclosure".
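To make the contrast with a bare "AI was used" statement concrete, the multi-dimensional self-report described above can be sketched as a simple data structure. This is an illustrative sketch only: HILOM 7.0's actual dimension names, scoring scale, and remaining four dimensions are not specified in this text, so the three fields below come from the examples given, and the 0-4 scale and all names are assumptions.

```python
# Hypothetical sketch of a HILOM-style self-report. The three dimensions
# mirror the examples in the text above; the 0-4 scale, field names, and
# summary format are illustrative assumptions, not the official framework.
from dataclasses import dataclass

SCALE = range(0, 5)  # assumed per-dimension score range

@dataclass
class HILOMReport:
    """A self-reported, per-dimension record of human-AI collaboration."""
    idea_origin: int     # where the ideas came from
    ai_drafting: int     # how much of the text the AI wrote
    human_editing: int   # how much the human edited and verified
    narrative: str = ""  # free-text account ("Radical Disclosure")

    def __post_init__(self):
        # Trust-based: values are self-reported, but kept within the scale.
        for name in ("idea_origin", "ai_drafting", "human_editing"):
            if getattr(self, name) not in SCALE:
                raise ValueError(f"{name} must be in {list(SCALE)}")

    def summary(self) -> str:
        return (f"HILOM (partial): ideas={self.idea_origin}, "
                f"drafting={self.ai_drafting}, editing={self.human_editing}")
```

An author might write `HILOMReport(1, 3, 0, "AI drafted the methods section; I rewrote and verified every claim.")`, pairing the scores with the narrative that explains them; the point is that the disclosure carries structure and a story, not a single yes/no flag.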
Other Disclosure Models: A Growing Field
The need for better AI disclosure is widely recognized, and HILOM is part of a growing trend. Other models also aim for more detailed transparency:
Artificial Intelligence Disclosure (AID) Framework: Inspired by CRediT (which lists human author contributions), AID proposes a structured checklist for AI's specific roles in research (Weaver, 2023).
IBM AI Attribution Toolkit: Provides tools and guidelines for attributing AI's role in content creation, focusing on transparency and accountability.
Journal-Specific Policies: Many journals have their own detailed rules, such as JAMA's requirement that authors report the specific prompts used with AI tools.
While these models are valuable, HILOM 7.0 stands out by combining a multi-dimensional approach with a strong focus on the author's internal ethical process and the philosophy of trust. It aims to give authors a practical way to be truly transparent and responsible in the AI age.
References:
Authors Guild. (2024). AI Best Practices for Authors.
COPE (Committee on Publication Ethics). Authorship and AI tools.
ICMJE (International Committee of Medical Journal Editors). (2023). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals.
IBM. AI Attribution Toolkit.
JAMA Network Instructions for Authors: Reporting the Use of Artificial Intelligence, Language Models, Large Language Models, or Similar Technologies.
PLOS. (2023). Generative AI and AI-assisted technologies in PLOS submissions. Retrieved from https://plos.org/resource/generative-ai-policy/
Weaver, K. D. (2023). The Artificial Intelligence Disclosure (AID) Framework. Scholarly Publishing Collective.

