AI Use in Education: Teach Academic Integrity by Design, Not Detection.

I have watched academic integrity policies evolve for years, and I will say this plainly. In the age of AI, trying to catch students using tools is a losing battle. I have seen detection software fail, policies confuse students, and honest learners punished for unclear rules. What works is not surveillance. What works is clarity, design, and trust.

When AI entered classrooms, many institutions reacted with fear. Ban it. Block it. Police it. But students did not stop using AI. They simply stopped talking about it. That silence is where misconduct grows. I have learned, through practice and discussion with educators, that integrity survives when we shift the focus from hiding AI use to documenting and reflecting on it.

Everything starts with clarity.

The first responsibility lies with the syllabus. If AI rules are vague, students will interpret them in their favour. If they are invisible, students will ignore them. Make expectations about AI use explicit and visible. State clearly what is acceptable and what crosses the line. Idea generation, refining search terms, and improving language are legitimate supports. Submitting AI-generated analysis as original thinking is not. Ambiguity is not neutral. It creates ethical grey zones where students stumble.

Next comes disclosure. I strongly believe AI use should be declared, not denied. A short note is enough. Something as simple as, “Used AI to summarise five abstracts and rewrote the final synthesis myself.” This mirrors what journals and funding agencies are beginning to demand. Transparency normalises ethical behaviour. It also removes the fear students feel when they use tools quietly and wonder if they will be accused later.

We must also teach students what AI is for. AI is a research assistant, not a writer. I always emphasise this distinction. Show students how to use AI to generate keywords from a research question. Show them how to compare abstracts across databases. Ask AI to surface counterarguments to a draft thesis. Use it to check clarity and grammar at the final stage. These uses strengthen thinking rather than replace it. When students see AI as support, not substitution, integrity follows naturally.

Assessment design matters even more. Thinking and writing must be separated. If language quality carries most of the marks, AI will dominate. Instead, grade problem framing, source selection, and argument structure independently from expression. AI still struggles with original reasoning and contextual judgement. By valuing these elements, you protect academic integrity without banning tools outright.

Process-based assessment is another quiet but powerful shift. Ask for search logs, prompt histories, draft versions, and short reflections. Ask students where AI helped and where it failed. This changes what you assess. You stop judging only the final output and start evaluating learning itself. From my experience, students become more reflective and more honest when they know their process matters.

Citation discipline must be taught early and repeatedly. AI can fabricate references, blend sources, and paraphrase without attribution. Students often trust it blindly. They should not. Train them to verify every citation using Google Scholar or Scopus. Make verification a habit, not a warning. Once students understand how easily errors slip in, they become more cautious and responsible.
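To make this concrete in a class or workshop, a short script can run the first pass of that verification automatically. The sketch below is a minimal illustration in Python; it assumes the free Crossref REST API (api.crossref.org) rather than Google Scholar or Scopus, which require accounts or offer no public API, and the function name crossref_matches is illustrative. Any match it returns still has to be confirmed against the actual record.

    import requests

    def crossref_matches(citation_title, rows=3):
        # Ask the public Crossref REST API for works whose bibliographic
        # metadata resembles the cited title; return (title, DOI) pairs.
        response = requests.get(
            "https://api.crossref.org/works",
            params={"query.bibliographic": citation_title, "rows": rows},
            timeout=10,
        )
        response.raise_for_status()
        items = response.json()["message"]["items"]
        return [((item.get("title") or [""])[0], item.get("DOI", "")) for item in items]

    # Example: check whether an AI-suggested reference exists at all.
    for title, doi in crossref_matches("Trends and Patterns of Artificial Intelligence Research in Libraries"):
        print(title, "->", doi)

If nothing plausible comes back, or the DOI resolves to a different paper, the citation goes straight onto the list for manual checking.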

Assignment design is the final safeguard. Generic prompts invite generic AI responses. Local data, recent events, personal reflection linked to theory, or comparison of two specific papers make shallow AI output obvious. These designs do not fight AI. They outgrow it.

Finally, say this clearly and often. Using AI without acknowledgement is misconduct. Using AI transparently and critically is a scholarly practice. Students understand rules when we state them plainly and apply them consistently.

The goal is simple. Students should learn how to think with AI, not outsource thinking to it. If we design teaching and assessment with this goal in mind, integrity does not weaken. It matures.

Ethical Use and Disclosure of Artificial Intelligence Tools in Academic Research Writing: Evidence and Guidance from Library and Information Science

Abstract:
The use of generative artificial intelligence tools in academic research writing has become widespread across disciplines, including library and information science. While these tools are increasingly employed for drafting, language refinement, and structural assistance, disclosure practices remain inconsistent. This post argues that non-disclosure of AI use poses greater ethical and reputational risks than transparent acknowledgement. Drawing on recent published evidence from library and information science journals, it shows that ethical disclosure does not hinder publication, proposes a practical checklist to guide responsible AI use, and supports the integration of AI disclosure literacy into LIS education and research practice.

Keywords:
Artificial intelligence, academic writing, research ethics, disclosure, library and information science, generative AI

Introduction:
Generative artificial intelligence tools have rapidly entered academic writing workflows. Their presence is now routine rather than exceptional. Researchers across career stages use AI-based systems to refine language, reorganise arguments, summarise notes, and support early drafting. In library and information science, a discipline grounded in information ethics and scholarly integrity, this shift raises urgent questions about responsible use and disclosure.

The central ethical challenge is not the use of AI itself, but the reluctance to acknowledge such use. A significant number of researchers employ AI tools without disclosure due to uncertainty about ethical boundaries or fear of manuscript rejection. This hesitation overlooks the greater long-term risk associated with post-publication scrutiny and potential retraction.

The Real Risk Lies After Publication:
Academic publishing has entered an era of heightened transparency and accountability. Publishers increasingly deploy detection mechanisms, reviewers are more alert to stylistic patterns associated with generative models, and post-publication review has intensified.

Retraction notices are public, permanent, and professionally damaging. They affect an author’s credibility, institutional trust, and future opportunities. In contrast, manuscript rejection is a routine academic outcome that allows revision and improvement. From both ethical and pragmatic perspectives, non-disclosure of AI use represents a higher-risk decision.

Evidence from Published Library and Information Science Research:
Concerns that disclosure leads to rejection are not supported by recent evidence. Several papers published in 2025 in reputable LIS venues carry explicit AI acknowledgements.

Del Castillo and Kelly acknowledged the use of QuillBot for grammar, syntax, and language refinement, and Google Gemini for title formulation, in a paper published in College & Research Libraries [1].

McCrary declared the use of generative AI for initial drafting and language polishing in The Journal of Academic Librarianship, while retaining full responsibility for content accuracy and originality [2].

Islam and Guangwei reported the use of ChatGPT for data visualisation support and summary drafting in SAGE Open, explicitly accepting authorial responsibility [3].

Sebastian disclosed the use of ChatGPT-4o for drafting and refining ideas in an American Library Association publication, emphasising full human control over arguments and conclusions [4].

Aljazi acknowledged the use of ChatGPT for language refinement and summarisation in Information and Knowledge Management, in accordance with journal guidelines [5].

Beyond LIS, You et al. reported the use of generative AI for language improvement in Frontiers in Digital Health, reflecting broader acceptance of transparent disclosure across disciplines [6].

These cases share common features. AI tools are named. Tasks are clearly defined. Intellectual accountability remains with the authors. Disclosure did not prevent publication.

Ethical Use Does Not Require Avoidance: Ethical engagement with AI does not require abstention. It requires boundaries. Generative AI tools are unsuitable for disciplinary judgement, methodological reasoning, and interpretive analysis. These remain human responsibilities.

AI tools perform effectively in surface-level tasks such as grammar correction, clarity improvement, and structural suggestions. Ethical violations occur when AI is used to fabricate data, invent citations, generate unverified claims, or replace scholarly reasoning. In library and information science, where trust and attribution are foundational, such misuse directly contradicts professional values.

Disclosure as Professional Safeguard: Transparent disclosure demonstrates academic integrity, aligns with journal policies, and protects authors from allegations of misconduct. Many journals now explicitly request disclosure of AI use. Where policies are unclear, transparency remains the safer course. Silence is increasingly interpreted as concealment.

Reading and Interpreting Journal Policies: Failure to consult instructions to authors is a common cause of ethical lapses. Researchers must examine journal policies carefully, focusing on ethics statements, authorship criteria, and AI-related guidance. Key questions include permitted uses, disclosure format, and placement of acknowledgements. Policy literacy is now an essential research skill.

A Practical Ethical Checklist for Researchers:
The following checklist reflects current LIS norms and publishing expectations:

  • Conduct intellectual framing and argumentation independently
  • Use AI strictly as a support tool
  • Never use AI to invent data, results, or interpretations
  • Never allow AI to fabricate citations or references
  • Verify every reference and factual claim manually
  • Limit AI use to language clarity and structural assistance
  • Review and revise all AI-assisted text
  • Retain full responsibility for originality and accuracy
  • Read and follow journal author guidelines carefully
  • Disclose AI tools, purpose, and stage of use explicitly
  • Prefer rejection over undisclosed AI use and later retraction

Writing an Effective AI Acknowledgement:
An AI acknowledgement should be concise and factual. It should name the tool, specify the task, and indicate the stage of use. It should clearly state that the author retains responsibility for the final content. The published examples cited above [1]–[5] provide effective models.
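As an illustration, a statement along the following lines, worded here hypothetically rather than quoted from any of the papers above, would meet each of those requirements: “ChatGPT (GPT-4o) was used to improve the clarity and grammar of an early draft of this article. All arguments, interpretations, and conclusions are the author’s own, and the author takes full responsibility for the accuracy and originality of the final text.”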

Implications for LIS Education and Practice:
Library and information science educators and professionals play a central role in shaping ethical research behaviour. AI literacy education must extend beyond tool operation to include disclosure norms, policy interpretation, and risk awareness. Embedding these issues into research methods courses and scholarly communication training will strengthen ethical practice across the discipline.

Conclusion: Generative AI tools are now embedded in academic writing workflows. The ethical question is no longer whether researchers use them, but whether they do so transparently and responsibly. Disclosure protects scholarly credibility. Concealment exposes researchers to long-term risk.

References:

[1] M. S. Del Castillo and H. Y. Kelly, “Can AI Become an Information Literacy Ally? A Survey of Library Instructor Approaches to Teaching ChatGPT,” College & Research Libraries, vol. 86, no. 2, 2025.
Available: https://crl.acrl.org/index.php/crl/article/view/26938/34834

[2] Q. D. McCrary, “Are we ghosts in the machine? AI, agency, and the future of libraries,” The Journal of Academic Librarianship, vol. 51, no. 3, 2025.
Available: https://www.sciencedirect.com/science/article/pii/S0099133325001776

[3] M. N. Islam and H. Guangwei, “Trends and Patterns of Artificial Intelligence Research in Libraries,” SAGE Open, vol. 15, no. 1, 2025.
Available: https://journals.sagepub.com/doi/10.1177/21582440251327528

[4] J. K. Sebastian, “Reframing Information-Seeking in the Age of Generative AI,” American Library Association, 2025.
Available: https://www.ala.org/sites/default/files/2025-03/ReframingInformation-SeekingintheAgeofGenerativeAI.pdf

[5] Y. S. Aljazi, “The Role of Artificial Intelligence in Library and Information Science: Innovations, Challenges, and Future Prospects,” Information and Knowledge Management, vol. 15, no. 2, 2025.
Available: https://www.iiste.org/Journals/index.php/IKM/article/download/63557/65692

[6] C. You et al., “Alter egos alter engagement: perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing,” Frontiers in Digital Health, vol. 7, 2025.
Available: https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1655860/full