AI Use in Education: Teach Academic Integrity by Design, Not Detection


I have watched academic integrity policies evolve for years, and I will say this plainly. In the age of AI, trying to catch students using tools is a losing battle. I have seen detection software fail, policies confuse students, and honest learners punished for unclear rules. What works is not surveillance. What works is clarity, design, and trust.

When AI entered classrooms, many institutions reacted with fear. Ban it. Block it. Police it. But students did not stop using AI. They simply stopped talking about it. That silence is where misconduct grows. I have learned, through practice and discussion with educators, that integrity survives when we shift the focus from hiding AI use to documenting and reflecting on it.

Everything starts with clarity.

The first responsibility lies with the syllabus. If AI rules are vague, students will interpret them in their favour. If they are invisible, students will ignore them. You need to make AI use explicit and visible. State clearly what is acceptable and what crosses the line. Idea generation, refining search terms, and improving language are legitimate supports. Submitting AI-generated analysis as original thinking is not. Ambiguity is not neutral. It creates ethical grey zones where students stumble.

Next comes disclosure. I strongly believe AI use should be declared, not denied. A short note is enough. Something as simple as, “Used AI to summarise five abstracts and rewrote the final synthesis myself.” This mirrors what journals and funding agencies are beginning to demand. Transparency normalises ethical behaviour. It also removes the fear students feel when they use tools quietly and wonder if they will be accused later.

We must also teach students what AI is for. AI is a research assistant, not a writer. I always emphasise this distinction. Show students how to use AI to generate keywords from a research question. Show them how to compare abstracts across databases. Ask AI to surface counterarguments to a draft thesis. Use it to check clarity and grammar at the final stage. These uses strengthen thinking rather than replace it. When students see AI as support, not substitution, integrity follows naturally.

Assessment design matters even more. Thinking and writing must be separated. If language quality carries most of the marks, AI will dominate. Instead, grade problem framing, source selection, and argument structure independently from expression. AI still struggles with original reasoning and contextual judgement. By valuing these elements, you protect academic integrity without banning tools outright.

Process-based assessment is another quiet but powerful shift. Ask for search logs, prompt histories, draft versions, and short reflections. Ask students where AI helped and where it failed. This changes what you assess. You stop judging only the final output and start evaluating learning itself. From my experience, students become more reflective and more honest when they know their process matters.

Citation discipline must be taught early and repeatedly. AI can fabricate references, blend sources, and paraphrase without attribution. Students often trust it blindly. They should not. Train them to verify every citation using Google Scholar or Scopus. Make verification a habit, not a warning. Once students understand how easily errors slip in, they become more cautious and responsible.
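The verification habit can also be partly scripted. As a minimal illustration (not a substitute for checking the source itself), the sketch below does two things: a purely syntactic check that a string is shaped like a DOI, and construction of a lookup URL for Crossref's public REST API, which supports free-text bibliographic queries. The regex and the helper names are my own illustrative choices, not part of any standard tool.

```python
import re
import urllib.parse

# Syntactic check only: a string shaped like a DOI can still point nowhere,
# so resolving it (or searching Crossref) is the real verification step.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string has the general shape of a DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))

def crossref_lookup_url(citation_text: str) -> str:
    """Build a Crossref bibliographic-search URL a student can open and inspect."""
    query = urllib.parse.quote(citation_text)
    return f"https://api.crossref.org/works?query.bibliographic={query}&rows=3"

print(looks_like_doi("10.1177/21582440251327528"))  # True
print(looks_like_doi("not-a-doi"))                  # False
```

If a reference fails both the shape check and a Crossref (or Google Scholar/Scopus) search, treat it as suspect until the student can produce the actual source.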

Assignment design is the final safeguard. Generic prompts invite generic AI responses. Local data, recent events, personal reflection linked to theory, or comparison of two specific papers make shallow AI output obvious. These designs do not fight AI. They outgrow it.

And then, you must say this clearly and consistently. Using AI without acknowledgement is misconduct. Using AI transparently and critically is a scholarly practice. Students understand rules when we speak plainly and apply them consistently.

The goal is simple. Students should learn how to think with AI, not outsource thinking to it. If we design teaching and assessment with this goal in mind, integrity does not weaken. It matures.

Ethical Use and Disclosure of Artificial Intelligence Tools in Academic Research Writing: Evidence and Guidance from Library and Information Science

Abstract:
The use of generative artificial intelligence tools in academic research writing has become widespread across disciplines, including library and information science. While these tools are increasingly employed for drafting, language refinement, and structural assistance, disclosure practices remain inconsistent. Non-disclosure of AI use poses greater ethical and reputational risks than transparent acknowledgement. Drawing on recent published evidence from library and information science journals, this post demonstrates that ethical disclosure does not hinder publication. Further, it proposes a practical checklist to guide responsible AI use and supports the integration of AI disclosure literacy into LIS education and research practice.

Keywords:
Artificial intelligence, academic writing, research ethics, disclosure, library and information science, generative AI

Introduction:
Generative artificial intelligence tools have rapidly entered academic writing workflows. Their presence is now routine rather than exceptional. Researchers across career stages use AI-based systems to refine language, reorganise arguments, summarise notes, and support early drafting. In library and information science, a discipline grounded in information ethics and scholarly integrity, this shift raises urgent questions about responsible use and disclosure.

The central ethical challenge is not the use of AI itself, but the reluctance to acknowledge such use. A significant number of researchers employ AI tools without disclosure due to uncertainty about ethical boundaries or fear of manuscript rejection. This hesitation overlooks the greater long-term risk associated with post-publication scrutiny and potential retraction.

The Real Risk Lies After Publication:
Academic publishing has entered an era of heightened transparency and accountability. Publishers increasingly deploy detection mechanisms, reviewers are more alert to stylistic patterns associated with generative models, and post-publication review has intensified.

Retraction notices are public, permanent, and professionally damaging. They affect an author’s credibility, institutional trust, and future opportunities. In contrast, manuscript rejection is a routine academic outcome that allows revision and improvement. From both ethical and pragmatic perspectives, non-disclosure of AI use represents a higher-risk decision.

Evidence from Published Library and Information Science Research:
Concerns that disclosure leads to rejection are not supported by recent evidence. Several 2025 examples demonstrate transparent AI acknowledgement in reputable LIS publications.

Del Castillo and Kelly acknowledged the use of QuillBot for grammar, syntax, and language refinement, and Google Gemini for title formulation, in a paper published in College and Research Libraries [1].

McCrary declared the use of generative AI for initial drafting and language polishing in The Journal of Academic Librarianship, while retaining full responsibility for content accuracy and originality [2].

Islam and Guangwei reported the use of ChatGPT for data visualisation support and summary drafting in SAGE Open, explicitly accepting authorial responsibility [3].

Sebastian disclosed the use of ChatGPT-4o for drafting and refining ideas in an American Library Association publication, emphasising full human control over arguments and conclusions [4].

Aljazi acknowledged the use of ChatGPT for language refinement and summarisation in Information and Knowledge Management, in accordance with journal guidelines [5].

Beyond LIS, You et al. reported the use of generative AI for language improvement in Frontiers in Digital Health, reflecting broader acceptance of transparent disclosure across disciplines [6].

These cases share common features. AI tools are named. Tasks are clearly defined. Intellectual accountability remains with the authors. Disclosure did not prevent publication.

Ethical Use Does Not Require Avoidance: Ethical engagement with AI does not require abstention. It requires boundaries. Generative AI tools are unsuitable for disciplinary judgement, methodological reasoning, and interpretive analysis. These remain human responsibilities.

AI tools perform effectively in surface-level tasks such as grammar correction, clarity improvement, and structural suggestions. Ethical violations occur when AI is used to fabricate data, invent citations, generate unverified claims, or replace scholarly reasoning. In library and information science, where trust and attribution are foundational, such misuse directly contradicts professional values.

Disclosure as Professional Safeguard: Transparent disclosure demonstrates academic integrity, aligns with journal policies, and protects authors from allegations of misconduct. Many journals now explicitly request disclosure of AI use. Where policies are unclear, transparency remains the safer course. Silence is increasingly interpreted as concealment.

Reading and Interpreting Journal Policies: Failure to consult instructions to authors is a common cause of ethical lapses. Researchers must examine journal policies carefully, focusing on ethics statements, authorship criteria, and AI-related guidance. Key questions include permitted uses, disclosure format, and placement of acknowledgements. Policy literacy is now an essential research skill.

A Practical Ethical Checklist for Researchers:
The following checklist reflects current LIS norms and publishing expectations:

  • Conduct intellectual framing and argumentation independently
  • Use AI strictly as a support tool
  • Never use AI to invent data, results, or interpretations
  • Never allow AI to fabricate citations or references
  • Verify every reference and factual claim manually
  • Limit AI use to language clarity and structural assistance
  • Review and revise all AI-assisted text
  • Retain full responsibility for originality and accuracy
  • Read and follow journal author guidelines carefully
  • Disclose AI tools, purpose, and stage of use explicitly
  • Prefer rejection over undisclosed AI use and later retraction

Writing an Effective AI Acknowledgement:
An AI acknowledgement should be concise and factual. It should name the tool, specify the task, and indicate the stage of use. It should clearly state that the author retains responsibility for the final content. The published examples cited above [1]–[5] provide effective models.
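Following that structure, a generic fill-in template might read as below. The bracketed placeholders are to be replaced by the author; this wording is illustrative and is not drawn from any of the cited papers or from any journal's required text.

```text
Acknowledgement of AI Use
The author used [tool name and version] during [stage of the work, e.g.
early drafting / final language revision] to [specific task, e.g. improve
the grammar and clarity of the manuscript text]. All AI-assisted output was
reviewed and revised by the author, who retains full responsibility for the
originality, accuracy, and integrity of the final content.
```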

Implications for LIS Education and Practice:

Library and information science educators and professionals play a central role in shaping ethical research behaviour. AI literacy education must extend beyond tool operation to include disclosure norms, policy interpretation, and risk awareness. Embedding these issues into research methods courses and scholarly communication training will strengthen ethical practice across the discipline.

Conclusion: Generative AI tools are now embedded in academic writing workflows. The ethical question is no longer whether researchers use them, but whether they do so transparently and responsibly. Disclosure protects scholarly credibility. Concealment exposes researchers to long-term risk.

References:

[1] M. S. Del Castillo and H. Y. Kelly, “Can AI Become an Information Literacy Ally? A Survey of Library Instructor Approaches to Teaching ChatGPT,” College & Research Libraries, vol. 86, no. 2, 2025.
Available: https://crl.acrl.org/index.php/crl/article/view/26938/34834

[2] Q. D. McCrary, “Are we ghosts in the machine? AI, agency, and the future of libraries,” The Journal of Academic Librarianship, vol. 51, no. 3, 2025.
Available: https://www.sciencedirect.com/science/article/pii/S0099133325001776

[3] M. N. Islam and H. Guangwei, “Trends and Patterns of Artificial Intelligence Research in Libraries,” SAGE Open, vol. 15, no. 1, 2025.
Available: https://journals.sagepub.com/doi/10.1177/21582440251327528

[4] J. K. Sebastian, “Reframing Information-Seeking in the Age of Generative AI,” American Library Association, 2025.
Available: https://www.ala.org/sites/default/files/2025-03/ReframingInformation-SeekingintheAgeofGenerativeAI.pdf

[5] Y. S. Aljazi, “The Role of Artificial Intelligence in Library and Information Science: Innovations, Challenges, and Future Prospects,” Information and Knowledge Management, vol. 15, no. 2, 2025.
Available: https://www.iiste.org/Journals/index.php/IKM/article/download/63557/65692

[6] C. You et al., “Alter egos alter engagement: perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing,” Frontiers in Digital Health, vol. 7, 2025.
Available: https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1655860/full

Why AI Needs Librarians More Than Ever

One fine morning, a young lady called me. Her voice carried anxiety. I assumed it was the usual exam pressure. I told her honestly that I was not a psychologist and might not help her deal with anxiety. She stopped me immediately.

“No sir,” she said. “It is not about the exam.”

She told me she was in her final year of Library and Information Science. She had been watching my videos where I explain how artificial intelligence makes a researcher’s life easier. Faster discovery. Instant summaries. Easy access to information. Then she asked a question that stayed with me long after the call ended.

“If researchers get information so easily,” she asked, “who will come to libraries? And if nobody comes, who will hire librarians like me?”

That single question captures the fear many students, early-career professionals, and even senior librarians are quietly carrying today. It deserves a clear, practical answer.

To understand this properly, we need to slow down and separate hype from reality.

Artificial intelligence is powerful. It can summarise books, answer questions, and even draft research papers. It speaks confidently. That confidence often misleads us into believing the answers are correct. This brings us to the first and most important lesson.

AI gives answers.
It does not give judgement.

An AI system does not know whether information is biased, incomplete, outdated, or ethically problematic. It predicts text based on patterns in data. If the output sounds fluent, the system considers its job done. Truth, context, and consequence are not part of its thinking.

Judgement still belongs to humans. And judgement has always been at the core of librarianship.

This leads to the second lesson, one many people underestimate.

Traditional library skills are not outdated.
They are AI-era skills.

Take cataloguing. Many students see it as mechanical and irrelevant. In reality, cataloguing is structured thinking. It is about describing information so others can find it, understand it, and trust it. Today, AI systems depend on exactly this kind of structure.

AI models need clear documentation.
They need clean metadata.
They need transparency about data sources and limitations.

Without these, AI becomes a black box. Librarians have been preventing black boxes for decades.

The same applies to information retrieval. Long before AI existed, librarians taught users how to search effectively, refine queries, evaluate sources, and understand context. Modern AI search works well only when someone understands relevance and authority. That skill has not disappeared. It has become more valuable.

Then there is ethics.

Libraries have always stood for access, equity, privacy, and intellectual freedom. These are not optional values in the AI age. They are essential safeguards. AI systems amplify bias, exclude voices, and compromise privacy if left unchecked. Librarians already know how to question systems, not worship them.

This is why an important shift is taking place.

Librarians are no longer only users of AI.
They are becoming the human infrastructure behind AI.

They ensure systems are transparent.
They ensure systems are fair.
They ensure systems serve people, not mislead them.

This is not a future scenario. It is already happening.

A 2025 Clarivate report shows that 67 percent of libraries are already exploring or actively using AI. Libraries now operate in a research ecosystem where AI tools scan thousands of papers, extract data, generate cited answers, and map research connections visually.

These tools save time. They also confuse users. Researchers often do not know where answers come from, what was excluded, or what assumptions were made. Someone must explain this clearly.

That responsibility naturally falls on librarians.

Behind the scenes, AI is also reshaping library operations. Metadata creation, cataloguing, and collection management are increasingly automated. A system can generate records. A model can catalogue a book from an image. This does not remove librarians from the system. It removes repetitive labour.

What replaces it is higher-value work.

Advanced research support.
Teaching AI and information literacy.
Community programmes.
Policy guidance and ethical review.

Another fear needs addressing.

Many people assume AI will reduce the importance of libraries. In practice, it often expands access.

In India, mobile AI labs travel to remote villages. They do not replace libraries. They work alongside traditional village libraries. Technology moves, but trust remains local. Libraries become bridges between advanced tools and real communities.

At the same time, we must speak honestly about AI’s weaknesses.

One term everyone must understand is AI hallucination. This occurs when a system produces fluent but false information. There is no intent to deceive. Accuracy is sacrificed for smooth language.

The consequences are serious. Researchers have wasted hours chasing references that never existed, created entirely by AI. Proving that a source does not exist takes time and energy away from meaningful work. This feeds what many experts now call the slop problem, where low-quality AI content floods the internet and academic publishing. Trust erodes. Reviewers burn out. Good research gets buried.

So the practical question becomes unavoidable.

Why does AI still need librarians?

Because someone must teach critical evaluation.
Because someone must audit bias.
Because someone must protect privacy.
Because someone must identify fake citations.
Because someone must uphold intellectual freedom.

AI does not understand these responsibilities. Librarians do.

This brings us to the transformation of the profession.

The librarian is no longer a gatekeeper of information.
The librarian is a supervisor of AI systems.

The librarian is no longer only a reference desk expert.
The librarian is an AI literacy educator.

The librarian is no longer only a collection manager.
The librarian is an ethical evaluator of everyday tools.

The most accurate description of this role is information architect. Someone who designs, audits, and oversees how knowledge is created, accessed, and trusted.

This transformation requires investment. Not only in technology, but in people. The AI-ready workforce will be built, not bought. It will emerge through reskilling, confidence building, and empowering professionals who already understand information deeply.

When I think back to that anxious student, I no longer see a profession in danger. I see a profession at a turning point.

AI delivers answers faster than ever.
But society still needs someone to teach how to question those answers.

That responsibility has always belonged to librarians.

And it still does.

Click and Catalogue Books: Your AI-Powered Library Cataloguing Assistant

Artificial Intelligence is transforming every profession, and librarianship is no exception. With Custom GPTs in ChatGPT, you can now create specialized AI assistants that perform targeted professional tasks. A Custom GPT is not a generic chatbot—it’s a tuned version of ChatGPT designed with specific instructions, reference data, and workflows to carry out specialized jobs efficiently.

I’ve built one such assistant, called Click and Catalogue Books, specifically for librarians and cataloguers. It automates the complete process of book cataloguing—from classification to MARC record generation—by using the power of AI.

What Makes Click and Catalogue Books Unique

This Custom GPT replicates the intellectual process of a professional cataloguer in seconds. Here’s what it does step by step:

  • Identifies bibliographic data from photos of the Title page and Verso page.
  • Classifies the book using the Dewey Decimal Classification (DDC) system. It analyses the subject, determines the correct class number, and provides it with precision.
  • Generates a Cutter number to represent the main entry (usually the author).
  • Synthesizes the call number by combining the DDC class number and the Cutter number—an operation that typically takes a trained cataloguer several minutes. Here, AI completes it instantly.
  • Assigns subject headings based on the Sears List of Subject Headings, ensuring standardization and consistency in subject access.
  • Displays metadata in AACR II format, including author, title, edition, publication details, physical description, and subject entries.
  • Generates a complete MARC record, ready for download and direct upload to your Library OPAC.
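The call-number step in this workflow is, at its core, string synthesis once the DDC class number and the Cutter number are known. The sketch below illustrates the idea with a deliberately tiny, hypothetical Cutter table; real Cutter-Sanborn tables map thousands of name prefixes to numbers, and the GPT's internal logic is not published, so this is a simplified model of the operation, not the tool's actual code.

```python
# Tiny hypothetical excerpt of a Cutter-style table (illustrative only).
CUTTER_TABLE = {
    "Ran": "R185",
    "Sen": "S474",
    "Tag": "T128",
}

def cutter_number(author_surname: str) -> str:
    """Find the longest table prefix matching the surname; fall back to the initial."""
    for length in range(len(author_surname), 0, -1):
        prefix = author_surname[:length].capitalize()
        if prefix in CUTTER_TABLE:
            return CUTTER_TABLE[prefix]
    return author_surname[:1].upper()

def call_number(ddc: str, author_surname: str) -> str:
    """Synthesize a call number by combining the DDC class and Cutter number."""
    return f"{ddc} {cutter_number(author_surname)}"

print(call_number("025.3", "Ranganathan"))  # 025.3 R185
```

The point of the sketch is that once classification and main-entry analysis are done, the synthesis itself is mechanical, which is exactly why it is a natural candidate for automation.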

What once took hours of manual analysis and data entry is now handled in seconds with remarkable accuracy.

Traditional cataloguing is a time-consuming process that requires specialized knowledge of AACR II, DDC, Sears List, and MARC standards. Many small or rural libraries lack trained cataloguers or cannot afford expensive automation software.

Click and Catalogue Books bridges this gap by providing:

  • Instant cataloguing from mobile devices
  • Reduced cataloguing backlog for new acquisitions
  • Accurate and standardized metadata
  • Interoperability with OPAC and library management systems
  • Support for non-technical staff in rural and small institutional setups

The GPT acts like a virtual cataloguer—fast, reliable, and accessible from anywhere.

How to Use Click and Catalogue Books

  1. Open the ChatGPT app on your mobile phone.
  2. Tap Explore GPTs and select Click and Catalogue Books or go directly to:
    https://chatgpt.com/g/g-6909a9715c808191862570c20599a968-click-and-catalogue-books
  3. Take clear photos of the Title page and Verso page of the book.
  4. Submit them to the GPT.
  5. In a few seconds, you’ll receive:
    • DDC class number
    • Cutter number
    • Synthesized call number
    • Standard subject headings (Sears)
    • AACR II metadata display
    • Complete MARC record ready for download

You can then download the MARC file and upload it directly into your Library OPAC or cataloguing module.

A Step Toward AI-Integrated Librarianship

This Custom GPT is more than a tool—it’s a practical example of how AI can assist librarians in core professional tasks. It merges cataloguing standards, bibliographic intelligence, and natural language understanding into one seamless workflow.

Click and Catalogue Books shows that cataloguing no longer has to be a slow, manual process. AI now performs hours of intellectual work in seconds, with consistency and accuracy.

AI and Libraries in October 2025: Key Developments, Impacts, and Trends

When I look back at how libraries have evolved through 2025, it feels as if artificial intelligence has quietly rewritten the script of librarianship. The year began with cautious experimentation, but by October, AI had become deeply embedded in daily operations, from cataloging to digital preservation [1].

Libraries today are not merely using AI; they are living with it. Academic and public institutions alike are automating repetitive workflows, freeing librarians to engage more deeply with research and pedagogy. AI chatbots now answer common queries instantly, while smart systems recommend books based on nuanced user behavior [2], [3]. Some libraries even deploy small robots to navigate aisles, performing inventory or retrieval tasks with quiet precision [2]. Behind these visible changes lies something subtler—the shift toward algorithmic decision-making in information services, where metadata creation, classification, and even preservation strategies are driven by learning models [4]. 

Of course, all this progress comes with questions. As one recent SAGE report revealed, only 7% of academic libraries use AI tools regularly, even though over 60% are exploring adoption strategies [5]. This gap reflects both hesitation and hunger, which is why frameworks like ACRL’s ‘AI Competencies for Academic Library Workers’ [6] are so timely. They emphasize three things: understanding the logic of AI systems, applying them ethically, and translating their potential into academic value. No wonder librarians are now seen as mediators between human inquiry and algorithmic intelligence [7].

What’s fascinating is how fast generative AI has entered the scholarly mainstream. Tools like ChatGPT, Gemini, and Copilot are no longer novelties—they are part of everyday academic life [1]. I’ve seen libraries incorporate AI literacy into information literacy courses, teaching students not just how to ‘use’ AI but how to ‘question’ it. The University of Michigan’s AI pedagogy model and Europe’s LIBRA.I. network exemplify this shift toward guided experimentation [8]. In parallel, the idea of ‘machine-readable scholarship’ is emerging, in which AI can interpret and link research dynamically [9].

This convergence of AI, libraries, and academia reminds me that technology alone doesn’t define progress; our response to it does. By collaborating with researchers and technologists, libraries are helping shape the ethical contours of AI use. Whether it’s enabling responsible data governance or supporting cross-disciplinary projects [10], the librarian’s role is evolving from custodian to catalyst.

In the end, AI hasn’t replaced the librarian; rather, it has reimagined the profession. The library remains, as ever, a place of trust, but now it hums with the quiet intelligence of algorithms working behind the scenes. As we move toward 2026, the task is not just to deploy AI, but to ensure that it continues to serve the human spirit of curiosity and learning.



References:

[1] Liblime, “How Libraries Are Leading the AI Revolution,” Oct. 2025. Available: https://liblime.com/2025/10/04/how-libraries-are-leading-the-ai-revolution/ 

[2] IJSAT, “Adoption of Artificial Intelligence in Academic Libraries in African Universities: A Scoping Review,” Sep. 2025. Available: https://www.ijsat.org/research-paper.php?id=8003 

[3] JST, “Survey to Measure the Effectiveness of Utilizing Artificial Intelligence and Data Analysis in Improving Knowledge Management in Omani Information Institutions and Libraries,” Apr. 2025. Available: https://journals.ust.edu/index.php/JST/article/view/2822 

[4] JKG, “Digital Preservation Strategies in Academic Libraries: Ensuring Long-Term Access to Scholarly Resources,” Apr. 2025. Available: https://jkg.ub.ac.id/index.php/jkg/article/view/31 

[5] SAGE Publishing, “New Technology from Sage Report Explores Librarian Leadership in the Age of AI,” May 2025. Available: https://www.sagepub.com/explore-our-content/press-office/press-releases/2025/05/20/new-technology-from-sage-report-explores-librarian-leadership-in-the-age-of-ai 

[6] ALA, “2025-03-05 Draft: AI Competencies for Academic Library Workers,” Mar. 2025. Available: https://www.ala.org/sites/default/files/2025-03/AI_Competencies_Draft.pdf 

[7] SAGE Journals, “Exploring the utilization of generative AI by librarians in higher education across the Gulf Cooperation Council (GCC) countries: Trends in adoption, innovative applications, and emerging challenges,” Oct. 2025. Available: https://journals.sagepub.com/doi/10.1177/09610006251372630 

[8] Emerald Publishing, “Technology trends for libraries in the AI era,” 2025. Available: https://www.emerald.com/lhtn/article/42/2/6/1268684/Technology-trends-for-libraries-in-the-AI-era 

[9] Hybridhorizons, “How AI Will Transform Libraries & Librarianship 2025-2035?,” Mar. 2025. Available: https://hybridhorizons.substack.com/p/how-ai-will-transform-libraries-and 

[10] WebJunction, “What’s on the horizon for AI and public libraries?,” Oct. 2025. Available: https://www.webjunction.org/news/webjunction/public-libraries-ai-future.html

How Libraries Are Quietly Redefining AI

Over the last few months, I’ve been watching with curiosity how libraries are quietly—but decisively—reshaping their relationship with artificial intelligence. It’s no longer just about adopting a new tool; it’s about redefining our professional DNA.

It began, perhaps fittingly, with the books themselves. Leading libraries—Harvard, Boston Public, and even Oxford’s Bodleian—have started opening up massive digitized collections for AI training [1]. These aren’t copyrighted bestsellers but millions of public domain works spanning hundreds of languages. The idea is simple yet profound: let AI learn from humanity’s collective memory, curated and preserved by libraries. This act of sharing feels like librarianship at its noblest—quietly empowering innovation while protecting cultural integrity. And yet, there’s always a thin line between use and misuse; once data leaves the stacks, who ensures its ethical handling?

At the same time, a strange irony has surfaced. Librarians are now being asked to find AI-hallucinated books—titles that exist only in the imagination of chatbots [2]. It’s almost poetic: AI depends on libraries for truth, yet it also invents illusions that send people back to those very libraries for verification. Many of my colleagues describe it as part detective work, part myth-busting. No wonder librarianship today demands as much digital literacy as human empathy.

Meanwhile, in the name of efficiency and inclusivity, many libraries are turning to automated diversity audit tools to evaluate their collections [3]. But new research warns that these systems can flatten identities and miss local nuances. It’s a reminder that algorithms, no matter how elegant, cannot replace community understanding. By the way, I find this debate refreshing—it forces us to revisit what “representation” truly means beyond checkboxes and metadata.

Encouragingly, the profession isn’t shying away from these complexities. Across institutions, librarians are enrolling in AI literacy programs, attending workshops, and even taking up newly created roles such as Director of AI or AI Librarian [4]. I find this deeply symbolic: librarians stepping out of the reactive corner into leadership positions. From Stony Brook to San José, they’re proving that AI is not an external force to be feared but a field to be shaped—ethically, critically, and confidently.

All of this feeds into a growing scholarly and professional conversation about aligning AI with the library’s enduring mission—access, equity, and trust. New frameworks from IFLA, Frontiers, and several universities emphasize that libraries must be partners in AI development, not mere users [5]. The message is clear: technology must bend toward human values, not the other way around.

So yes, the AI wave has reached the library world—but it’s not a tidal surge of disruption. It’s more like a steady current of reinvention. From digitized archives feeding neural networks to librarians decoding machine-made myths, the profession is finding its rhythm again.

And as I see it, this is the dawn of a new librarianship—one that reads, writes, and reasons alongside the machines, but always, always in service of humanity.


#AIinLibraries #Librarianship #DigitalTransformation #ArtificialIntelligence #LibraryInnovation #EthicalAI

References:
[1] https://apnews.com/article/e096a81a4fceb2951f232a33ac767f53
[2] https://www.404media.co/librarians-are-being-asked-to-find-ai-hallucinated-books/
[3] https://arxiv.org/abs/2505.14890
[4] https://about.proquest.com/en/blog/2025/bridging-the-ai-skills-gap-a-new-literacy-program-for-academic-libraries/
[5] https://www.ifla.org/news/just-published-new-horizons-in-artificial-intelligence-in-libraries/

Generative AI in Academic Libraries: Ethical, Pedagogical, Labour, and Equity Challenges

Generative Artificial Intelligence (AI) has emerged as a disruptive technology with transformative potential for academic libraries. The *Library Trends* two-part series (Vol. 73, Issues 3 & 4, 2025) provides a foundational exploration of AI’s impact on libraries from multiple perspectives, including ethics, pedagogy, labour, and decolonial approaches.

Ethical Challenges and Bias in Generative AI

Generative AI systems pose significant ethical challenges that academic libraries must navigate carefully. One key concern is algorithmic bias, where AI models trained on historical data amplify existing societal inequities, leading to unfair or inaccurate information retrieval outcomes. A 2025 scoping review by Igbinovia highlights how AI biases affect Information Retrieval Systems (IRS) and calls upon LIS professionals to engage in ethical data curation, algorithmic auditing, and policy advocacy to mitigate harm [1].

Beyond bias, reliable and trustworthy output remains a challenge. Generative AI is prone to “hallucinations,” producing factually incorrect or fabricated information, which can impair academic integrity [2]. Georgetown University’s guidance emphasises that AI-generated text must be critically evaluated and transparently attributed to avoid plagiarism and misinformation [3].
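To make "critically evaluated" concrete, here is a minimal sketch of one small, automatable first pass: pulling DOI-shaped strings out of AI-generated text and checking them against a trusted list (for instance, a library's link-resolver index). Everything here is illustrative, the helper names and the sample DOIs are invented; a real workflow would follow up by querying a registry such as Crossref rather than relying on a local list.

```python
import re

# Matches DOI-shaped strings such as 10.1080/07317131.2025.2512282
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+\b")

def extract_dois(text):
    """Return every DOI-shaped string found in a passage of text."""
    return DOI_PATTERN.findall(text)

def flag_suspect_citations(text, known_dois):
    """Split extracted DOIs into those present in a trusted set and
    those that are not. Unknown DOIs are not necessarily fake, but
    they are the ones a librarian should verify by hand or via a
    registry lookup before trusting the citation."""
    found = extract_dois(text)
    known = [d for d in found if d in known_dois]
    unknown = [d for d in found if d not in known_dois]
    return known, unknown
```

A format check like this cannot prove a citation is real, of course; it only narrows down which ones deserve human scrutiny, which is exactly where professional judgement comes back in.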

Ethical AI practice mandates human accountability, transparency, data privacy, and fairness [2][4]. Stahl et al. (2022) link these principles to European regulations, emphasising protection of fundamental rights in AI governance [5]. Researchers advocate integrating moral values into AI systems through frameworks such as utilitarianism, deontology, virtue ethics, and care ethics to promote equitable AI designs [6]. Virtue ethics, in particular, offers nuanced guidance focusing on moral character in decision-making, echoing the calls in *Library Trends* for character-based ethical frameworks around AI use [7][5].

AI Literacy: Skills and Pedagogy in Academic Libraries

Effective AI literacy emerges as a critical response to ethical and practical challenges. Leo S. Lo’s framework for AI literacy in academic libraries underscores the need for broad technical knowledge, ethical awareness, critical thinking, and practical skills to empower users and librarians alike [8]. The widespread recognition of AI’s impact has driven many academic libraries to develop literacy programs; Clarivate and ACRL Choice launched a free eight-week micro-course on AI literacy essentials addressing this urgent need [9].

Studies consistently reveal gaps in LIS professionals’ preparedness to teach AI literacy, with softer ethical competencies often stronger than harder technical skills [10]. Pedagogical research stresses incorporating critical information literacy, enabling users to evaluate biases and misinformation in AI-generated content [7][11]. Workshop case studies demonstrate successful models for teaching responsible AI use grounded in theoretical frameworks such as post-phenomenology and critical pedagogy [12].

Impacts on Library Labour and Professional Practice

Generative AI is reshaping library workflows and professional roles, presenting both opportunities and disruptions. Research shows growing adoption of AI tools to improve productivity in cataloguing, classification, reference, and research services [13]. However, concerns persist about job displacement, skill obsolescence, and the ethical use of automation [7][14].

Luo’s survey highlights varied librarian experiences using AI in daily tasks, emphasising the need for ongoing training and support [14]. The impact on labour extends to how libraries organise instruction and reference service work—areas analysed in *Library Trends* through the lens of material conditions of instruction and professional identity shifts [7]. Scholars call for thoughtful policy development to balance AI efficiency gains with humane labour practices that preserve professional autonomy [15].

Addressing Algorithmic Bias in Information Retrieval

Algorithmic bias is widely acknowledged as a serious risk in library AI applications. Workshops like BIAS 2025 at SIGIR concentrate on developing strategies for fairer search and recommendation systems [16]. These initiatives complement academic calls for algorithmic audits and inclusion of diverse datasets to improve AI fairness and transparency [1]. LIS professionals’ role is pivotal in advocating for ethical AI in information retrieval, ensuring algorithms do not perpetuate discriminatory outcomes. Training in algorithmic literacy allows librarians to audit AI tools critically and promote equitable access to information [1].
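One simple quantity such an audit can start from is exposure: who occupies the most visible slots in a ranked result list. The sketch below, a toy illustration with invented group labels, computes each group's share of position-weighted exposure using the common 1/log2(rank+1) discount, so a group buried at the bottom of the ranking registers a smaller share even if it appears equally often.

```python
import math
from collections import defaultdict

def exposure_by_group(ranked_items, k=10):
    """Position-weighted exposure share per group in a ranked list.
    Each item is (item_id, group); rank 1 carries the most weight,
    discounted by the standard 1/log2(rank + 1) factor."""
    exposure = defaultdict(float)
    for rank, (_, group) in enumerate(ranked_items[:k], start=1):
        exposure[group] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {g: e / total for g, e in exposure.items()}
```

Comparing these shares against a collection's actual composition is one concrete way a librarian with basic algorithmic literacy can put numbers behind a suspicion that a discovery system is skewed.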

Decolonial and Equity-Oriented AI Perspectives

Decolonial approaches to AI demand centring Indigenous knowledge systems and challenging Western epistemologies embedded in AI designs. Works like those by Cox and Jimenez in *Library Trends* highlight the necessity of decolonising digital libraries through ethical AI frameworks [7]. Such perspectives align with broader global calls to recognise AI’s sociocultural impacts and counteract systemic biases [7].

These approaches intersect with data privacy and user equity concerns, emphasising transparency, inclusiveness, and community engagement as core principles for responsible AI governance in libraries [17].

Future Directions and Recommendations

The converging research points to several actionable recommendations for academic libraries integrating generative AI:

  • Develop comprehensive AI literacy programs that include ethics, critical thinking, and technical training for librarians and patrons [8][9].
  • Engage in ongoing algorithmic auditing and bias mitigation efforts, leveraging multi-disciplinary partnerships to ensure fair and transparent systems [1][16].
  • Adopt ethical frameworks, including virtue ethics, to guide AI policy, design, and usage decisions, emphasising accountability and human flourishing [7][5][6].
  • Support library labour through upskilling and redefining roles to optimise human-AI collaboration rather than simple automation-driven displacement [7][14].
  • Incorporate decolonial methodologies in AI development and deployment to elevate marginalised perspectives and knowledge systems [7].
  • Maintain vigilant attention to data privacy and user consent within AI systems, upholding trust and ethical standards [2].


Sources:

  • [1] Artificial intelligence algorithm bias in information retrieval: Implications for LIS professionals. https://www.tandfonline.com/doi/full/10.1080/07317131.2025.2512282
  • [2] Generative AI Ethics: Concerns and How to Manage Them? https://research.aimultiple.com/generative-ai-ethics/
  • [3] Ethics & AI – Artificial Intelligence (Generative) Resources https://guides.library.georgetown.edu/ai/ethics
  • [4] AI Ethical Guidelines. https://library.educause.edu/resources/2025/6/ai-ethical-guidelines
  • [5] Philosophy and Ethics in the Age of Artificial Intelligence https://jisem-journal.com/index.php/journal/article/download/9232/4266/15377
  • [6] Integrating Moral Values in AI: Addressing Ethical … https://journals.mmupress.com/index.php/jiwe/article/view/1255
  • [7] Library Trends completes two-part series on AI and libraries https://ischool.illinois.edu/news-events/news/2025/09/library-trends-completes-two-part-series-ai-and-libraries
  • [8] AI Literacy: A Guide for Academic Libraries by Leo S. Lo https://digitalrepository.unm.edu/ulls_fsp/210/
  • [9] Bridging the AI skills gap: Literacy program academic … https://about.proquest.com/en/blog/2025/bridging-the-ai-skills-gap-a-new-literacy-program-for-academic-libraries/
  • [10] AILIS 1.0: A new framework to measure AI literacy in library and information science (LIS). https://www.sciencedirect.com/science/article/abs/pii/S0099133325001144
  • [11] Information Literacy for Generative AI https://edtechbooks.org/ai_in_education/information_literacy_for_generative_ai?tab=images
  • [12] Fostering AI Literacy in Undergraduates: A ChatGPT Workshop Case Study https://digitalcommons.lmu.edu/cgi/viewcontent.cgi?article=1178&context=librarian_pubs
  • [13] Application of generative artificial intelligence in library operations and service delivery: A scoping review. https://www.tandfonline.com/doi/full/10.1080/07317131.2025.2467574
  • [14] Library Trends examines generative AI in libraries http://ischool.illinois.edu/news-events/news/2025/06/library-trends-examines-generative-ai-libraries
  • [15] Leo Lo – libraries #generativeai #openaccess #innovation https://www.linkedin.com/posts/leoslo_libraries-generativeai-openaccess-activity-7269345269811408896-jWcM
  • [16] International Workshop on Algorithmic Bias in Search and Recommendation (BIAS 2025) https://dl.acm.org/doi/10.1145/3726302.3730357
  • [17] Exploring the integration of artificial intelligence in libraries https://ijlsit.org/archive/volume/9/issue/1/article/3116
  • [18] Generative artificial intelligence in the activities of academic libraries of public universities in Poland. https://www.sciencedirect.com/science/article/abs/pii/S0099133325000394
  • [19] Practical Considerations for Adopting Generative AI Tools in Academic Libraries https://www.tandfonline.com/doi/full/10.1080/01930826.2025.2506151?src=exp-la
  • [20] The transformative potential of Generative AI in academic library access services: Opportunities and challenges. https://journals.sagepub.com/doi/10.1177/18758789251332800
  • [21] How National Libraries Are Embracing AI for Digital Transformation. https://librarin.eu/how-national-libraries-are-embracing-ai-for-digital-transformation/
  • [22] International Workshop on Algorithmic Bias in Search and Recommendation https://biasinrecsys.github.io/sigir2025/
  • [23] Generative Artificial Intelligence and Its Implications … https://www.rfppl.co.in/subscription/upload_pdf/single-pdf(19-25)-1746421080.pdf
  • [24] Investigating the “Feeling Rules” of Generative AI and Imagining Alternative Futures.  https://www.inthelibrarywiththeleadpipe.org/2025/ai-feeling-rules/

Bridging Stacks and Circuits: Rethinking Library Science Curriculum for the AI Era

When I imagine redesigning the Library and Information Science curriculum for the age of AI, I see it semester by semester, like walking through the library stacks, each level taking me closer to new knowledge, but always with a familiar fragrance of books and values.

Semester 1 – The Roots
Here I would begin with Foundations of Library Science, Information Sources & Services, and alongside them introduce Introduction to AI and Data Literacy. Students should learn what algorithms are, how language models work, and why data matters. Just remember, this is not to turn them into computer scientists, but into informed professionals who can converse with both technology and community.

Semester 2 – The Tools
This stage could focus on Knowledge Organization, Cataloguing and Metadata, but reframed to show how AI assists in subject indexing, semantic search, and linked data. Alongside, a course on Digital Libraries and Discovery Systems will let them experiment with AI-powered platforms. By the way, assignments could include building small datasets and watching how AI classifies them — both the brilliance and the flaws.
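Just to show how small such an assignment could be, here is a toy multinomial naive Bayes classifier, written from scratch so students can see every moving part, that assigns a subject label to a short catalogue description. The class name and training examples are invented for illustration; real metadata workflows use far richer models and controlled vocabularies.

```python
import math
from collections import Counter, defaultdict

class TinySubjectClassifier:
    """A minimal multinomial naive Bayes classifier for assigning
    subject labels to short catalogue descriptions. Students can
    inspect word counts directly to see why a prediction was made,
    including where it goes wrong."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, examples):
        """examples: iterable of (description, label) pairs."""
        for text, label in examples:
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.label_counts[label] += 1
            self.vocab.update(words)

    def predict(self, text):
        """Return the label with the highest log posterior,
        using add-one smoothing for unseen words."""
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

Training it on a handful of hand-made records and then feeding it deliberately ambiguous descriptions is exactly the kind of exercise that reveals "both the brilliance and the flaws" at once.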

Semester 3 – The Questions
Here ethics must enter the room strongly. A full course on AI, Ethics, and Information Policy is essential: patron privacy, copyright, algorithmic bias, transparency. At the same time, practical subjects like Digital Curation and Preservation should demonstrate how AI restores manuscripts, enhances images, or predicts file degradation. No wonder students will begin to see AI as both a tool and a responsibility.

Semester 4 – The Bridge
I see this as a turning point: courses on Human–AI Interaction in Libraries, Information Literacy Instruction in the AI Era, and Data Visualization for Librarians. Students would learn to teach communities about AI tools, to verify machine answers, and to advocate for responsible use. A lab-based course could even simulate AI chatbots for reference desks, showing how humans must stay in the loop.
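A lab simulation of that kind can start very small. The sketch below, with invented FAQ entries and a made-up confidence threshold, answers a reference query only when it is reasonably sure, and otherwise escalates to a human librarian; the escalation path is the whole point of the exercise.

```python
# A toy reference-desk assistant: it answers only when its keyword
# match is confident enough, otherwise it hands off to a human.
# The FAQ entries and the 0.5 threshold are invented for illustration.
FAQ = {
    frozenset({"opening", "hours"}): "The library is open 9am-9pm on weekdays.",
    frozenset({"renew", "book"}): "You can renew loans twice through your account page.",
    frozenset({"interlibrary", "loan"}): "Interlibrary loan requests take 3-5 working days.",
}

def answer(query, threshold=0.5):
    """Return (reply, escalated). Escalates when no FAQ entry has
    enough of its keywords covered by the query."""
    words = set(query.lower().replace("?", "").split())
    best_reply, best_overlap = None, 0.0
    for keywords, reply in FAQ.items():
        overlap = len(keywords & words) / len(keywords)
        if overlap > best_overlap:
            best_reply, best_overlap = reply, overlap
    if best_overlap >= threshold:
        return best_reply, False
    return "Let me connect you with a librarian.", True
```

Students can then probe where the bot fails, lower the threshold and watch it answer confidently and wrongly, which makes the human-in-the-loop argument far more vivid than any lecture.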

Semester 5 – The Expansion
By now, students are ready for deeper exploration. They could take electives like AI in Scholarly Communication (covering plagiarism detection, trend forecasting, citation networks) or AI for Community Engagement (local language NLP, accessibility, inclusive design). At the same time, collaboration with computer science or digital humanities departments could be formalized as joint workshops.

Semester 6 – The Future
The final stage should be open-ended: a Capstone Project in AI and Libraries, where each student selects a challenge — say, AI in cataloguing, or a chatbot for local history archives — and builds a small prototype or research study. Supplement this with an Internship or Residency in a library, tech lab, or cultural institution. Just imagine the confidence this gives: they graduate not as passive observers of AI but as active participants in shaping it.

And beyond…
I must not forget lifelong learning. The curriculum should be porous, allowing micro-credentials, short courses, and professional updates, because AI won’t stop evolving. In fact, it will keep testing us — and so our readiness must be continuous.

Looking back at this imagined curriculum, I feel it keeps the spirit of librarianship alive — service, access, ethics — while opening the doors to AI-driven realities. It is like adding a new wing to the old library: modern, glowing, full of machines perhaps, but still part of the same house of knowledge where the librarian remains a human guide.

From Stacks to Algorithms: Librarianship’s New Chapter

I remember when the word artificial intelligence first started appearing in library journals; it felt distant, almost experimental, as if it belonged more to labs than to our reading rooms. But just yesterday I came across a note from the American Library Association — they have now published guidance for school librarians on how to use AI in their everyday work [1]. No wonder, because librarians today are juggling so many roles: teachers, mentors, administrators, sometimes even technologists. The ALA’s advice is not about replacing them, but about helping them — streamlining tasks, improving communication, and yes, teaching students how to use AI ethically (plagiarism, citations, authorship, all those tricky parts).

By the way, it is not just policy notes. At Illinois, the journal Library Trends has just completed a two-part special issue on generative AI and libraries [2]. I skimmed through some of the abstracts: studies on how students use ChatGPT, how faculty perceive these tools, case studies of AI literacy instruction. This is serious scholarship, freely available, meant to guide practice. It reminds me of my early days in the profession, when such research gave us the language to argue for budgets and staff — and sometimes, just the courage to try new things.

And then, in Prague, librarians and researchers gathered under the banner of an “AI Knowledge Café,” more than 650 participants thinking together about the place of libraries in national AI strategies [3]. Imagine that: librarians not just adopting AI tools quietly, but sitting at the policy table, influencing how society will treat knowledge, ethics, and inclusion in the age of algorithms.

When I read all this, I feel both hopeful and cautious. Hopeful, because libraries are no longer seen as passive — we are active shapers of how AI unfolds. Cautious, because guidance and journals and cafés will mean little without real resources, training, and recognition, especially in countries like ours where libraries carry such a heavy heritage burden across many languages.

Still, I like to think that this is the beginning of a new chapter. Librarianship in the AI age is not a threat to our role, but a chance to re-articulate it. And in my heart, I feel grateful to be part of this transition — from catalog cards to chatbots, from dusty stacks to digital literacy.


References

[1] American Library Association. AI Guidance for School Librarians. Published September 2025. https://www.ala.org/news/2025/09/ai-guidance-school-librarians

[2] iSchool at Illinois. Library Trends Completes Two-Part Series on AI and Libraries. Published September 2025. https://ischool.illinois.edu/news-events/news/2025/09/library-trends-completes-two-part-series-ai-and-libraries

[3] ALA / IFLA. Libraries Towards a National AI Strategy (AI Knowledge Café). September 2025. https://connect.ala.org/acrl/discussion/libraries-towards-a-national-ai-strategy

Why India Needs Libraries at the Heart of Its National AI Strategy

Artificial Intelligence (AI) is rapidly reshaping how societies learn, work, and connect. As India builds its national AI strategy, there is an urgent need to ask: who will ensure that AI development remains ethical, inclusive, and accessible to every citizen? One powerful answer lies in our libraries.

Think about it. For decades, libraries have been safe spaces where anyone, regardless of background, could walk in and learn. Whether it was a student preparing for exams, a farmer checking market information, or a job seeker updating their resume, libraries have been bridges to opportunity. In the age of AI, they can once again be the guiding hand that helps people navigate complexity and change.

  • Guardians of Ethics and Accountability
    Libraries can champion transparency, fairness, and human oversight in AI systems adopted by public institutions.
  • Protectors of Privacy and Intellectual Freedom
    Library principles of confidentiality and equitable access align perfectly with India’s need for citizen-centric AI governance.
  • AI and Digital Literacy Hubs
    Just as libraries once taught computer literacy, they can now lead community workshops, training, and resources on AI literacy.
  • Upskilling the Workforce
    Librarians must be trained to use AI in cataloguing, research support, and community services—ensuring the profession adapts and thrives.
  • Bridging the Digital Divide
    Rural and underserved communities can access AI tools through public libraries, preventing exclusion from India’s digital transformation.
  • Policy Participation
    Libraries should have a seat at the table in national AI governance—bringing the voices of ordinary citizens into policy-making.

A Call to Action for Librarians in India

Librarians must step forward to:

  • Advocate for their role in national AI consultations.
  • Develop pilot projects that showcase responsible AI use in library services.
  • Build partnerships with universities, civil society, and government bodies to amplify their impact.

A Call to Action for the Government of India

To truly build an AI for All strategy, the Government of India should:

  • Recognise libraries as strategic partners in AI education and governance.
  • Fund training and digital infrastructure for libraries.
  • Mandate representation of library associations in AI policy consultations.

Final Word

AI is like electricity—it will power every sector of life in the coming years. Libraries are the transformers that can make this power safe, reliable, and accessible to all. If India wants an inclusive AI future, it must weave libraries into its national AI strategy.

Librarians: this is your moment to lead.

Government: this is your chance to listen.