Bridging Stacks and Circuits: Rethinking Library Science Curriculum for the AI Era

When I imagine redesigning the Library and Information Science curriculum for the age of AI, I see it semester by semester, like walking through the library stacks, each level taking me closer to new knowledge, but always with a familiar fragrance of books and values.

Semester 1 – The Roots
Here I would begin with Foundations of Library Science, Information Sources & Services, and alongside them introduce Introduction to AI and Data Literacy. Students should learn what algorithms are, how language models work, and why data matters. Just remember, this is not to turn them into computer scientists, but into informed professionals who can converse with both technology and community.

Semester 2 – The Tools
This stage could focus on Knowledge Organization, Cataloguing and Metadata, but reframed to show how AI assists in subject indexing, semantic search, and linked data. Alongside, a course on Digital Libraries and Discovery Systems would let them experiment with AI-powered platforms. For instance, assignments could include building small datasets and watching how AI classifies them — both the brilliance and the flaws.
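
One way such an assignment might look in practice is sketched below (a minimal illustration, assuming Python with scikit-learn installed; the titles and subject labels are invented for the exercise). Students train a tiny subject classifier, then probe it with unseen titles and discuss where it shines and where it stumbles.

```python
# A toy classroom exercise: train a tiny subject classifier on a handful of
# invented titles, then inspect its predictions on unseen ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

titles = [
    "Introduction to organic chemistry",       # Science
    "A field guide to Himalayan birds",        # Science
    "Medieval trade routes of South Asia",     # History
    "The partition of 1947: an oral history",  # History
    "Classical Carnatic music theory",         # Arts
    "Folk dance traditions of Gujarat",        # Arts
]
subjects = ["Science", "Science", "History", "History", "Arts", "Arts"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(titles, subjects)

# Students probe the model with unseen titles and debate whether the labels
# make sense -- and why such a tiny training set makes the model brittle.
for query in ["Bird migration and climate change",
              "A history of Indian classical dance"]:
    print(query, "->", model.predict([query])[0])
```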

Semester 3 – The Questions
Here ethics must enter the room, and enter it firmly. A full course on AI, Ethics, and Information Policy is essential: patron privacy, copyright, algorithmic bias, transparency. At the same time, practical subjects like Digital Curation and Preservation should demonstrate how AI restores manuscripts, enhances images, or predicts file degradation. Unsurprisingly, students will begin to see AI as both a tool and a responsibility.

Semester 4 – The Bridge
I see this as a turning point: courses on Human–AI Interaction in Libraries, Information Literacy Instruction in the AI Era, and Data Visualization for Librarians. Students would learn to teach communities about AI tools, to verify machine answers, and to advocate for responsible use. A lab-based course could even simulate AI chatbots for reference desks, showing how humans must stay in the loop.
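
A lab exercise along those lines could start from something as small as the sketch below (plain Python; the FAQ entries and wording are invented), with students then debating what the bot should answer on its own and when it must hand over to a person.

```python
# A toy reference-desk "chatbot": it answers only the questions it can match
# against a small FAQ and escalates everything else to a human librarian.
FAQ = {
    "opening hours": "The library is open 9:00 to 20:00 on weekdays.",
    "renew a book": "You can renew a loan twice from your account page.",
    "wifi password": "Please ask at the front desk with your library card.",
}

def reference_bot(question: str) -> str:
    q = question.lower()
    for keywords, answer in FAQ.items():
        # Answer only if every keyword for an FAQ entry appears in the question.
        if all(word in q for word in keywords.split()):
            return answer
    # Human in the loop: anything unmatched goes to a librarian.
    return "I am not sure about that; let me connect you with a librarian."

if __name__ == "__main__":
    print(reference_bot("What are your opening hours?"))
    print(reference_bot("Do you hold palm-leaf manuscripts in the archive?"))
```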

Semester 5 – The Expansion
By now, students are ready for deeper exploration. They could take electives like AI in Scholarly Communication (covering plagiarism detection, trend forecasting, citation networks) or AI for Community Engagement (local language NLP, accessibility, inclusive design). At the same time, collaboration with computer science or digital humanities departments could be formalized as joint workshops.

Semester 6 – The Future
The final stage should be open-ended: a Capstone Project in AI and Libraries, where each student selects a challenge — say, AI in cataloguing, or a chatbot for local history archives — and builds a small prototype or research study. Supplement this with an Internship or Residency in a library, tech lab, or cultural institution. Just imagine the confidence this gives: they graduate not as passive observers of AI but as active participants in shaping it.

And beyond…
I must not forget lifelong learning. The curriculum should be porous, allowing micro-credentials, short courses, and professional updates, because AI won’t stop evolving. In fact, it will keep testing us — and so our readiness must be continuous.

Looking back at this imagined curriculum, I feel it keeps the spirit of librarianship alive — service, access, ethics — while opening the doors to AI-driven realities. It is like adding a new wing to the old library: modern, glowing, full of machines perhaps, but still part of the same house of knowledge where the librarian remains a human guide.

From Stacks to Algorithms: Librarianship’s New Chapter

I remember when the term artificial intelligence first started appearing in library journals; it felt distant, almost experimental, as if it belonged more to labs than to our reading rooms. But just yesterday I came across a note from the American Library Association — they have now published guidance for school librarians on how to use AI in their everyday work [1]. No wonder, because librarians today are juggling so many roles: teachers, mentors, administrators, sometimes even technologists. The ALA’s advice is not about replacing them, but about helping them — streamlining tasks, improving communication, and yes, teaching students how to use AI ethically (plagiarism, citations, authorship, all those tricky parts).

By the way, it is not just policy notes. At Illinois, the journal Library Trends has just completed a two-part special issue on generative AI and libraries [2]. I skimmed through some of the abstracts: studies on how students use ChatGPT, how faculty perceive these tools, case studies of AI literacy instruction. This is serious scholarship, freely available, meant to guide practice. It reminds me of my early days in the profession, when such research gave us the language to argue for budgets and staff — and sometimes, just the courage to try new things.

And then, in Prague, librarians and researchers gathered under the banner of an “AI Knowledge Café,” more than 650 participants thinking together about the place of libraries in national AI strategies [3]. Imagine that: librarians not just adopting AI tools quietly, but sitting at the policy table, influencing how society will treat knowledge, ethics, and inclusion in the age of algorithms.

When I read all this, I feel both hopeful and cautious. Hopeful, because libraries are no longer seen as passive — we are active shapers of how AI unfolds. Cautious, because guidance and journals and cafés will mean little without real resources, training, and recognition, especially in countries like ours where libraries carry such a heavy heritage burden across many languages.

Still, I like to think that this is the beginning of a new chapter. Librarianship in the AI age is not a threat to our role, but a chance to re-articulate it. And in my heart, I feel grateful to be part of this transition — from catalog cards to chatbots, from dusty stacks to digital literacy.


References

[1] American Library Association. AI Guidance for School Librarians. Published September 2025. https://www.ala.org/news/2025/09/ai-guidance-school-librarians

[2] iSchool at Illinois. Library Trends Completes Two-Part Series on AI and Libraries. Published September 2025. https://ischool.illinois.edu/news-events/news/2025/09/library-trends-completes-two-part-series-ai-and-libraries

[3] ALA / IFLA. Libraries Towards a National AI Strategy (AI Knowledge Café). September 2025. https://connect.ala.org/acrl/discussion/libraries-towards-a-national-ai-strategy

Why India Needs Libraries at the Heart of Its National AI Strategy

Artificial Intelligence (AI) is rapidly reshaping how societies learn, work, and connect. As India builds its national AI strategy, there is an urgent need to ask: who will ensure that AI development remains ethical, inclusive, and accessible to every citizen? One powerful answer lies in our libraries.

Think about it. For decades, libraries have been safe spaces where anyone, regardless of background, could walk in and learn. Whether it was a student preparing for exams, a farmer checking market information, or a job seeker updating their resume, libraries have been bridges to opportunity. In the age of AI, they can once again be the guiding hand that helps people navigate complexity and change.

  • Guardians of Ethics and Accountability
    Libraries can champion transparency, fairness, and human oversight in AI systems adopted by public institutions.
  • Protectors of Privacy and Intellectual Freedom
    Library principles of confidentiality and equitable access align perfectly with India’s need for citizen-centric AI governance.
  • AI and Digital Literacy Hubs
    Just as libraries once taught computer literacy, they can now lead community workshops, training, and resources on AI literacy.
  • Upskilling the Workforce
    Librarians must be trained to use AI in cataloguing, research support, and community services—ensuring the profession adapts and thrives.
  • Bridging the Digital Divide
    Rural and underserved communities can access AI tools through public libraries, preventing exclusion from India’s digital transformation.
  • Policy Participation
    Libraries should have a seat at the table in national AI governance—bringing the voices of ordinary citizens into policy-making.

A Call to Action for Librarians in India

Librarians must step forward to:

  • Advocate for their role in national AI consultations.
  • Develop pilot projects that showcase responsible AI use in library services.
  • Build partnerships with universities, civil society, and government bodies to amplify their impact.

A Call to Action for the Government of India

To truly build an AI for All strategy, the Government of India should:

  • Recognise libraries as strategic partners in AI education and governance.
  • Fund training and digital infrastructure for libraries.
  • Mandate representation of library associations in AI policy consultations.

Final Word

AI is like electricity—it will power every sector of life in the coming years. Libraries are the transformers that can make this power safe, reliable, and accessible to all. If India wants an inclusive AI future, it must weave libraries into its national AI strategy.

Librarians: this is your moment to lead.

Government: this is your chance to listen.

Why India Needs to Develop Its Own GPU to Lead in AI

Artificial Intelligence (AI) is transforming the world, reshaping industries, economies, and societies at an unprecedented pace. For India, a nation with a burgeoning tech ecosystem and ambitions to become a global AI powerhouse, the path to leadership in AI hinges on addressing a critical bottleneck: access to high-performance computing infrastructure, particularly Graphics Processing Units (GPUs). While India has made strides in AI research, software development, and talent cultivation, its reliance on foreign GPUs poses a significant challenge. Developing indigenous GPUs is not just a matter of technological self-reliance but a strategic necessity for India to unlock its AI potential and secure its place in the global tech race.

The Central Role of GPUs in AI

GPUs are the backbone of modern AI systems. Unlike traditional Central Processing Units (CPUs), GPUs are designed for parallel processing, making them exceptionally efficient for the computationally intensive tasks that underpin AI, such as training deep learning models, running simulations, and processing vast datasets. From natural language processing models like those powering chatbots to computer vision systems enabling autonomous vehicles, GPUs are indispensable.
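
To make that difference concrete, the sketch below runs the kind of operation that dominates model training, a large matrix multiplication, first on the CPU and then on a GPU if one is present (a minimal sketch assuming PyTorch is installed; exact timings depend entirely on the hardware).

```python
# Compare a large matrix multiplication on CPU and, when available, on GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b
print(f"CPU matmul: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # make sure the data transfer has finished
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the GPU kernel to complete
    print(f"GPU matmul: {time.time() - start:.3f} s")
else:
    print("No GPU detected; the workload stays on the CPU.")
```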

However, the global GPU market is dominated by a handful of players, primarily NVIDIA, AMD, and Intel, all based in the United States. These companies control the supply chain, set pricing, and dictate the pace of innovation. For a country like India, which is heavily investing in AI to address challenges in healthcare, agriculture, education, and governance, dependence on imported GPUs creates vulnerabilities in terms of cost, accessibility, and strategic autonomy.

The Case for Indigenous GPU Development

  1. Reducing Dependency on Foreign Technology
    India’s AI ambitions are constrained by its reliance on foreign GPUs. Supply chain disruptions, geopolitical tensions, or export restrictions could limit access to these critical components, hampering AI development. For instance, recent global chip shortages exposed the fragility of depending on foreign semiconductor supply chains. By developing its own GPUs, India can achieve technological sovereignty, ensuring that its AI ecosystem is not at the mercy of external forces.
  2. Cost Efficiency for Scalability
    GPUs are expensive, and their costs can be prohibitive for startups, research institutions, and small enterprises in India. Importing high-end GPUs involves significant expenses, including taxes and logistics, which drive up the cost of AI development. Indigenous GPUs, tailored to India’s needs and produced locally, could be more cost-effective, enabling broader access to high-performance computing for academia, startups, and government initiatives. This democratization of access would foster innovation and accelerate AI adoption across sectors.
  3. Customization for India-Specific Use Cases
    India’s AI challenges are unique. From multilingual natural language processing for its diverse linguistic landscape to AI-driven solutions for agriculture in resource-constrained environments, India’s needs differ from those of Western markets. Foreign GPUs are designed for generalized, high-end applications, often with a one-size-fits-all approach. Developing homegrown GPUs allows India to create hardware optimized for its specific AI use cases, such as low-power chips for edge computing in rural areas or specialized architectures for processing Indian language datasets.
  4. Boosting the Semiconductor Ecosystem
    Building GPUs would catalyze the growth of India’s semiconductor industry, which is still in its nascent stages. It would require investment in chip design, fabrication, and testing, creating a ripple effect across the tech ecosystem. This would not only create high-skill jobs but also position India as a player in the global semiconductor market. Programs like the India Semiconductor Mission (ISM) and partnerships with global foundries could be leveraged to support GPU development, fostering innovation and reducing reliance on foreign manufacturing.
  5. National Security and Strategic Autonomy
    AI is increasingly a matter of national security, with applications in defense, cybersecurity, and intelligence. Relying on foreign hardware raises concerns about potential vulnerabilities, such as backdoors or supply chain manipulations. Indigenous GPUs would give India greater control over its AI infrastructure, ensuring that sensitive applications are built on trusted hardware. This is particularly critical as India expands its use of AI in defense systems, smart cities, and critical infrastructure.

Challenges in Developing Indigenous GPUs

While the case for India developing its own GPUs is compelling, the path is fraught with challenges. Designing and manufacturing GPUs requires significant investment in research and development (R&D), access to advanced fabrication facilities, and a skilled workforce. The global semiconductor industry is highly competitive, with established players benefiting from decades of expertise and economies of scale.

India also faces a talent gap in chip design and fabrication. While the country produces millions of engineering graduates annually, specialized skills in semiconductor design are limited. Bridging this gap will require targeted education and training programs, as well as collaboration with global leaders in the field.

Moreover, building a GPU is not just about hardware. It requires an ecosystem of software, including drivers, frameworks, and developer tools, to make the hardware usable for AI applications. NVIDIA’s dominance, for example, stems not only from its hardware but also from its CUDA platform, which has become a de facto standard for AI development. India would need to invest in a robust software ecosystem to complement its GPUs, ensuring seamless integration with popular AI frameworks like TensorFlow and PyTorch.

Steps Toward Indigenous GPU Development

  1. Government Support and Investment
    The government should prioritize GPU development under initiatives like the India Semiconductor Mission. Subsidies, grants, and tax incentives for R&D in chip design and manufacturing can attract private investment and foster innovation. Public-private partnerships, like those with companies such as Tata and Reliance, could accelerate progress.
  2. Collaboration with Global Players
    While the goal is self-reliance, India can benefit from partnerships with global semiconductor leaders. Technology transfer agreements, joint ventures, and collaborations with companies like TSMC or Intel could provide access to cutting-edge fabrication processes and expertise.
  3. Building a Skilled Workforce
    India must invest in education and training programs focused on semiconductor design, AI hardware, and related fields. Partnerships with institutions like IITs and IISc, as well as international universities, can help develop a pipeline of talent. Initiatives like the Chips to Startup (C2S) program can be expanded to include GPU-specific training.
  4. Fostering an Ecosystem for Innovation
    India should create a supportive environment for GPU development by building a robust software ecosystem, encouraging open-source contributions, and supporting startups working on AI hardware. Hackathons, innovation challenges, and incubators focused on semiconductor design can spur grassroots innovation.
  5. Leveraging Existing Strengths
    India’s strength in software development and IT services can be a foundation for building GPU-compatible software stacks. Companies like Wipro, Infosys, and startups in the AI space can contribute to developing frameworks and tools that make indigenous GPUs viable for AI applications.

The Road Ahead

Developing indigenous GPUs is a bold but necessary step for India to achieve its AI ambitions. It aligns with the broader vision of “Atmanirbhar Bharat” (Self-Reliant India) and positions the country as a global leader in technology. While the journey will be challenging, the rewards are immense: reduced dependency, cost efficiency, customized solutions, and enhanced national security.

India has already shown its ability to leapfrog in technology, from UPI in digital payments to Aadhaar in biometric identification. By investing in GPU development, India can take a similar leap in AI, creating a future where its technological innovations are not just powered by India but also made in India. The time to act is now—India’s AI revolution depends on it.

How AI Tools Revolutionize Academic Research: Top 10 Free Tools to Boost Your Workflow

Artificial Intelligence (AI) is transforming academic research by streamlining repetitive tasks, uncovering insights, and enhancing productivity across every stage of the research process. From conducting literature reviews to analyzing data and polishing manuscripts, AI tools save time and improve efficiency. In this blog post, we explore how AI tools can elevate your research and highlight 10 free AI tools (with free plans) that support various research stages, complete with descriptions and links to get you started.


How AI Tools Enhance Academic Research

AI tools empower researchers by automating and optimizing key research tasks. Here’s how they help at different stages:

  • Literature Review: AI tools search vast academic databases, summarize papers, and identify connections between studies, making it easier to stay updated and find relevant sources.
  • Data Collection: Extract data from PDFs, texts, or online sources quickly, reducing manual effort.
  • Data Analysis: Analyze large datasets, identify patterns, and create visualizations with minimal coding.
  • Academic Writing: Improve clarity, grammar, and academic tone while generating outlines or paraphrasing content.
  • Citation Management: Automate citation formatting and reference organization across styles like APA or MLA.
  • Collaboration: Organize research materials, visualize citation networks, and share findings with teams.
  • Translation: Break language barriers by translating papers in real-time for global accessibility.

Now, let’s dive into the top 10 AI tools with free plans that can supercharge your academic research.


Top 10 Free AI Tools for Academic Research

1. Semantic Scholar

  • What It Does: A powerful AI-driven search engine for accessing over 200 million academic papers. It generates concise summaries, recommends related studies, and highlights connections between papers, perfect for literature reviews.
  • Free Plan: Completely free with unlimited searches and access to open-access papers (paywalled papers depend on your subscriptions).
  • Best For: Finding and summarizing relevant research quickly.
  • Website: semanticscholar.org
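
For researchers comfortable with a little scripting, Semantic Scholar also exposes a public Graph API. Below is a minimal sketch (assuming the requests package is installed; the endpoint and field names follow the publicly documented API and may change over time):

```python
# Query the Semantic Scholar Graph API for a handful of papers on a topic.
import requests

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={
        "query": "artificial intelligence in academic libraries",
        "fields": "title,year,url",
        "limit": 5,
    },
    timeout=30,
)
resp.raise_for_status()

for paper in resp.json().get("data", []):
    print(paper.get("year"), "-", paper.get("title"))
```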

2. Elicit

  • What It Does: An AI research assistant that searches over 125 million papers, automates literature reviews, summarizes findings, and extracts data. It’s ideal for empirical research but less suited for theoretical studies.
  • Free Plan: Free access to search, summarization, and data extraction with no strict limits (verify results due to ~90% accuracy).
  • Best For: Streamlining literature reviews and data extraction.
  • Website: elicit.com

3. Research Rabbit

  • What It Does: A free tool that creates visual citation networks, suggests related papers, and organizes research collections. It’s great for exploring research connections and collaborating with peers.
  • Free Plan: Fully free with unlimited collections and paper additions (note: the interface may take some getting used to).
  • Best For: Organizing research and discovering related studies.
  • Website: researchrabbit.ai

4. Zotero

  • What It Does: A reference management tool that uses AI to suggest papers, organize citations, and generate bibliographies in various formats. It integrates seamlessly with word processors.
  • Free Plan: Free with unlimited reference storage; cloud syncing limited to 300 MB (expandable with paid plans).
  • Best For: Managing citations and references effortlessly.
  • Website: zotero.org

5. Scholarcy

  • What It Does: Summarizes research papers, articles, and book chapters into flashcards, highlighting key findings, limitations, and comparisons. It cuts screening time by up to 70%.
  • Free Plan: Summarize up to three documents per day; includes a browser extension for open-access and subscription-based papers.
  • Best For: Quickly digesting complex papers.
  • Website: scholarcy.com

6. ChatPDF

  • What It Does: Upload PDFs and interact with them via a chatbot to extract information or summarize content. It’s a time-saver for understanding dense research papers.
  • Free Plan: Upload two PDFs per day (up to 120 pages each) and ask 20 questions daily.
  • Best For: Extracting specific data from PDFs.
  • Website: chatpdf.com

7. Paperpal

  • What It Does: An AI writing assistant tailored for academia, offering grammar checks, paraphrasing, citation generation, and journal submission checks. It also supports literature searches and PDF analysis.
  • Free Plan: Basic grammar and style suggestions, 10 AI generations daily, and limited research features.
  • Best For: Polishing academic writing and translation.
  • Website: paperpal.com

8. NotebookLM

  • What It Does: A Google-powered tool that lets you upload up to 50 documents per notebook and generates summaries, audio overviews, or study guides. It’s perfect for organizing research materials.
  • Free Plan: Free with up to 100 notebooks, 50 sources per notebook, and daily limits on queries and audio summaries.
  • Best For: Summarizing and organizing research notes.
  • Website: notebooklm.google

9. AI2 Paperfinder

  • What It Does: Developed by the Allen Institute, this tool provides access to 8 million full-text papers and 108 million abstracts. It ranks search results by relevancy and exports citations in BibTeX or other formats.
  • Free Plan: Fully free with no limits on searches or citation exports.
  • Best For: Comprehensive literature searches and citation exports.
  • Website: paperfinder.allenai.org

10. DeepSeek

  • What It Does: A free large language model that answers research queries and synthesizes information. While not as advanced as premium models, it’s a solid option for general research assistance.
  • Free Plan: Fully free with no specific query limits (performance may vary for complex tasks).
  • Best For: General research queries on a budget.
  • Website: deepseek.com

Tips for Using AI Tools in Research

  • Verify Outputs: Tools like Elicit and ChatPDF may have errors (~90% accuracy for Elicit). Always cross-check results with original sources.
  • Combine Tools: Free plans have limitations (e.g., Scholarcy’s three-document cap). Use multiple tools to cover all research needs.
  • Maintain Integrity: AI should enhance, not replace, your critical thinking. Use these tools to boost productivity while ensuring originality.
  • Explore Paid Plans: If you hit free plan limits, consider paid upgrades for heavy use or advanced features.

Conclusion

AI tools are game-changers for academic research, helping you save time, uncover insights, and produce high-quality work. The 10 free tools listed above cover everything from literature reviews to citation management, making them accessible for students, researchers, and academics on a budget. Start exploring these tools today to streamline your research process and focus on what matters most—advancing knowledge.

Have a favorite AI research tool or need help with a specific research task? Share your thoughts in the comments below!

What Would S. R. Ranganathan Do in the Age of Generative AI if He Were Alive?

S.R. Ranganathan, the pioneering Indian librarian and mathematician, is best known for his Five Laws of Library Science and the development of the Colon Classification system. His work emphasised organising knowledge for accessibility, relevance, and user-centricity. If he were alive today, his approach to generative AI would likely be shaped by his knowledge organisation principles, focus on serving users, and innovative mindset. While it’s impossible to know exactly what he would have done, we can make informed speculations based on his philosophy and contributions.

  1. Applying the Five Laws to Generative AI
    Ranganathan’s Five Laws of Library Science (1931)—“Books are for use,” “Every reader his/her book,” “Every book its reader,” “Save the time of the reader,” and “The library is a growing organism”—could be adapted to generative AI systems, which are increasingly used to organise and generate knowledge. Here’s how he might have approached generative AI:
    Books are for use: Ranganathan would likely advocate for generative AI to be designed with practical utility in mind, ensuring it serves real-world needs, such as answering queries, generating content, or solving problems efficiently. He might push for AI interfaces that are intuitive and accessible to all users, much like a library’s catalog.
    Every reader his/her book: He would likely emphasise personalisation in AI systems, ensuring that generative AI delivers tailored responses to diverse users. For example, he might explore how AI could adapt outputs to different languages, cultural contexts, or knowledge levels, aligning with his goal of meeting individual user needs.
    Every book its reader: Ranganathan might focus on making AI-generated content discoverable and relevant, developing classification systems or metadata frameworks to organise AI outputs so users can easily find what they need. He could propose taxonomies for AI-generated text, images, or code to enhance retrieval.
    Save the time of the reader: He would likely prioritise efficiency, advocating for AI systems that provide accurate, concise, and relevant outputs quickly. He might critique models that produce verbose or irrelevant responses and push for prompt engineering techniques to streamline interactions.
    The library is a growing organism: Ranganathan would recognise generative AI as a dynamic, evolving system. He might encourage continuous updates to AI models, integrating new data and user feedback to keep them relevant, much like a library evolves with new books and technologies.
  2. Developing Classification Systems for AI Outputs
    Ranganathan’s Colon Classification system was a faceted, flexible approach to organising knowledge, allowing for complex relationships between subjects. He might apply this to generative AI by:
    Creating a taxonomy for AI-generated content: He could develop a faceted classification system to categorise outputs like text, images, or code based on attributes such as topic, format, intent, or audience. For example, a generated article could be tagged with facets like “subject: science,” “tone: formal,” or “purpose: education.” (A small illustrative sketch of such faceted tagging appears after this list.)
    Improving information retrieval: Ranganathan might work on algorithms to enhance the discoverability of AI-generated content, ensuring users can navigate vast outputs efficiently. He could integrate his classification principles into AI search systems, making them more precise and context-aware.
    Addressing ethical concerns: He would likely consider the ethical implications of AI-generated content, such as misinformation or bias, and propose frameworks to tag or filter outputs for reliability and fairness, aligning with his user-centric philosophy.
  3. Advancing AI for Libraries and Knowledge Management
    As a librarian, Ranganathan would likely focus on how generative AI could enhance library services and knowledge management:
    AI-powered library assistants: He might advocate for AI chatbots to assist patrons in finding resources, answering queries, or recommending materials, saving librarians’ time and improving user experience. For example, an AI could use natural language processing to interpret complex research queries and suggest relevant books or articles.
    Automating cataloguing: Ranganathan could explore generative AI for automating metadata creation or cataloguing, using models to summarise texts, extract keywords, or classify resources according to his Colon Classification system. This would align with his goal of saving time and improving access.
    Preserving cultural knowledge: Given his work in India, he might use AI to digitise and generate summaries of regional texts, manuscripts, or oral traditions, making them accessible globally while preserving cultural context.
  4. Ethical and Social Considerations
    Ranganathan’s user-focused philosophy suggests he would be concerned with the ethical and societal impacts of generative AI, as noted in sources discussing AI’s risks like misinformation and job displacement. He might:
    Promote equitable access: He would likely advocate for open-source AI models or affordable tools to ensure generative AI benefits diverse populations, not just affluent institutions or countries.
    Address misinformation: Ranganathan might develop guidelines for libraries to educate users about AI-generated content, helping them distinguish reliable outputs from “hallucinations” or deepfakes.
    Mitigate job displacement: While recognising AI’s potential to automate tasks, he might propose training programs for librarians to adapt to AI-driven workflows, ensuring human expertise remains central.
  5. Innovating with Generative AI
    Ranganathan was an innovator, so he might experiment with generative AI to push boundaries in knowledge organisation:
    AI for creative knowledge synthesis: He could use AI to generate new insights by synthesising existing literature, creating summaries or interdisciplinary connections that human researchers might overlook.
    AI in education: Drawing from his focus on accessibility, he might develop AI tools to generate educational content tailored to different learning styles, supporting students and educators.
    Collaborative AI systems: He might propose collaborative platforms where AI and librarians work together, with AI handling data-intensive tasks and humans providing critical judgment, aligning with his belief in human-centric systems.
  6. Critiquing and Shaping AI Development
    Ranganathan’s analytical mindset suggests he would critically examine generative AI’s limitations, such as data dependence, bias, and lack of true creativity. He might:
    Push for transparency: Advocate for clear documentation of AI training data and processes, ensuring users understand how outputs are generated.
    Enhance AI explainability: Develop frameworks to make AI decisions more interpretable, helping users trust and verify generated content.
    Focus on sustainability: Given the environmental impact of AI training, he might explore energy-efficient models or advocate for sustainable practices in AI development.
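
To make the faceted idea in point 2 concrete, here is a small illustrative sketch of how AI-generated items might carry facet tags and be retrieved by them (the facet names and records below are invented for illustration; this is not Colon Classification itself):

```python
# Facet-style tagging of AI-generated items, and retrieval by facet values.
from dataclasses import dataclass, field

@dataclass
class GeneratedItem:
    title: str
    facets: dict = field(default_factory=dict)  # e.g. subject, format, purpose

catalogue = [
    GeneratedItem("Summary of monsoon research",
                  {"subject": "science", "format": "text", "purpose": "education"}),
    GeneratedItem("Poster for reading week",
                  {"subject": "arts", "format": "image", "purpose": "outreach"}),
    GeneratedItem("Chatbot answer on copyright",
                  {"subject": "law", "format": "text", "purpose": "reference"}),
]

def find(items, **wanted):
    """Return items whose facets match every requested facet value."""
    return [i for i in items if all(i.facets.get(k) == v for k, v in wanted.items())]

for item in find(catalogue, format="text", purpose="education"):
    print(item.title)
```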

Conclusion
If S.R. Ranganathan were alive today, he would likely embrace generative AI as a tool to enhance knowledge organisation and accessibility while critically addressing its ethical and practical challenges. He would adapt his Five Laws to AI, develop classification systems for AI outputs, and leverage AI to improve library services and education. His focus would remain on serving users, ensuring equity, and advancing knowledge management in an AI-driven world. His innovative spirit and user-centric philosophy would make him a key figure in shaping generative AI’s role in libraries and beyond.

Chat with PDF files: AI Tools to Ask Questions to PDFs for Summaries and Insights

In today’s digital world, we are inundated with information, much of it locked away in PDF documents. Whether you are a student combing through research papers, a professional analysing detailed reports, or someone simply trying to extract crucial information from a large PDF, you’ve likely felt overwhelmed. But what if I told you that you could actually chat with those PDFs? Thanks to recent advancements in AI, this once far-fetched idea is now a reality.

The Power of AI in Document Analysis

AI-powered tools are transforming how we engage with PDFs, allowing us to swiftly access information, summarise content, and even query documents directly. These tools combine several cutting-edge technologies:

  1. Text Extraction: Utilising Optical Character Recognition (OCR) for scanned documents and PDF parsing libraries for digital PDFs.
  2. Natural Language Processing (NLP): AI analyses the extracted text to grasp content, structure, and context.
  3. Entity Recognition: Identifies specific entities such as names, dates, and organisations.
  4. Chat Integration: AI generates responses based on user queries and the document’s content.
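
A minimal sketch of the extraction-and-retrieval part of that pipeline is shown below (assuming pypdf and scikit-learn are installed; the file name is hypothetical, and a real tool would hand the top-ranked chunks to a language model rather than printing them):

```python
# The retrieval step behind "chat with PDF" tools: extract text, chunk it,
# and rank the chunks against a user question.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reader = PdfReader("report.pdf")  # hypothetical file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)
chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]

question = "What are the main findings?"
vectorizer = TfidfVectorizer().fit(chunks + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(chunks))[0]

# Show the three chunks most relevant to the question.
for idx in scores.argsort()[::-1][:3]:
    print(f"--- chunk {idx} (score {scores[idx]:.2f}) ---")
    print(chunks[idx][:300])
```

Top AI Tools for PDF Interaction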

Let’s explore some of the leading tools in this field:

  1. ChatPDF

ChatPDF allows you to upload any PDF and ask questions about its content. Ideal for textbooks, research papers, or business documents, it quickly generates answers based on the data within the PDF. It’s also available as a plugin within ChatGPT, making it even more accessible.

  2. PDF.ai

PDF.ai specialises in multi-language PDF interaction, making it perfect for users working across different languages. It enables dynamic conversations with documents, breaking down language barriers in document analysis.

  3. GPT-PDF by Humata

Built on GPT technology, this tool offers deep interaction with complex files like reports or whitepapers. It’s particularly useful for users needing to analyse and generate insights from technical documents.

  4. Ask Your PDF

Ask Your PDF stands out with its powerful semantic search capability, excelling at analysing multiple documents simultaneously. This makes it an excellent choice for comprehensive research projects that require synthesising information from various sources.

  5. Adobe Acrobat AI Assistant

Integrated into the widely used Adobe Acrobat, this AI assistant enhances document interaction while retaining Acrobat’s traditional editing capabilities. It’s a great option for users already familiar with the Adobe ecosystem.

  6. PDFgear (Open-Source Option)

For those who prefer open-source solutions, PDFgear offers notable advantages:

  • Its open-source framework ensures transparency and customisation.
  • It supports interactions with multiple PDF files in a single session.
  • It is compatible with various AI backends like OpenAI and Anthropic.
  • Local deployment options provide greater privacy and security.
  • Available through both a web interface and command-line option.

The Future of Document Interaction

These AI-powered PDF tools are just the beginning. As natural language processing and machine learning technologies continue to evolve, we can expect even more advanced document interaction capabilities. Imagine AI assistants that not only answer questions but also provide personalised insights, generate summaries tailored to your needs, or even create new documents based on the information contained within your PDFs.

Conclusion

The days of tediously scrolling through lengthy PDFs or relying solely on basic search functions are behind us. With these AI tools, we are entering an era where documents become interactive, responsive resources. Whether you’re a student, researcher, professional, or anyone who frequently works with PDFs, these tools can significantly streamline your workflow, making it easier than ever to extract and analyse information.

Have you tried any of these PDF tools? What’s been your experience? The world of AI-assisted document analysis is rapidly evolving, and it’s an exciting time to explore these new capabilities. As AI continues to push the boundaries of document interaction, the future promises even more innovative and powerful tools.

AI Tools in Education: Empowering Learning and Creativity

In recent years, artificial intelligence (AI) has made significant strides in various fields, and education is no exception. The integration of AI tools in education is revolutionising how we learn, teach, and collaborate. This blog post explores the exciting world of AI in education, focusing on different types of AI tools and their applications, as well as discussing the responsible use of this powerful technology.

Understanding Generative AI

Generative AI is a branch of artificial intelligence that focuses on creating new content such as text, images, audio, and video by learning from existing data. Unlike traditional AI, which primarily analyses and predicts outcomes based on input data, generative AI models can produce original outputs that mimic the characteristics of their training data.

This capability has led to significant interest and investment across various sectors, with tools like ChatGPT, DALL-E, and Midjourney demonstrating practical uses in text, image, audio, and video generation.

AI Tools for Various Educational Purposes

1. Chatbots and Text Generation

Several AI-powered chatbots and text-generation tools are available to assist students and educators:

  • ChatGPT: A versatile conversational AI for writing, coding, and tutoring.
  • Claude: Designed for various tasks with a focus on safety and ethical AI behaviour.
  • Google’s Gemini: A multimodal AI capable of understanding and generating text, images, videos, and audio.
  • Microsoft Copilot: Integrates into the Microsoft ecosystem for context-aware assistance.
  • Perplexity: An AI-powered search and answer engine.
  • Pi: An AI assistant designed for open-ended conversations and emotional support.
  • Grok: Unique AI with real-time access to X (formerly Twitter) for current events analysis.

For more specific text generation tasks, tools like HyperWrite, Smart Copy AI, Simplified AI Writer, Quillbot, and Copy.AI offer various features to improve writing efficiency and quality.

2. Research Assistance

AI tools can significantly enhance the research process:

  • Consensus AI: Scans millions of scientific papers to find relevant ones based on your query.
  • Connected Papers and Litmaps: Visualize research areas and discover related papers.
  • Research Rabbit: Assists with literature mapping and paper recommendations.
  • Scite: Analyses and compares citations across research papers.
  • Open Knowledge Maps: Emphasizes open access content and provides research topic overviews.
  • Paper Digest: Helps in writing literature reviews by extracting essential information from papers.
  • PDFgear: Offers AI-powered PDF manipulation and information extraction.
  • Paperpal and Jenni: Provide specialized AI-powered writing assistance for academic and scientific writing.

3. Writing Improvement

  • Grammarly: A free AI writing assistant that provides personalized suggestions to enhance your text across various platforms.
  • Trinka: Designed specifically for academic and technical writing, focusing on clarity and precision.

4. Learning and Teaching

  • Summarize.tech: Uses AI to summarize lengthy YouTube videos, condensing hours of content into key points.
  • Quizlet: An AI-powered learning platform offering interactive flashcards, practice tests, and study activities.
  • Curipod: Helps teachers create engaging lessons with interactive activities.
  • ClassPoint: An all-in-one teaching and student engagement tool that works within PowerPoint.
  • Yippity: Converts information into various types of questions for learning and assessment.
  • Coursebox: An AI-powered platform for creating and managing online courses.
  • Goodgrade AI: Assists in writing essays, summarizing documents, and generating citations.

5. Collaboration Tools

  • Otter.ai: Transcribes speech in real-time and offers collaboration features for document sharing and management.
  • Notion: A versatile digital workspace with AI capabilities for organizing research materials, managing projects, and facilitating collaboration.

Responsible Use of AI in Education

While AI tools offer tremendous benefits, it is crucial to use them responsibly. Here are some key considerations:

1. Avoid Plagiarism: Always review AI-generated content carefully, rephrase ideas in your own words, and cite AI-generated content when necessary.

2. Maintain Academic Integrity: Use AI as a brainstorming tool, not a shortcut for entire projects. Be transparent about AI usage in your work.

3. Protect Privacy: Read terms of service, avoid sharing sensitive information, and use AI tools that prioritize user privacy.

4. Apply Human Oversight: AI is not always accurate and may lack context or nuance. Verify its output, especially in critical fields like law, medicine, or academia.

5. Set Boundaries: Find a balance where AI enhances your creativity but does not replace your effort. The goal is to learn and develop your own skills.

6. Follow Institutional Guidelines: Adhere to your institution’s policies on AI use to maintain integrity and trust.

Conclusion

Generative AI is transforming education by offering powerful tools for learning, research, writing, and collaboration. By using these tools responsibly and ethically, students and educators can unlock new levels of creativity and efficiency in their academic pursuits. As AI continues to evolve, it is exciting to imagine the future possibilities in education and beyond.

Remember, while AI can be an invaluable assistant, it is your unique human perspective, critical thinking, and creativity that will truly set your work apart. Embrace AI as a tool to enhance your abilities, not replace them, and you will be well-equipped to thrive in the AI-augmented future of education.

Exploring Generative AI: ChatGPT and Its Top Alternatives

Generative AI has become a transformative force in the tech world, reshaping how we interact with technology and create content. In this blog post, we’ll dive into what Generative AI is, spotlight ChatGPT, and review some of the leading alternatives available today.

What is Generative AI?

Generative AI is a specialized field within artificial intelligence dedicated to creating new content—be it text, images, audio, or video. Unlike traditional AI, which focuses primarily on analyzing existing data and making predictions, Generative AI models can produce original outputs that closely mirror the characteristics of the data they were trained on. This capability has sparked significant interest and investment across various industries, from content creation to scientific research.

Generative AI leverages sophisticated algorithms and vast datasets to generate content that is often indistinguishable from human-created work. This has led to a surge in applications, including AI-driven art, automated writing assistants, and even AI-generated music. As businesses and individuals seek innovative ways to harness these capabilities, the field continues to evolve rapidly.

ChatGPT: A Deep Dive

ChatGPT, developed by OpenAI, stands out as one of the most versatile and well-known generative AI tools. Launched initially as a conversational AI, ChatGPT excels in understanding and generating human-like text. Its applications range from writing assistance and coding support to tutoring and customer service.

Key Features of ChatGPT:

  • Versatility: Capable of handling a wide range of tasks, including text generation, problem-solving, and interactive conversation.
  • User-Friendly Interface: Designed for ease of use with a straightforward chat-based interface.
  • Regular Updates: OpenAI frequently updates ChatGPT to improve performance and expand its capabilities.
  • Free and Paid Versions: Offers both free and subscription-based models, providing various levels of access to features.

Despite its strengths, ChatGPT does have limitations. Users may encounter occasional inaccuracies, and there are ongoing concerns about data privacy and the ethical use of AI-generated content.

Top Alternatives to ChatGPT

As AI technology evolves, several competitors have emerged, offering unique features and capabilities. Here’s a look at some of the top alternatives to ChatGPT:

1. Claude by Anthropic

Claude is designed with a strong emphasis on safety and ethical AI behavior. It excels in handling complex, multi-step tasks, making it ideal for research, analysis, and creative writing. Claude’s thoughtful and nuanced responses set it apart, although it may not be as widely known or available as some of its competitors.

Key Features:

  • Safety and Ethics: Focuses on ethical AI behaviour and safety.
  • Complex Task Handling: Suitable for intricate tasks requiring detailed analysis.

2. Google’s Gemini

Google’s Gemini pushes the boundaries of AI with its multimodal capabilities, enabling it to understand and generate text, images, videos, and audio. Integrated into Google’s extensive ecosystem, Gemini is designed for advanced search, content creation, and scientific research. Its full potential is still being realized, but it offers powerful tools for diverse applications.

Key Features:

  • Multimodal Capabilities: Handles various types of media.
  • Google Integration: Leveraging Google’s resources for enhanced functionality.

3. Microsoft Copilot

Microsoft Copilot integrates seamlessly into Microsoft products such as Word, Excel, and Visual Studio, providing context-aware assistance. It simplifies complex tasks, from document creation to data analysis, within the familiar Microsoft environment. However, its benefits are mainly limited to users within the Microsoft ecosystem and may require a subscription for full access.

Key Features:

  • Context-Aware Assistance: Provides help based on the context of the task.
  • Microsoft Integration: Works within Microsoft apps and tools.

4. Perplexity

Perplexity combines web search with AI-generated insights, offering a unique blend of search engine functionality and conversational AI. It provides transparency by including sources and supports a conversational interface for follow-up questions, making it ideal for quick research and fact-checking.

Key Features:

  • Transparency: Includes sources for AI-generated insights.
  • Conversational Interface: Allows for interactive follow-up questions.

5. Pi by Inflection AI

Pi is designed for open-ended conversations and emotional support. Emphasizing personality and relatability, Pi is a great companion for personal chats, brainstorming, and general knowledge discussions. Its conversational abilities shine in creating engaging interactions, though it may not be as effective for highly technical tasks.

Key Features:

  • Emotional Support: Focuses on personality and engagement.
  • Open-Ended Conversations: Ideal for casual and brainstorming discussions.

6. Grok by xAI

Developed by Elon Musk’s xAI, Grok provides real-time access to X (formerly Twitter), offering humor and analysis on current events. While it’s great for creative problem-solving and entertaining conversations, its reliance on X for data can introduce bias, making it less suitable for some professional settings.

Key Features:

  • Real-Time Information: Access to up-to-date information from X.
  • Distinct Personality: Known for its humor and engaging style.

7. Meta AI

Meta AI encompasses a range of models and tools developed by Meta, including language, vision, and speech models. Open-source offerings like LLaMA demonstrate Meta’s versatility in natural language processing and computer vision. Despite its broad capabilities, Meta’s AI offerings can feel less cohesive and raise privacy concerns.

Key Features:

  • Versatile Models: Includes tools for various AI applications.
  • Open-Source Options: Features models like LLaMA for experimentation.

8. Poe by Quora

Poe by Quora allows users to access multiple AI models within a single chat interface. It’s designed for users to compare outputs and create custom bots, making it a playground for exploring AI capabilities. While it offers a unique platform for experimentation, its reliance on third-party models may limit its depth compared to dedicated tools.

Key Features:

  • Multi-Model Access: Compare and experiment with various AI models.
  • User-Friendly Interface: Easy to navigate and explore different AI capabilities.

Conclusion

Generative AI has moved beyond being just a buzzword to become an integral tool in our daily lives, aiding in everything from content creation to problem-solving. Whether you’re looking for an AI assistant to enhance productivity, support creative endeavours, or provide emotional support, there’s a range of tools available to suit your needs. Each AI model has its own strengths and potential drawbacks, so it’s worth exploring which one aligns best with your specific requirements.

Installing WINISIS on current 32-Bit or 64-Bit versions of Windows

Introduction:

Winisis is a software developed by UNESCO (United Nations Educational, Scientific and Cultural Organization) for managing and retrieving information stored in textual databases. It is a Windows-based version of the CDS/ISIS software, widely used in libraries, documentation centres, and similar institutions for creating and maintaining bibliographic databases.

Winisis is different from a relational database management system (RDBMS). It is based on a text-oriented database model. It uses the CDS/ISIS (Computerized Documentation Service/Integrated Set of Information Systems) data model, which is designed to handle bibliographic and textual data rather than the structured data typically managed by relational databases. Data is stored in a format that consists of records, fields, and subfields, but it does not support the relational model’s tables, rows, and columns with defined relationships and constraints. This makes Winisis particularly suited for managing unstructured or semi-structured textual information, such as bibliographic records in libraries and documentation centres, rather than for applications requiring complex relational data handling.
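
To illustrate the difference, the sketch below shows a CDS/ISIS-style record as nested fields and subfields rather than as rows in a table (plain Python; the field tags and subfield codes are chosen for illustration, not taken from a real field definition table):

```python
# A record as repeatable fields, each with subfields -- unlike a fixed
# relational row, fields may be absent, repeated, or vary in structure.
record = {
    24: [  # an illustrative "title" field
        {"a": "Five Laws of Library Science", "b": "2nd ed."},
    ],
    70: [  # an illustrative "author" field with two occurrences
        {"a": "Ranganathan, S. R."},
        {"a": "Sample, Author"},
    ],
}

for tag, occurrences in record.items():
    for occ in occurrences:
        subfields = "".join(f"^{code}{value}" for code, value in occ.items())
        print(f"{tag}: {subfields}")
```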

Key Features of Winisis:

  1. Database Management: Allows for the creation, updating, and maintenance of textual and bibliographic databases.
  2. Data Retrieval: Provides powerful search capabilities, including boolean searches, to retrieve information efficiently.
  3. User-Friendly Interface: Designed to be easy to use with a graphical interface suitable for Windows environments.
  4. Flexible Data Entry: Supports customisable data entry worksheets tailored to the specific needs of different databases.
  5. Multilingual Support: Capable of handling multiple languages, making it suitable for international use.
  6. Import/Export Functionality: Facilitates the exchange of data with other software systems through import/export features.
  7. Customization: Allows for various levels of customization in terms of data structure, search formats, and display formats.

Legacy Software:

Unfortunately, Winisis is no longer actively supported or updated by UNESCO. The software, built as a 16-bit Windows application, has not seen any updates in the last two decades. The lack of official updates means that it is no longer compatible with newer operating systems or technologies. Users looking for alternatives often consider other library and information management systems such as Koha, Evergreen, or other Integrated Library Systems (ILS) that are actively maintained and offer more modern features.

Continued use of Winisis:

People still continue to use Winisis for several reasons:

  1. Legacy Data: Many institutions have extensive databases in Winisis, making migration costly and complex.
  2. Familiarity: Long-term users are accustomed to Winisis, reducing retraining needs.
  3. Specific Features: Tailored features for bibliographic management make it irreplaceable for some.
  4. Cost: As a free tool provided by UNESCO, it remains a cost-effective option for resource-limited institutions.
  5. Teaching in Library Science: Winisis is still taught in some library science programs to provide historical context and foundational knowledge in database management.
  6. Low Resource Requirement: Winisis runs efficiently on older hardware and operating systems.

Installation on Modern OS:

Installing Winisis on modern operating systems can be challenging due to its outdated software architecture. Here are some methods:

  1. Compatibility Mode: Run the installation file in compatibility mode for older versions of Windows (e.g., Windows XP or Windows 7).
  2. Virtual Machines: Use a virtual machine (VM) running an older version of Windows that supports Winisis. Software like VMware or VirtualBox can help set this up.
  3. Wine on Linux/Mac: For Linux or Mac users, use Wine to run Winisis, although compatibility can vary.

These methods help ensure that Winisis can run despite its lack of updates for modern systems.

Better Installation Methods:

Here I describe two better installation methods that I have tested myself. Depending on the machine architecture, I suggest the following two methods:

  1. NTVDM on 32-bit Windows 10.
  2. WINEVDM on 64-bit Windows.

NTVDM on 32-bit Windows 10:

This method uses the NTVDM [1] feature of Windows 10. NTVDM, or the NT Virtual DOS Machine, is a system component introduced in 1993 for all IA-32 editions of the Windows NT family (not included with 64-bit versions of the OS). This component allows the execution of 16-bit Windows applications on 32-bit Windows operating systems, as well as the execution of both 16-bit and 32-bit DOS applications. The approach is broadly similar to the older workaround of installing Winisis on Windows 2000, XP, and NT by placing a ctl3d.dll file in the Windows/System directory.

Steps:

  1. Mount the Winisis CD or ISO file. If the CD or ISO is not available, you may download the Winisis installation files [2].
  2. Explore files to reach the directory containing Install.exe.
  3. Double-click on Install.exe.
  4. Windows will pop up an alert: “An app on your PC needs the following Windows feature: NTVDM”, with the options “Install this feature” and “Skip this installation”.
  5. Select “Install this feature”. Windows will search for files and install the feature.
  6. The Winisis installation will now proceed. Accept the suggested default options, which are fine. Installation completes in a directory named “WINISIS”.
  7. Restart the system.
  8. Now explore the WINISIS directory and look for WISIS.EXE. Execute it to start up Winisis.
  9. If you get the error “Can’t run 16-bit Windows program …”, press OK to close WISIS.
  10. Download ctl3d.dll [3] file and place it in the Windows/System directory. Replace the existing file if any with the same name.
  11. WISIS should now work fine. Create a shortcut icon for WISIS and place it on the desktop.

WINEVDM on 64-bit Windows:

This method uses WINEVDM [4]. WineVDM, also known as otvdm, is an open-source compatibility layer and user-mode emulator for 64-bit Windows. Built on code from the Wine project, it plays the same role on 64-bit Windows that NTVDM plays on 32-bit Windows: running 16-bit Windows applications. This method has been tested to work on 64-bit versions of Windows 10 and Windows 11.

Steps:

  1. Download the Microsoft Visual C++ Redistributable [5] for the x86 architecture. Note that you need the x86 version [6] even though it will be installed on an x64 machine.
  2. Install the Microsoft Visual C++ Redistributable.
  3. Download the latest version of WINEVDM [4]. Extract the contents of the downloaded zip file and execute the install file.
  4. Mount Winisis CD or ISO file or download Winisis installation files [2].
  5. Explore files to reach the directory containing Install.exe.
  6. Double-click on Install.exe.
  7. Winisis will start installing and on completion there will be a directory named “WINISIS”.
  8. Now explore the WINISIS directory and look for WISIS.EXE.
  9. Create a shortcut icon for WISIS and place it on the desktop.
  10. Winisis should work fine.

REFERENCES/ LINKS:

  1. NTVDM and 16-bit app support. https://learn.microsoft.com/en-us/windows/compatibility/ntvdm-and-16-bit-app-support [Accessed 30th July 2024].
  2. Winisis Version 1.4 Installation Files. https://drive.google.com/file/d/1erLfII8k0o5M74c–IXJ5ZahSD0RpTIT/view?usp=sharing [Accessed 30th July 2024].
  3. CTL3D.DLL file. https://drive.google.com/file/d/1lcmAxDtr_YFq_YtWynrrDKMlcnuRgIdD/view?usp=sharing  [Accessed 30th July 2024].
  4. WINEVDM. https://github.com/otya128/winevdm/releases/tag/v0.9.0  [Accessed 30th July 2024].
  5. Microsoft Visual C++ Redistributable Version. https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist  [Accessed 30th July 2024].
  6. Microsoft Visual C++ Redistributable Version for X86 Architecture. https://aka.ms/vs/17/release/vc_redist.x86.exe  [Accessed 30th July 2024].