Why India Needs to Develop Its Own GPU to Lead in AI

Artificial Intelligence (AI) is transforming the world, reshaping industries, economies, and societies at an unprecedented pace. For India, a nation with a burgeoning tech ecosystem and ambitions to become a global AI powerhouse, the path to leadership in AI hinges on addressing a critical bottleneck: access to high-performance computing infrastructure, particularly Graphics Processing Units (GPUs). While India has made strides in AI research, software development, and talent cultivation, its reliance on foreign GPUs poses a significant challenge. Developing indigenous GPUs is not just a matter of technological self-reliance but a strategic necessity for India to unlock its AI potential and secure its place in the global tech race.

The Central Role of GPUs in AI

GPUs are the backbone of modern AI systems. Unlike traditional Central Processing Units (CPUs), GPUs are designed for parallel processing, making them exceptionally efficient for the computationally intensive tasks that underpin AI, such as training deep learning models, running simulations, and processing vast datasets. From natural language processing models like those powering chatbots to computer vision systems enabling autonomous vehicles, GPUs are indispensable.
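To make the parallelism point concrete, here is a minimal Python sketch, assuming PyTorch is installed: the same batched matrix multiplication that dominates deep-learning workloads runs on a GPU when one is available and falls back to the CPU otherwise. It is an illustration of the programming model, not a benchmark.

```python
# Minimal sketch (assumes PyTorch): run a deep-learning-style batched matrix
# multiplication on whatever accelerator is available, falling back to the CPU.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# A workload shaped like the core operation inside a neural-network layer.
a = torch.randn(8, 2048, 2048, device=device)
b = torch.randn(8, 2048, 2048, device=device)

start = time.time()
c = torch.bmm(a, b)           # millions of multiply-accumulates executed in parallel
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for the GPU to finish before reading the clock
print(f"Batched matmul took {time.time() - start:.3f} s")
```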

However, the global GPU market is dominated by a handful of players, primarily NVIDIA, AMD, and Intel, all based in the United States. These companies control the supply chain, set pricing, and dictate the pace of innovation. For a country like India, which is heavily investing in AI to address challenges in healthcare, agriculture, education, and governance, dependence on imported GPUs creates vulnerabilities in terms of cost, accessibility, and strategic autonomy.

The Case for Indigenous GPU Development

  1. Reducing Dependency on Foreign Technology
    India’s AI ambitions are constrained by its reliance on foreign GPUs. Supply chain disruptions, geopolitical tensions, or export restrictions could limit access to these critical components, hampering AI development. For instance, recent global chip shortages exposed the fragility of depending on foreign semiconductor supply chains. By developing its own GPUs, India can achieve technological sovereignty, ensuring that its AI ecosystem is not at the mercy of external forces.
  2. Cost Efficiency for Scalability
    GPUs are expensive, and their costs can be prohibitive for startups, research institutions, and small enterprises in India. Importing high-end GPUs involves significant expenses, including taxes and logistics, which drive up the cost of AI development. Indigenous GPUs, tailored to India’s needs and produced locally, could be more cost-effective, enabling broader access to high-performance computing for academia, startups, and government initiatives. This democratization of access would foster innovation and accelerate AI adoption across sectors.
  3. Customization for India-Specific Use Cases
    India’s AI challenges are unique. From multilingual natural language processing for its diverse linguistic landscape to AI-driven solutions for agriculture in resource-constrained environments, India’s needs differ from those of Western markets. Foreign GPUs are designed for generalized, high-end applications, often with a one-size-fits-all approach. Developing homegrown GPUs allows India to create hardware optimized for its specific AI use cases, such as low-power chips for edge computing in rural areas or specialized architectures for processing Indian language datasets.
  4. Boosting the Semiconductor Ecosystem
    Building GPUs would catalyze the growth of India’s semiconductor industry, which is still in its nascent stages. It would require investment in chip design, fabrication, and testing, creating a ripple effect across the tech ecosystem. This would not only create high-skill jobs but also position India as a player in the global semiconductor market. Programs like the India Semiconductor Mission (ISM) and partnerships with global foundries could be leveraged to support GPU development, fostering innovation and reducing reliance on foreign manufacturing.
  5. National Security and Strategic Autonomy
    AI is increasingly a matter of national security, with applications in defense, cybersecurity, and intelligence. Relying on foreign hardware raises concerns about potential vulnerabilities, such as backdoors or supply chain manipulations. Indigenous GPUs would give India greater control over its AI infrastructure, ensuring that sensitive applications are built on trusted hardware. This is particularly critical as India expands its use of AI in defense systems, smart cities, and critical infrastructure.

Challenges in Developing Indigenous GPUs

While the case for India developing its own GPUs is compelling, the path is fraught with challenges. Designing and manufacturing GPUs requires significant investment in research and development (R&D), access to advanced fabrication facilities, and a skilled workforce. The global semiconductor industry is highly competitive, with established players benefiting from decades of expertise and economies of scale.

India also faces a talent gap in chip design and fabrication. While the country produces millions of engineering graduates annually, specialized skills in semiconductor design are limited. Bridging this gap will require targeted education and training programs, as well as collaboration with global leaders in the field.

Moreover, building a GPU is not just about hardware. It requires an ecosystem of software, including drivers, frameworks, and developer tools, to make the hardware usable for AI applications. NVIDIA’s dominance, for example, stems not only from its hardware but also from its CUDA platform, which has become a de facto standard for AI development. India would need to invest in a robust software ecosystem to complement its GPUs, ensuring seamless integration with popular AI frameworks like TensorFlow and PyTorch.
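To make this concrete, here is a short, hedged sketch of what framework integration buys a hardware vendor: in PyTorch, model code targets an abstract device, and it is the backend underneath (drivers, kernels, framework plug-in) that makes a given device string work. The idea of an indigenous GPU exposing its own backend is purely hypothetical here; the code simply falls back to CUDA or the CPU.

```python
# Sketch (assumes PyTorch): training code is written against an abstract device,
# so new hardware becomes usable only when a framework backend exists for it.
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    # A hypothetical indigenous-GPU backend would appear here as just another
    # device string; today the realistic choices are "cuda" or "cpu".
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
x = torch.randn(32, 512, device=device)
loss = model(x).sum()
loss.backward()  # identical training code, whatever hardware sits underneath
print(f"Ran forward/backward on: {device}")
```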

Steps Toward Indigenous GPU Development

  1. Government Support and Investment
    The government should prioritize GPU development under initiatives like the India Semiconductor Mission. Subsidies, grants, and tax incentives for R&D in chip design and manufacturing can attract private investment and foster innovation. Public-private partnerships with companies such as Tata and Reliance could accelerate progress.
  2. Collaboration with Global Players
    While the goal is self-reliance, India can benefit from partnerships with global semiconductor leaders. Technology transfer agreements, joint ventures, and collaborations with companies like TSMC or Intel could provide access to cutting-edge fabrication processes and expertise.
  3. Building a Skilled Workforce
    India must invest in education and training programs focused on semiconductor design, AI hardware, and related fields. Partnerships with institutions like IITs and IISc, as well as international universities, can help develop a pipeline of talent. Initiatives like the Chips to Startup (C2S) program can be expanded to include GPU-specific training.
  4. Fostering an Ecosystem for Innovation
    India should create a supportive environment for GPU development by building a robust software ecosystem, encouraging open-source contributions, and supporting startups working on AI hardware. Hackathons, innovation challenges, and incubators focused on semiconductor design can spur grassroots innovation.
  5. Leveraging Existing Strengths
    India’s strength in software development and IT services can be a foundation for building GPU-compatible software stacks. Companies like Wipro, Infosys, and startups in the AI space can contribute to developing frameworks and tools that make indigenous GPUs viable for AI applications.

The Road Ahead

Developing indigenous GPUs is a bold but necessary step for India to achieve its AI ambitions. It aligns with the broader vision of “Atmanirbhar Bharat” (Self-Reliant India) and positions the country as a global leader in technology. While the journey will be challenging, the rewards are immense: reduced dependency, cost efficiency, customized solutions, and enhanced national security.

India has already shown its ability to leapfrog in technology, from UPI in digital payments to Aadhaar in biometric identification. By investing in GPU development, India can take a similar leap in AI, creating a future where its technological innovations are not just powered by India but also made in India. The time to act is now—India’s AI revolution depends on it.

How AI Tools Revolutionize Academic Research: Top 10 Free Tools to Boost Your Workflow

Artificial Intelligence (AI) is transforming academic research by streamlining repetitive tasks, uncovering insights, and enhancing productivity across every stage of the research process. From conducting literature reviews to analyzing data and polishing manuscripts, AI tools save time and improve efficiency. In this blog post, we explore how AI tools can elevate your research and highlight 10 AI tools with free plans that support various research stages, complete with descriptions and links to get you started.


How AI Tools Enhance Academic Research

AI tools empower researchers by automating and optimizing key research tasks. Here’s how they help at different stages:

  • Literature Review: AI tools search vast academic databases, summarize papers, and identify connections between studies, making it easier to stay updated and find relevant sources.
  • Data Collection: Extract data from PDFs, texts, or online sources quickly, reducing manual effort.
  • Data Analysis: Analyze large datasets, identify patterns, and create visualizations with minimal coding.
  • Academic Writing: Improve clarity, grammar, and academic tone while generating outlines or paraphrasing content.
  • Citation Management: Automate citation formatting and reference organization across styles like APA or MLA.
  • Collaboration: Organize research materials, visualize citation networks, and share findings with teams.
  • Translation: Break language barriers by translating papers in real-time for global accessibility.

Now, let’s dive into the top 10 AI tools with free plans that can supercharge your academic research.


Top 10 Free AI Tools for Academic Research

1. Semantic Scholar

  • What It Does: A powerful AI-driven search engine for accessing over 200 million academic papers. It generates concise summaries, recommends related studies, and highlights connections between papers, making it perfect for literature reviews (a short API sketch follows this list).
  • Free Plan: Completely free with unlimited searches and access to open-access papers (paywalled papers depend on your subscriptions).
  • Best For: Finding and summarizing relevant research quickly.
  • Website: semanticscholar.org
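For readers who prefer to script their searches, the sketch below uses what is, to the best of my knowledge, Semantic Scholar’s public Academic Graph API; treat the endpoint, field names, and rate limits as assumptions to verify against the official documentation. It requires the Python requests package.

```python
# Hedged sketch: query the (assumed) Semantic Scholar Academic Graph API.
import requests

def search_papers(query: str, limit: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit, "fields": "title,year,abstract"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in search_papers("multilingual natural language processing"):
    print(paper.get("year"), "-", paper.get("title"))
```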

2. Elicit

  • What It Does: An AI research assistant that searches over 125 million papers, automates literature reviews, summarizes findings, and extracts data. It’s ideal for empirical research but less suited for theoretical studies.
  • Free Plan: Free access to search, summarization, and data extraction with no strict limits (verify results due to ~90% accuracy).
  • Best For: Streamlining literature reviews and data extraction.
  • Website: elicit.com

3. Research Rabbit

  • What It Does: A free tool that creates visual citation networks, suggests related papers, and organizes research collections. It’s great for exploring research connections and collaborating with peers.
  • Free Plan: Fully free with unlimited collections and paper additions (note: the interface may take some getting used to).
  • Best For: Organizing research and discovering related studies.
  • Website: researchrabbit.ai

4. Zotero

  • What It Does: A reference management tool that uses AI to suggest papers, organize citations, and generate bibliographies in various formats. It integrates seamlessly with word processors.
  • Free Plan: Free with unlimited reference storage; cloud syncing limited to 300 MB (expandable with paid plans).
  • Best For: Managing citations and references effortlessly.
  • Website: zotero.org

5. Scholarcy

  • What It Does: Summarizes research papers, articles, and book chapters into flashcards, highlighting key findings, limitations, and comparisons. It cuts screening time by up to 70%.
  • Free Plan: Summarize up to three documents per day; includes a browser extension for open-access and subscription-based papers.
  • Best For: Quickly digesting complex papers.
  • Website: scholarcy.com

6. ChatPDF

  • What It Does: Upload PDFs and interact with them via a chatbot to extract information or summarize content. It’s a time-saver for understanding dense research papers.
  • Free Plan: Upload two PDFs per day (up to 120 pages each) and ask 20 questions daily.
  • Best For: Extracting specific data from PDFs.
  • Website: chatpdf.com

7. Paperpal

  • What It Does: An AI writing assistant tailored for academia, offering grammar checks, paraphrasing, citation generation, and journal submission checks. It also supports literature searches and PDF analysis.
  • Free Plan: Basic grammar and style suggestions, 10 AI generations daily, and limited research features.
  • Best For: Polishing academic writing and translation.
  • Website: paperpal.com

8. NotebookLM

  • What It Does: A Google-powered tool that lets you upload up to 50 documents per notebook and generates summaries, audio overviews, or study guides. It’s perfect for organizing research materials.
  • Free Plan: Free with up to 100 notebooks, 50 sources per notebook, and daily limits on queries and audio summaries.
  • Best For: Summarizing and organizing research notes.
  • Website: notebooklm.google

9. AI2 Paperfinder

  • What It Does: Developed by the Allen Institute, this tool provides access to 8 million full-text papers and 108 million abstracts. It ranks search results by relevancy and exports citations in BibTeX or other formats.
  • Free Plan: Fully free with no limits on searches or citation exports.
  • Best For: Comprehensive literature searches and citation exports.
  • Website: paperfinder.allenai.org

10. DeepSeek

  • What It Does: A free large language model that answers research queries and synthesizes information. While not as advanced as premium models, it’s a solid option for general research assistance.
  • Free Plan: Fully free with no specific query limits (performance may vary for complex tasks).
  • Best For: General research queries on a budget.
  • Website: deepseek.com

Tips for Using AI Tools in Research

  • Verify Outputs: Tools like Elicit and ChatPDF may have errors (~90% accuracy for Elicit). Always cross-check results with original sources.
  • Combine Tools: Free plans have limitations (e.g., Scholarcy’s three-document cap). Use multiple tools to cover all research needs.
  • Maintain Integrity: AI should enhance, not replace, your critical thinking. Use these tools to boost productivity while ensuring originality.
  • Explore Paid Plans: If you hit free plan limits, consider paid upgrades for heavy use or advanced features.

Conclusion

AI tools are game-changers for academic research, helping you save time, uncover insights, and produce high-quality work. The 10 free tools listed above cover everything from literature reviews to citation management, making them accessible for students, researchers, and academics on a budget. Start exploring these tools today to streamline your research process and focus on what matters most—advancing knowledge.

Have a favorite AI research tool or need help with a specific research task? Share your thoughts in the comments below!

What Would S. R. Ranganathan Do in the Age of Generative AI if He Were Alive?

S.R. Ranganathan, the pioneering Indian librarian and mathematician, is best known for his Five Laws of Library Science and the development of the Colon Classification system. His work emphasised organising knowledge for accessibility, relevance, and user-centricity. If he were alive today, his approach to generative AI would likely be shaped by his knowledge organisation principles, focus on serving users, and innovative mindset. While it’s impossible to know exactly what he would have done, we can make informed speculations based on his philosophy and contributions.

  1. Applying the Five Laws to Generative AI
    Ranganathan’s Five Laws of Library Science (1931)—“Books are for use,” “Every reader his/her book,” “Every book its reader,” “Save the time of the reader,” and “The library is a growing organism”—could be adapted to generative AI systems, which are increasingly used to organize and generate knowledge. Here’s how he might have approached generative AI:
    Books are for use: Ranganathan would likely advocate for generative AI to be designed with practical utility in mind, ensuring it serves real-world needs, such as answering queries, generating content, or solving problems efficiently. He might push for AI interfaces that are intuitive and accessible to all users, much like a library’s catalog.
    Every reader his/her book: He would likely emphasise personalisation in AI systems, ensuring that generative AI delivers tailored responses to diverse users. For example, he might explore how AI could adapt outputs to different languages, cultural contexts, or knowledge levels, aligning with his goal of meeting individual user needs.
    Every book its reader: Ranganathan might focus on making AI-generated content discoverable and relevant, developing classification systems or metadata frameworks to organise AI outputs so users can easily find what they need. He could propose taxonomies for AI-generated text, images, or code to enhance retrieval.
    Save the time of the reader: He would likely prioritise efficiency, advocating for AI systems that provide accurate, concise, and relevant outputs quickly. He might critique models that produce verbose or irrelevant responses and push for prompt engineering techniques to streamline interactions.
    The library is a growing organism: Ranganathan would recognise generative AI as a dynamic, evolving system. He might encourage continuous updates to AI models, integrating new data and user feedback to keep them relevant, much like a library evolves with new books and technologies.
  2. Developing Classification Systems for AI Outputs
    Ranganathan’s Colon Classification system was a faceted, flexible approach to organising knowledge, allowing for complex relationships between subjects. He might apply this to generative AI by:
    Creating a taxonomy for AI-generated content: He could develop a faceted classification system to categorize outputs like text, images, or code based on attributes such as topic, format, intent, or audience. For example, a generated article could be tagged with facets like “subject: science,” “tone: formal,” or “purpose: education.” (A minimal code sketch of this faceted approach appears after this list.)
    Improving information retrieval: Ranganathan might work on algorithms to enhance the discoverability of AI-generated content, ensuring users can navigate vast outputs efficiently. He could integrate his classification principles into AI search systems, making them more precise and context-aware.
    Addressing ethical concerns: He would likely consider the ethical implications of AI-generated content, such as misinformation or bias, and propose frameworks to tag or filter outputs for reliability and fairness, aligning with his user-centric philosophy.
  3. Advancing AI for Libraries and Knowledge Management
    As a librarian, Ranganathan would likely focus on how generative AI could enhance library services and knowledge management:
    AI-powered library assistants: He might advocate for AI chatbots to assist patrons in finding resources, answering queries, or recommending materials, saving librarians’ time and improving user experience. For example, an AI could use natural language processing to interpret complex research queries and suggest relevant books or articles.
    Automating cataloguing: Ranganathan could explore generative AI for automating metadata creation or cataloguing, using models to summarise texts, extract keywords, or classify resources according to his Colon Classification system. This would align with his goal of saving time and improving access.
    Preserving cultural knowledge: Given his work in India, he might use AI to digitise and generate summaries of regional texts, manuscripts, or oral traditions, making them accessible globally while preserving cultural context.
  4. Ethical and Social Considerations
    Ranganathan’s user-focused philosophy suggests he would be concerned with the ethical and societal impacts of generative AI, as noted in sources discussing AI’s risks like misinformation and job displacement. He might:
    Promote equitable access: He would likely advocate for open-source AI models or affordable tools to ensure generative AI benefits diverse populations, not just affluent institutions or countries.
    Address misinformation: Ranganathan might develop guidelines for libraries to educate users about AI-generated content, helping them distinguish reliable outputs from “hallucinations” or deepfakes.
    Mitigate job displacement: While recognising AI’s potential to automate tasks, he might propose training programs for librarians to adapt to AI-driven workflows, ensuring human expertise remains central.
  5. Innovating with Generative AI
    Ranganathan was an innovator, so he might experiment with generative AI to push boundaries in knowledge organisation:
    AI for creative knowledge synthesis: He could use AI to generate new insights by synthesising existing literature, creating summaries or interdisciplinary connections that human researchers might overlook.
    AI in education: Drawing from his focus on accessibility, he might develop AI tools to generate educational content tailored to different learning styles, supporting students and educators.
    Collaborative AI systems: He might propose collaborative platforms where AI and librarians work together, with AI handling data-intensive tasks and humans providing critical judgment, aligning with his belief in human-centric systems.
  6. Critiquing and Shaping AI Development
    Ranganathan’s analytical mindset suggests he would critically examine generative AI’s limitations, such as data dependence, bias, and lack of true creativity. He might:
    Push for transparency: Advocate for clear documentation of AI training data and processes, ensuring users understand how outputs are generated.
    Enhance AI explainability: Develop frameworks to make AI decisions more interpretable, helping users trust and verify generated content.
    Focus on sustainability: Given the environmental impact of AI training, he might explore energy-efficient models or advocate for sustainable practices in AI development.
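As a rough illustration of point 2 above, the Python sketch below models AI-generated items with independent facets and filters them by facet value, loosely in the spirit of faceted classification. The facet names and values are invented for illustration and are not drawn from Ranganathan’s actual schedules.

```python
# Illustrative sketch: faceted tagging and retrieval of AI-generated content.
from dataclasses import dataclass, field

@dataclass
class GeneratedItem:
    content: str
    facets: dict = field(default_factory=dict)  # e.g. {"subject": "science", ...}

def find(items, **wanted):
    """Return items whose facets match every requested facet value."""
    return [i for i in items if all(i.facets.get(k) == v for k, v in wanted.items())]

catalogue = [
    GeneratedItem("An explainer on photosynthesis",
                  {"subject": "science", "tone": "formal", "purpose": "education"}),
    GeneratedItem("A light-hearted poem about libraries",
                  {"subject": "literature", "tone": "informal", "purpose": "entertainment"}),
]

for item in find(catalogue, subject="science", purpose="education"):
    print(item.content)
```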

Conclusion
If S.R. Ranganathan were alive today, he would likely embrace generative AI as a tool to enhance knowledge organisation and accessibility while critically addressing its ethical and practical challenges. He would adapt his Five Laws to AI, develop classification systems for AI outputs, and leverage AI to improve library services and education. His focus would remain on serving users, ensuring equity, and advancing knowledge management in an AI-driven world. His innovative spirit and user-centric philosophy would make him a key figure in shaping generative AI’s role in libraries and beyond.

Chat with PDF files: AI Tools to Ask Questions to PDFs for Summaries and Insights

In today’s digital world, we are inundated with information, much of it locked away in PDF documents. Whether you are a student combing through research papers, a professional analysing detailed reports, or someone simply trying to extract crucial information from a large PDF, you’ve likely felt overwhelmed. But what if I told you that you could actually chat with those PDFs? Thanks to recent advancements in AI, this once far-fetched idea is now a reality.

The Power of AI in Document Analysis

AI-powered tools are transforming how we engage with PDFs, allowing us to swiftly access information, summarise content, and even query documents directly. These tools combine several cutting-edge technologies:

  1. Text Extraction: Utilising Optical Character Recognition (OCR) for scanned documents and PDF parsing libraries for digital PDFs.
  2. Natural Language Processing (NLP): AI analyses the extracted text to grasp content, structure, and context.
  3. Entity Recognition: Identifies specific entities such as names, dates, and organisations.
  4. Chat Integration: AI generates responses based on user queries and the document’s content (a generic sketch of this pipeline appears below).
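To show how these pieces fit together, here is a generic Python sketch of the extraction-and-retrieval part of the pipeline, assuming the pypdf package is installed. It is not how any of the tools listed below work internally, and the final language-model step is only indicated in a comment.

```python
# Generic sketch: extract text from a PDF, chunk it, and retrieve the chunk that
# best matches a question. A real "chat with PDF" tool would then pass the chunk
# and the question to a language model to produce a conversational answer.
from pypdf import PdfReader

def load_chunks(path: str, chunk_size: int = 1000) -> list[str]:
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def best_chunk(chunks: list[str], question: str) -> str:
    terms = set(question.lower().split())
    return max(chunks, key=lambda c: sum(t in c.lower() for t in terms))

chunks = load_chunks("paper.pdf")   # any local PDF file
context = best_chunk(chunks, "What dataset was used in the experiments?")
print(context[:300])                # an LLM call would turn this context into an answer
```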

Top AI Tools for PDF Interaction

Let’s explore some of the leading tools in this field:

  1. ChatPDF

ChatPDF allows you to upload any PDF and ask questions about its content. Ideal for textbooks, research papers, or business documents, it quickly generates answers based on the data within the PDF. It’s also available as a plugin within ChatGPT, making it even more accessible.

  2. PDF.ai

PDF.ai specialises in multi-language PDF interaction, making it perfect for users working across different languages. It enables dynamic conversations with documents, breaking down language barriers in document analysis.

  3. GPT-PDF by Humata

Built on GPT technology, this tool offers deep interaction with complex files like reports or whitepapers. It’s particularly useful for users needing to analyse and generate insights from technical documents.

  4. Ask Your PDF

Ask Your PDF stands out with its powerful semantic search capability, excelling at analysing multiple documents simultaneously. This makes it an excellent choice for comprehensive research projects that require synthesising information from various sources.

  5. Adobe Acrobat AI Assistant

Integrated into the widely used Adobe Acrobat, this AI assistant enhances document interaction while retaining Acrobat’s traditional editing capabilities. It’s a great option for users already familiar with the Adobe ecosystem.

  6. PDFgear (Open-Source Option)

For those who prefer open-source solutions, PDFgear offers notable advantages:

  • Its open-source framework ensures transparency and customisation.
  • It supports interactions with multiple PDF files in a single session.
  • It is compatible with various AI backends like OpenAI and Anthropic.
  • Local deployment options provide greater privacy and security.
  • Available through both a web interface and a command-line option.

The Future of Document Interaction

These AI-powered PDF tools are just the beginning. As natural language processing and machine learning technologies continue to evolve, we can expect even more advanced document interaction capabilities. Imagine AI assistants that not only answer questions but also provide personalised insights, generate summaries tailored to your needs, or even create new documents based on the information contained within your PDFs.

Conclusion

The days of tediously scrolling through lengthy PDFs or relying solely on basic search functions are behind us. With these AI tools, we are entering an era where documents become interactive, responsive resources. Whether you’re a student, researcher, professional, or anyone who frequently works with PDFs, these tools can significantly streamline your workflow, making it easier than ever to extract and analyse information.

Have you tried any of these PDF tools? What’s been your experience? The world of AI-assisted document analysis is rapidly evolving, and it’s an exciting time to explore these new capabilities. As AI continues to push the boundaries of document interaction, the future promises even more innovative and powerful tools.

AI Tools in Education: Empowering Learning and Creativity

In recent years, artificial intelligence (AI) has made significant strides in various fields, and education is no exception. The integration of AI tools in education is revolutionising how we learn, teach, and collaborate. This blog post explores the exciting world of AI in education, focusing on different types of AI tools and their applications, as well as discussing the responsible use of this powerful technology.

Understanding Generative AI

Generative AI is a branch of artificial intelligence that focuses on creating new content such as text, images, audio, and video by learning from existing data. Unlike traditional AI, which primarily analyses and predicts outcomes based on input data, generative AI models can produce original outputs that mimic the characteristics of their training data.

This capability has led to significant interest and investment across various sectors, with tools like ChatGPT, DALL-E, and Midjourney demonstrating practical uses in text, image, audio, and video generation.

AI Tools for Various Educational Purposes

1. Chatbots and Text Generation

Several AI-powered chatbots and text-generation tools are available to assist students and educators:

  • ChatGPT: A versatile conversational AI for writing, coding, and tutoring.
  • Claude: Designed for various tasks with a focus on safety and ethical AI behaviour.
  • Google’s Gemini: A multimodal AI capable of understanding and generating text, images, videos, and audio.
  • Microsoft Copilot: Integrates into the Microsoft ecosystem for context-aware assistance.
  • Perplexity: An AI-powered search and answer engine.
  • Pi: An AI assistant designed for open-ended conversations and emotional support.
  • Grok: Unique AI with real-time access to X (formerly Twitter) for current events analysis.

For more specific text generation tasks, tools like HyperWrite, Smart Copy AI, Simplified AI Writer, Quillbot, and Copy.AI offer various features to improve writing efficiency and quality.
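Many of the chatbots above can also be used programmatically rather than through a web interface. As a hedged example, the sketch below calls a chat model via the openai Python package (v1.x), assuming an API key is set in the environment; the model name is an assumption and should be replaced with whatever model your account can access.

```python
# Hedged sketch: calling a chat model through the openai Python client (v1.x).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute one available to you
    messages=[
        {"role": "system", "content": "You are a patient tutor."},
        {"role": "user", "content": "Explain photosynthesis in two sentences."},
    ],
)
print(response.choices[0].message.content)
```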

2. Research Assistance

AI tools can significantly enhance the research process:

  • Consensus AI: Scans millions of scientific papers to find relevant ones based on your query.
  • Connected Papers and Litmaps: Visualize research areas and discover related papers.
  • Research Rabbit: Assists with literature mapping and paper recommendations.
  • Scite: Analyses and compares citations across research papers.
  • Open Knowledge Maps: Emphasizes open access content and provides research topic overviews.
  • Paper Digest: Helps in writing literature reviews by extracting essential information from papers.
  • PDFgear: Offers AI-powered PDF manipulation and information extraction.
  • Paperpal and Jenni: Provide specialized AI-powered writing assistance for academic and scientific writing.

3. Writing Improvement

  • Grammarly: A free AI writing assistant that provides personalized suggestions to enhance your text across various platforms.
  • Trinka: Designed specifically for academic and technical writing, focusing on clarity and precision.

4. Learning and Teaching

  • Summarize.tech: Uses AI to summarize lengthy YouTube videos, condensing hours of content into key points.
  • Quizlet: An AI-powered learning platform offering interactive flashcards, practice tests, and study activities.
  • Curipod: Helps teachers create engaging lessons with interactive activities.
  • ClassPoint: An all-in-one teaching and student engagement tool that works within PowerPoint.
  • Yippity: Converts information into various types of questions for learning and assessment.
  • Coursebox: An AI-powered platform for creating and managing online courses.
  • Goodgrade AI: Assists in writing essays, summarizing documents, and generating citations.

5. Collaboration Tools

  • Otter.ai: Transcribes speech in real-time and offers collaboration features for document sharing and management.
  • Notion: A versatile digital workspace with AI capabilities for organizing research materials, managing projects, and facilitating collaboration.

Responsible Use of AI in Education

While AI tools offer tremendous benefits, it is crucial to use them responsibly. Here are some key considerations:

1. Avoid Plagiarism: Always review AI-generated content carefully, rephrase ideas in your own words, and cite AI-generated content when necessary.

2. Maintain Academic Integrity: Use AI as a brainstorming tool, not a shortcut for entire projects. Be transparent about AI usage in your work.

3. Protect Privacy: Read terms of service, avoid sharing sensitive information, and use AI tools that prioritize user privacy.

4. Apply Human Oversight: AI is not always accurate and may lack context or nuance. Verify its output, especially in critical fields like law, medicine, or academia.

5. Set Boundaries: Find a balance where AI enhances your creativity but does not replace your effort. The goal is to learn and develop your own skills.

6. Follow Institutional Guidelines: Adhere to your institution’s policies on AI use to maintain integrity and trust.

Conclusion

Generative AI is transforming education by offering powerful tools for learning, research, writing, and collaboration. By using these tools responsibly and ethically, students and educators can unlock new levels of creativity and efficiency in their academic pursuits. As AI continues to evolve, it is exciting to imagine the future possibilities in education and beyond.

Remember, while AI can be an invaluable assistant, it is your unique human perspective, critical thinking, and creativity that will truly set your work apart. Embrace AI as a tool to enhance your abilities, not replace them, and you will be well-equipped to thrive in the AI-augmented future of education.

Exploring Generative AI: ChatGPT and Its Top Alternatives

Generative AI has become a transformative force in the tech world, reshaping how we interact with technology and create content. In this blog post, we’ll dive into what Generative AI is, spotlight ChatGPT, and review some of the leading alternatives available today.

What is Generative AI?

Generative AI is a specialized field within artificial intelligence dedicated to creating new content—be it text, images, audio, or video. Unlike traditional AI, which focuses primarily on analyzing existing data and making predictions, Generative AI models can produce original outputs that closely mirror the characteristics of the data they were trained on. This capability has sparked significant interest and investment across various industries, from content creation to scientific research.

Generative AI leverages sophisticated algorithms and vast datasets to generate content that is often indistinguishable from human-created work. This has led to a surge in applications, including AI-driven art, automated writing assistants, and even AI-generated music. As businesses and individuals seek innovative ways to harness these capabilities, the field continues to evolve rapidly.

ChatGPT: A Deep Dive

ChatGPT, developed by OpenAI, stands out as one of the most versatile and well-known generative AI tools. Launched initially as a conversational AI, ChatGPT excels in understanding and generating human-like text. Its applications range from writing assistance and coding support to tutoring and customer service.

Key Features of ChatGPT:

  • Versatility: Capable of handling a wide range of tasks, including text generation, problem-solving, and interactive conversation.
  • User-Friendly Interface: Designed for ease of use with a straightforward chat-based interface.
  • Regular Updates: OpenAI frequently updates ChatGPT to improve performance and expand its capabilities.
  • Free and Paid Versions: Offers both free and subscription-based models, providing various levels of access to features.

Despite its strengths, ChatGPT does have limitations. Users may encounter occasional inaccuracies, and there are ongoing concerns about data privacy and the ethical use of AI-generated content.

Top Alternatives to ChatGPT

As AI technology evolves, several competitors have emerged, offering unique features and capabilities. Here’s a look at some of the top alternatives to ChatGPT:

1. Claude by Anthropic

Claude is designed with a strong emphasis on safety and ethical AI behavior. It excels in handling complex, multi-step tasks, making it ideal for research, analysis, and creative writing. Claude’s thoughtful and nuanced responses set it apart, although it may not be as widely known or available as some of its competitors.

Key Features:

  • Safety and Ethics: Focuses on ethical AI behaviour and safety.
  • Complex Task Handling: Suitable for intricate tasks requiring detailed analysis.

2. Google’s Gemini

Google’s Gemini pushes the boundaries of AI with its multimodal capabilities, enabling it to understand and generate text, images, videos, and audio. Integrated into Google’s extensive ecosystem, Gemini is designed for advanced search, content creation, and scientific research. Its full potential is still being realized, but it offers powerful tools for diverse applications.

Key Features:

  • Multimodal Capabilities: Handles various types of media.
  • Google Integration: Leverages Google’s resources for enhanced functionality.

3. Microsoft Copilot

Microsoft Copilot integrates seamlessly into Microsoft products such as Word, Excel, and Visual Studio, providing context-aware assistance. It simplifies complex tasks, from document creation to data analysis, within the familiar Microsoft environment. However, its benefits are mainly limited to users within the Microsoft ecosystem and may require a subscription for full access.

Key Features:

  • Context-Aware Assistance: Provides help based on the context of the task.
  • Microsoft Integration: Works within Microsoft apps and tools.

4. Perplexity

Perplexity combines web search with AI-generated insights, offering a unique blend of search engine functionality and conversational AI. It provides transparency by including sources and supports a conversational interface for follow-up questions, making it ideal for quick research and fact-checking.

Key Features:

  • Transparency: Includes sources for AI-generated insights.
  • Conversational Interface: Allows for interactive follow-up questions.

5. Pi by Inflection AI

Pi is designed for open-ended conversations and emotional support. Emphasizing personality and relatability, Pi is a great companion for personal chats, brainstorming, and general knowledge discussions. Its conversational abilities shine in creating engaging interactions, though it may not be as effective for highly technical tasks.

Key Features:

  • Emotional Support: Focuses on personality and engagement.
  • Open-Ended Conversations: Ideal for casual and brainstorming discussions.

6. Grok by xAI

Developed by Elon Musk’s xAI, Grok provides real-time access to X (formerly Twitter), offering humor and analysis on current events. While it’s great for creative problem-solving and entertaining conversations, its reliance on X for data can introduce bias, making it less suitable for some professional settings.

Key Features:

  • Real-Time Information: Access to up-to-date information from X.
  • Distinct Personality: Known for its humor and engaging style.

7. Meta AI

Meta AI encompasses a range of models and tools developed by Meta, including language, vision, and speech models. Open-source offerings like LLaMA demonstrate Meta’s versatility in natural language processing and computer vision. Despite its broad capabilities, Meta’s AI offerings can feel less cohesive and raise privacy concerns.

Key Features:

  • Versatile Models: Includes tools for various AI applications.
  • Open-Source Options: Features models like LLaMA for experimentation.

8. Poe by Quora

Poe by Quora allows users to access multiple AI models within a single chat interface. It’s designed for users to compare outputs and create custom bots, making it a playground for exploring AI capabilities. While it offers a unique platform for experimentation, its reliance on third-party models may limit its depth compared to dedicated tools.

Key Features:

  • Multi-Model Access: Compare and experiment with various AI models.
  • User-Friendly Interface: Easy to navigate and explore different AI capabilities.

Conclusion

Generative AI has moved beyond being just a buzzword to become an integral tool in our daily lives, aiding in everything from content creation to problem-solving. Whether you’re looking for an AI assistant to enhance productivity, support creative endeavours, or provide emotional support, there’s a range of tools available to suit your needs. Each AI model has its own strengths and potential drawbacks, so it’s worth exploring which one aligns best with your specific requirements.

Installing WINISIS on current 32-Bit or 64-Bit versions of Windows

Introduction:

Winisis is software developed by UNESCO (United Nations Educational, Scientific and Cultural Organization) for managing and retrieving information stored in textual databases. It is a Windows-based version of the CDS/ISIS software, widely used in libraries, documentation centres, and similar institutions for creating and maintaining bibliographic databases.

Winisis is different from a relational database management system (RDBMS). It is based on a text-oriented database model. It uses the CDS/ISIS (Computerized Documentation Service/Integrated Set of Information Systems) data model, which is designed to handle bibliographic and textual data rather than the structured data typically managed by relational databases. Data is stored in a format that consists of records, fields, and subfields, but it does not support the relational model’s tables, rows, and columns with defined relationships and constraints. This makes Winisis particularly suited for managing unstructured or semi-structured textual information, such as bibliographic records in libraries and documentation centres, rather than for applications requiring complex relational data handling.
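As a rough, purely illustrative sketch of that record/field/subfield model (not the actual ISO 2709 master-file format Winisis uses on disk), the Python snippet below represents one record as numeric field tags whose values use CDS/ISIS-style “^” subfield delimiters; the specific tags and subfield codes are invented for illustration.

```python
# Illustrative sketch of a CDS/ISIS-style record: numeric field tags, repeatable
# fields, and "^x"-delimited subfields (tags and codes here are made up).
def parse_subfields(value: str) -> dict:
    """Split '^aMain title^bSubtitle' into {'a': 'Main title', 'b': 'Subtitle'}."""
    if "^" not in value:
        return {"_": value}                      # field with no subfields
    parts = [p for p in value.split("^") if p]
    return {p[0]: p[1:] for p in parts}

record = {                                        # one bibliographic record
    24: "^aManaging textual databases^ba practical guide",  # title-like field
    70: ["Smith, J.", "Rao, K."],                            # repeatable author field
    69: "^alibraries^bdocumentation",                        # keyword-like field
}

for tag, value in record.items():
    values = value if isinstance(value, list) else [value]
    for v in values:
        print(tag, parse_subfields(v))
```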

Key Features of Winisis:

  1. Database Management: Allows for the creation, updating, and maintenance of textual and bibliographic databases.
  2. Data Retrieval: Provides powerful search capabilities, including boolean searches, to retrieve information efficiently.
  3. User-Friendly Interface: Designed to be easy to use with a graphical interface suitable for Windows environments.
  4. Flexible Data Entry: Supports customisable data entry worksheets tailored to the specific needs of different databases.
  5. Multilingual Support: Capable of handling multiple languages, making it suitable for international use.
  6. Import/Export Functionality: Facilitates the exchange of data with other software systems through import/export features.
  7. Customization: Allows for various levels of customization in terms of data structure, search formats, and display formats.

Legacy Software:

Unfortunately, Winisis is no longer actively supported or updated by UNESCO. The software, built for 16-bit machines, has not seen any updates for the last two decades. The lack of official updates means that it is no longer compatible with newer operating systems or technologies. Users looking for alternatives often consider other library and information management systems such as Koha, Evergreen, or other Integrated Library Systems (ILS) that are actively maintained and offer more modern features.

Continued use of Winisis:

People still continue to use Winisis for several reasons:

  1. Legacy Data: Many institutions have extensive databases in Winisis, making migration costly and complex.
  2. Familiarity: Long-term users are accustomed to Winisis, reducing retraining needs.
  3. Specific Features: Tailored features for bibliographic management make it irreplaceable for some.
  4. Cost: As a free tool provided by UNESCO, it remains a cost-effective option for resource-limited institutions.
  5. Teaching in Library Science: Winisis is still taught in some library science programs to provide historical context and foundational knowledge in database management.
  6. Low Resource Requirement: Winisis runs efficiently on older hardware and operating systems.

Installation on Modern OS:

Installing Winisis on modern operating systems can be challenging due to its outdated software architecture. Here are some methods:

  1. Compatibility Mode: Run the installation file in compatibility mode for older versions of Windows (e.g., Windows XP or Windows 7).
  2. Virtual Machines: Use a virtual machine (VM) running an older version of Windows that supports Winisis. Software like VMware or VirtualBox can help set this up.
  3. Wine on Linux/Mac: For Linux or Mac users, use Wine to run Winisis, although compatibility can vary.

These methods help ensure that Winisis can run despite its lack of updates for modern systems.

Better Installation Methods:

Here I will explain better installation methods that I have tested myself. Depending on the machine architecture, I suggest one of the following two methods:

  1. NTVDM on 32-bit Windows 10.
  2. WINEVDM on 64-bit Windows.

NTVDM on 32-bit Windows 10:

This method uses the NTVDM [1] feature of Windows 10. NTVDM, or the NT Virtual DOS Machine, is a system component introduced in 1993 for all IA-32 editions of the Windows NT family (not included with 64-bit versions of the OS). This component allows the execution of 16-bit Windows applications on 32-bit Windows operating systems, as well as the execution of both 16-bit and 32-bit DOS applications. The procedure is very similar to installing Winisis on Windows 2000, XP, and NT, where a ctl3d.dll file is placed in the Windows/System directory.

Steps:

  1. Mount the Winisis CD or ISO file. If the Winisis CD or ISO is not available, you may download the Winisis installation files [2].
  2. Explore files to reach the directory containing Install.exe.
  3. Double-click on Install.exe.
  4. Windows will pop up an alert saying “An app on your PC needs the following Windows feature: NTVDM”, with the options “Install this feature” and “Skip this installation”.
  5. Select “Install this feature”. Windows will search for files and install the feature.
  6. Installation of Winisis will now proceed. Select the installation options; the suggested defaults are fine. This completes the Winisis installation in a directory named “WINISIS”.
  7. Restart the system.
  8. Now explore the WINISIS directory and look for WISIS.EXE. Execute it to start up Winisis.
  9. If you get the error “Can’t run 16-bit Windows program …”, press OK to close WISIS.
  10. Download the ctl3d.dll file [3] and place it in the Windows/System directory. Replace any existing file with the same name.
  11. WISIS should now work fine. Create a shortcut icon for WISIS and place it on the desktop.

WINEVDM on 64-bit Windows:

This method uses WINEVDM [4]. Otvdm, or WineVDM, is an open-source compatibility layer and user-mode emulator for 64-bit Windows, built on 16-bit Windows support code derived from Wine; it serves the same role on 64-bit Windows that NTVDM does on 32-bit Windows. This method has been tested to work on 64-bit versions of Windows 10 and Windows 11.

Steps:

  1. Download the Microsoft Visual C++ Redistributable [5] for the x86 architecture. Note that the x86 version [6] is required even though it will be installed on an x64 machine.
  2. Install the Microsoft Visual C++ Redistributable.
  3. Download the latest version of WINEVDM [4]. Extract the contents of the downloaded zip file and execute the install file.
  4. Mount Winisis CD or ISO file or download Winisis installation files [2].
  5. Explore files to reach the directory containing Install.exe.
  6. Double-click on Install.exe.
  7. Winisis will start installing and on completion there will be a directory named “WINISIS”.
  8. Now explore the WINISIS directory and look for WISIS.EXE.
  9. Create a shortcut icon for WISIS and place it on the desktop.
  10. Winisis should work fine.

REFERENCES/ LINKS:

  1. NTVDM and 16-bit app support. https://learn.microsoft.com/en-us/windows/compatibility/ntvdm-and-16-bit-app-support [Accessed 30th July 2024].
  2. Winisis Version 1.4 Installation Files. https://drive.google.com/file/d/1erLfII8k0o5M74c–IXJ5ZahSD0RpTIT/view?usp=sharing [Accessed 30th July 2024].
  3. CTL3D.DLL file. https://drive.google.com/file/d/1lcmAxDtr_YFq_YtWynrrDKMlcnuRgIdD/view?usp=sharing  [Accessed 30th July 2024].
  4. WINEVDM. https://github.com/otya128/winevdm/releases/tag/v0.9.0  [Accessed 30th July 2024].
  5. Microsoft Visual C++ Redistributable Version. https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist  [Accessed 30th July 2024].
  6. Microsoft Visual C++ Redistributable Version for X86 Architecture. https://aka.ms/vs/17/release/vc_redist.x86.exe  [Accessed 30th July 2024].

AI Tools for Scholarly Articles: Enhancing Research Efficiency

Introduction

Research is a vital but challenging part of academic work. It involves finding, reading, analysing, and synthesising large amounts of information from various sources. It also requires writing, editing, and proofreading papers that are clear, coherent, and convincing. These tasks can be time-consuming and tedious, leaving little room for creativity and innovation. Fortunately, artificial intelligence (AI) can help researchers overcome these challenges and enhance their research efficiency and quality. AI-powered tools can assist researchers with various aspects of their work, such as literature review, writing, editing, citation management, and more. However, there are some limitations and drawbacks of using such tools for academic articles. In this article, we will explore some of the best AI tools for scholarly articles and how they can benefit researchers.

AI Tools for Scholarly Articles

AI tools can help researchers with different stages of their research process, from finding relevant papers to writing and publishing them. Some benefits of using AI tools are:

– They can save time and effort by automating tedious and repetitive tasks, such as searching for papers, summarising them, extracting key information, and generating citations.

– They can improve the quality and accuracy of research by providing data-driven insights, feedback, and suggestions, as well as detecting and correcting errors in grammar, spelling, and style.

– They can enhance the creativity and originality of research by generating new ideas, content, and headlines, as well as finding hidden connections and patterns among research topics.

Examples of Popular AI Tools for Scholarly Articles

There are many AI tools available for scholarly articles, each with its own features and functions. Here are some examples of popular AI tools that researchers can use:

Semantic Scholar:

Academic search engine that helps researchers find relevant and trustworthy papers for their research topic. It also provides single-sentence summaries, similar paper recommendations, and citation evaluation for each paper.

Bit.ai:

Research organization tool that helps researchers store, manage, and collaborate on their online research sources. It supports various formats of content, such as blogs, articles, videos, infographics, and images.

Scholarcy:

Research summarization tool that helps researchers extract key points, figures, and references from academic articles. It also generates flashcards and outlines for each article to help researchers review and remember the main takeaways.

Scite:

Citation evaluation tool that helps researchers check the reliability and impact of citations in academic papers. It also provides smart citations that show how a paper has been supported or contradicted by other papers.

Trinka:

Research paper writing tool that helps researchers improve their grammar, style, and clarity in academic writing. It also provides feedback on the overall structure and flow of a paper.

CopyAI:

Helps researchers generate creative and engaging content for their academic papers, such as introductions, conclusions, headlines, and bullet points. It uses natural language generation to produce high-quality text based on the researcher’s input.

Rytr:

Helps researchers write faster and better by providing suggestions, templates, and feedback for their academic writing. It also allows researchers to choose from different writing styles and tones to suit their audience and purpose.

Elicit:

Helps researchers automate research workflows, such as finding relevant papers, summarizing takeaways, and extracting key information from academic articles. It uses language models to answer questions with research evidence.

HyperWrite:

Helps researchers improve their academic writing style by providing suggestions for word choice, sentence structure, and tone. It also analyses the readability and complexity of a paper.

Moonbeam:

AI writing assistant that helps users compose essays, stories, articles, blogs, and other long-form content.

Grammarly:

Popular tool for proofreading and editing academic papers. It detects and corrects errors in grammar, spelling, and punctuation, as well as provides suggestions for improving vocabulary, clarity, and tone.

Mendeley:

Helps researchers manage their citations and references for their academic papers. It integrates with PDF readers and Microsoft Word to detect citations and quickly generate a bibliography.

Zotero:

A free, easy-to-use tool to help researchers collect, organize, annotate, cite, and share research. It streamlines the citation process and supports various formats and styles.

IBM Watson Discovery:

Helps researchers analyse and extract the necessary information from scientific papers and provide an overview of the information, summarizing it in an understandable format.

ProWritingAid:

Helps researchers improve their writing skills by detecting and correcting spelling, grammar, and stylistic errors, as well as providing feedback on the readability and structure of a paper.

Paper Digest:

A tool that helps researchers summarize academic articles in a few sentences, highlighting the main points and contributions of each paper.

Consensus:

A search engine that provides evidence-based answers drawn from scientific research.

Benefits of AI Tools for Scholarly Articles

When it comes to writing a scholarly article, time is of the essence. Research, analysis, and drafting can take weeks or even months. Combine that with the pressure of deadlines, and you have a recipe for stress. AI tools can help alleviate some of the load by simplifying the process and increasing productivity.

One benefit of AI tools is time-saving: they can automate several tasks, such as citation management and proofreading, reducing the workload for researchers and helping them focus on creating high-quality content. Efficiency enhancement is another advantage, as AI-based writing assistance tools can suggest vocabulary and phrasing that improve the clarity and coherence of the content. Moreover, AI tools can aid in producing higher-quality research. For instance, automated literature reviews can analyse hundreds of articles and find relevant data more quickly and accurately than manual searches.

All in all, AI tools can significantly reduce the time and effort researchers put into scholarly articles while improving quality. They can be a valuable addition to any writer’s toolbox.

Potential Drawbacks and Limitations of AI Tools

When it comes to AI tools for scholarly articles, there are some potential drawbacks and limitations to keep in mind. For starters, AI tools may lack accuracy or specificity in their results. While they can certainly save time and energy in the research process, they may not always be able to provide the nuance or context that humans can.

Another limitation of AI tools is their limited ability to understand humour and sarcasm. This is a key skill in many scholarly articles, especially those in fields like literature or cultural studies. While an AI tool may be able to grasp the basics of the language, it may not truly understand the nuances of irony, satire, or other forms of humour.

Over-dependence on technology is also a potential drawback. Researchers who rely too heavily on AI may miss out on the benefits of human interpretation, analysis, and critical thinking. It’s important to remember that AI tools are meant to assist researchers, not replace them entirely. Finally, while AI tools can automate certain aspects of the research process, they may not always provide the level of insight and analysis that human researchers can.

Overall, while AI tools can be incredibly helpful in enhancing research efficiency, it’s important to keep these potential drawbacks and limitations in mind. By using these tools with care and consideration, researchers can get the most out of AI technology without sacrificing the nuance, context, and critical thinking that are so crucial to scholarly articles.

AI tools can help with a variety of tasks, including research, writing, and editing. They can be valuable resources for scholars, but it is important to use them with caution and to always apply your own critical thinking. AI is not yet mature enough to be fully reliable.

Factors to Consider While Choosing AI Tools for Scholarly Articles

The process of choosing the right AI tool for your research can be overwhelming, and several factors need to be considered to make an informed decision. First, take the tool's ease of use into account: no researcher wants to spend valuable time learning to operate a new tool, so a user-friendly interface is key. Secondly, consider the tool's integration with your existing tools; researchers prefer tools that work well alongside the ones they already use, without compatibility issues. Customer support is another factor, since researchers need technical help and assistance, and a provider that offers quality support is ideal. Customization options are equally important to ensure the tool fits your specific research needs. The accuracy and reliability of the AI tool are non-negotiable: its output must be precise, relevant, and consistent. Finally, bear in mind that most of these tools must be purchased for full functionality. In short, when choosing an AI tool for scholarly articles, weigh ease of use, integration with existing tools, customer support, customization options, accuracy and reliability, and cost. Overlooking one or more of these factors may leave you with tools that compromise research quality, efficiency, and, most importantly, time.

Conclusion

In summary, AI tools have greatly enhanced the research efficiency of scholars by providing automated literature reviews, keyword extraction and summarization tools, citation management tools, AI-based writing assistance and automated proofreading and editing tools. The benefits include time-saving, efficiency enhancement and higher-quality research. However, potential drawbacks such as lack of accuracy or specificity in results, limitations in understanding humour and sarcasm, over-dependence on technology and lack of interpretation and analysis need to be considered.

References

10 AI Tools to Make Academic Writing Smarter & Faster. (2022, December 12). Retrieved May 23, 2023, from SmartScale: https://smartscalemarketing.com/blog/academic-writing-ai-tools/

Bello, C. E. (2023, May 8). The best AI tools to power your academic research. Retrieved May 23, 2023, from Euronews: https://www.euronews.com/next/2023/05/08/best-ai-tools-academic-research-chatgpt-consensus-chatpdf-elicit-research-rabbit-scite

Eager, B. (2023, April 10). Academic Writing with AI Tools. Retrieved May 23, 2023, from Bron Eager: https://broneager.com/academic-writing-with-ai-tools

Elicit: The AI Research Assistant. (n.d.). Retrieved May 23, 2023, from https://elicit.org/

Golan, R., Reddy, R., Muthigi, A., & Ramasamy, R. (2023, February 24). Artificial intelligence in academic writing: a paradigm-shifting technological advance. Nature Reviews Urology. Retrieved May 23, 2023, from https://doi.org/10.1038/s41585-023-00746-x

Musa, Z. (2021, October 26). Can AI Write Academic Papers? 5 Key Things to Assess. Retrieved May 23, 2023, from PublishingState.com: https://publishingstate.com/can-ai-write-academic-papers-5-key-things/2021/

Portakal, E. (2023, May 16). Best AI tools for academic writing. Retrieved May 23, 2023, from TextCortex: https://textcortex.com/post/best-ai-tools-for-academic-writing

Tay, A. (2021, August 17). AI writing tools promise faster manuscripts for researchers. Retrieved May 23, 2023, from Nature Index: https://www.nature.com/nature-index/news-blog/artificial-intelligence-writing-tools-promise-faster-manuscripts-for-researchers

The 5 Best AI Tools for Postgraduate Research. (n.d.). Retrieved May 23, 2023, from Scholarcy: https://www.scholarcy.com/the-5-best-ai-tools-for-postgraduate-research/

Trinka. (2023, March 16). The Five Best AI Tools Every Scholar Should Be Using. Retrieved May 23, 2023, from Trinka: https://www.trinka.ai/blog/the-five-best-ai-tools-every-scholar-should-be-using/

Librarians for the AI Age

How to Embrace the Opportunities and Challenges of Artificial Intelligence

How will AI impact the future of librarianship?

Artificial Intelligence (AI) is transforming every aspect of our lives, from how we communicate, shop, work, and learn. But what does AI mean for librarians and library users? Will AI replace human librarians or enhance their services? How can librarians adapt to the changing needs and expectations of their users in the AI age?

AI is not a new concept. It has been around for about seven decades, but recent developments in generative AI are disruptive. Generative AI applications such as ChatGPT can create realistic texts, images, and sounds based on user inputs. These applications have made many people worried about losing their jobs to AI; in fact, some big corporations are already cutting their workforce. What about librarians? Will they lose their jobs? This is a complex and controversial question without a definitive answer. According to a survey conducted five years ago (Wood & Evans, 2018), librarians are not overly concerned about the threat of AI to their jobs. They believe that AI can enhance rather than replace their services, and that they can adapt to the changing needs and expectations of their users. However, some AI experts warn that the latest developments could make up to 80 percent of human jobs obsolete in the next few years. Librarians should therefore be prepared for the social and ethical implications of AI for their profession and society.

AI in Library Services.

AI can be used to automate tasks, improve efficiency, and provide new services to library users. Here are some specific examples of how AI is being used in libraries today:

Content indexing:

AI can automate the task of indexing documents, making it faster and more accurate. For example, the British Library uses AI to transcribe handwritten documents and make them searchable online.

Document matching:

AI can help users find relevant documents based on their queries or preferences.
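
To make the idea concrete, here is a minimal sketch of one common matching technique, ranking catalogue records against a user query by TF-IDF cosine similarity. It assumes the scikit-learn package is available, and the records and query are hypothetical examples rather than a description of any particular library system.

```python
# Minimal document-matching sketch: rank catalogue records against a query
# using TF-IDF vectors and cosine similarity (hypothetical example data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "Machine learning methods for text classification",
    "A history of public libraries in South Asia",
    "Deep learning for medical image analysis",
]
query = "using machine learning to classify documents"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(catalogue)   # one TF-IDF vector per record
query_vector = vectorizer.transform([query])        # vectorize the query the same way

# Higher cosine similarity means a closer match to the query.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, title in sorted(zip(scores, catalogue), reverse=True):
    print(f"{score:.2f}  {title}")
```

Production systems typically replace the TF-IDF step with learned embeddings, but the ranking idea is the same.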

Content summarization:

AI can generate concise summaries of long texts, which can help users decide whether to read them or not. For example, Iris.ai is a tool that can summarize scientific papers and provide key insights for researchers.
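
As a rough illustration of what such a tool does internally (this is not a description of Iris.ai itself), a pre-trained summarization model can be called in a few lines with the Hugging Face transformers package; the model name and sample abstract below are assumptions made only for this sketch.

```python
# Minimal text-summarization sketch using a pre-trained model via the
# Hugging Face transformers pipeline (model choice and input are illustrative).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

abstract = (
    "Artificial intelligence techniques are increasingly used in libraries to "
    "automate cataloguing, improve discovery, and support researchers. This "
    "article reviews recent applications, discusses ethical concerns around "
    "privacy and bias, and outlines the skills that library professionals "
    "will need in order to evaluate and deploy such systems responsibly."
)

# max_length / min_length bound the length (in tokens) of the generated summary.
result = summarizer(abstract, max_length=60, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```

The same pattern scales to batches of abstracts, which is how a library could offer summaries alongside search results.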

Quality of service:

AI can improve the user experience by providing personalized recommendations, feedback, and assistance. For example, the University of Michigan Library uses AI to create personalized reading recommendations for students based on their interests and preferences, and the New York Public Library uses AI to create virtual tours of its collections and answer user questions through chatbots.

Data analysis:

AI can help libraries make use of their machine-readable collections for research, discovery, or categorization purposes.

Knowledge management:

AI can help libraries organize, store, and integrate their knowledge resources more efficiently and effectively.

Research support:

AI can assist researchers in finding, analyzing, and synthesizing information from various sources. It can also help them with tasks such as citation management, plagiarism detection, and data visualization.

Operational efficiency:

AI can improve library operations by automating tasks such as cataloging, circulation, inventory management, and shelf-reading. It can also help optimize the use of space, resources, and energy. We will see even more innovative and exciting applications in libraries in the future.

Embrace AI as an Active Leader.

AI has the potential to perform routine tasks that now require a human being, which will free up librarians to offer the in-depth expertise that is essential for advanced research. However, AI also poses some social and ethical challenges for librarians and society. For instance, AI applications that rely on extensive data collection may compromise user privacy and equity. Moreover, AI may introduce biases and errors in the information it produces or processes. Therefore, librarians should be prepared for the implications of AI for their profession and society.

Artificial Intelligence is changing the information landscape. Librarians should not ignore the potential impact of AI, but rather embrace it as an opportunity to learn new skills and create new value for their communities. They should learn AI not merely as users but as active leaders, so they can better serve the coming generations. There are different ways that librarians can learn new skills for AI. Here are a few examples:

  • Take online courses or workshops on AI and Machine Learning.
  • Read books, articles, blogs, or reports on AI and ML.
  • Participate in professional development programs or communities on AI and ML.
  • Experiment with AI and ML tools and techniques.

Need for teaching AI to LIS students.

Students of library and information sciences (LIS) should study Artificial Intelligence. As it is rapidly changing the way libraries operate, students who are familiar with AI will be better prepared for the future. They will be able to use these technologies to make libraries more effective and efficient. They will also be able to develop new services that meet the needs of library users in the AI age.

University departments and other educational institutions offering Library and Information Sciences courses should start teaching AI, at least at the master's degree level. Unfortunately, the syllabus of LIS teaching institutions has remained largely the same over the years, apart from a few changes relating to digital libraries and allied technologies. The areas typically covered are: Introduction to Library and Information Sciences, Classification, Cataloguing, Management of Library and Information Centres, and Information Sources. Some institutions offer a course on Information and Communication Technologies, but it is too elementary.

Model course on Artificial Intelligence for Library Services.

To prepare students for the libraries of the future, a model course is outlined here. The course should introduce the concepts and applications of artificial intelligence (AI) and machine learning (ML) for library services. It should cover the basics of AI and ML, such as data processing, algorithms, models, evaluation, and ethics, as well as how AI and ML can be used to enhance various aspects of library services, such as collection management, information retrieval, user engagement, and knowledge organization. It must include both theoretical and practical components, with lectures, readings, assignments, and projects.

Students completing the course should be able to:

  • Explain the key concepts and principles of AI and ML
  • Identify the opportunities and challenges of using AI and ML in library services
  • Apply AI and ML tools and techniques to solve library problems
  • Evaluate the performance and impact of AI and ML solutions
  • Reflect on the ethical and social implications of AI and ML for libraries and society

 

Course outline:

1: Introduction to AI and ML

  • What are AI and ML? History, definitions, types, examples

  • How do AI and ML work? Data, algorithms, models, evaluation

  • Why use AI and ML in library services? Benefits, challenges, trends

2: Data Processing for AI and ML

  • What is data? Sources, formats, quality, preprocessing

  • How to handle data? Storage, management, analysis, visualization

  • What are the data issues? Privacy, security, bias, ethics

3: Algorithms and Models for AI and ML

  • What are algorithms and models? Concepts, categories, examples

  • How to choose algorithms and models? Criteria, comparison, selection

  • How to implement algorithms and models? Tools, frameworks, libraries

4: Information Retrieval with AI and ML

  • What is information retrieval? Concepts, processes, systems

  • How to improve information retrieval? Relevance ranking, query expansion, recommendation systems

  • How to evaluate information retrieval? Measures, methods, experiments

5: User Engagement with AI and ML

  • What is user engagement? Concepts, factors, strategies

  • How to enhance user engagement? Personalization, feedback, gamification, chatbots

  • How to measure user engagement? Metrics, techniques, tools

6: Knowledge Organization with AI and ML

  • What is knowledge organization? Concepts, systems, standards

  • How to facilitate knowledge organization? Classification, clustering, extraction, linking

  • How to assess knowledge organization? Quality, usability, interoperability

7: Project Presentations

  • Students present their final projects that apply AI and ML to a library problem of their choice.

 

Conclusion.

AI is a powerful technology that can bring both opportunities and challenges for librarianship. Librarians should embrace AI as a tool that can enhance their services and skills, rather than fear it as a threat that can replace them. Librarians should also educate themselves and their users about AI and its social impacts, and help them thrive in a society that uses AI more extensively. Universities and other institutions offering LIS courses need to restructure their courses to produce librarians for the AI Age.

References

Daniel. (2021, January 11). 7 ways artificial intelligence is changing libraries. Retrieved May 10, 2023, from IRIS.AI: https://iris.ai/academics/7-ways-ai-changes-libraries/

Hays, L. (2022, February 22). Artificial Intelligence in Libraries. Retrieved May 10, 2023, from https://lucidea.com/blog/artificial-intelligence-in-libraries/

IFLA FAIFE. (2020). IFLA Statement on Libraries and Artificial Intelligence. International Federation of Library Associations and Institutions (IFLA). Retrieved May 10, 2023, from https://repository.ifla.org/handle/123456789/1646

Northwestern University. Libraries. (n.d.). Using AI Tools in Your Research: A continually-updated guide on using AI tools like ChatGPT in your research: Librarians and Faculty. Retrieved May 10, 2023, from https://libguides.northwestern.edu/ai-tools-research/librarians

Omame, I., & Alex-Nmecha, J. (2020). Artificial Intelligence in Libraries. In Managing and Adapting Library Information Services for Future Users (pp. 120-44). IGI Global. doi:10.4018/978-1-7998-1116-9.ch008

Wood, B. A., & Evans, D. J. (2018, Jan – Feb). Librarians’ Perceptions of Artificial Intelligence and Its Potential Impact on the Profession. Computers in Libraries, 38(1), 26. Retrieved May 10, 2023, from https://www.infotoday.com/cilmag/jan18/Wood-Evans–Librarians-Perceptions-of-Artificial-Intelligence.shtml

 

Goodbye NIC, Hello World!

On 31st March 2023, I retired from National Informatics Centre on superannuation.

It has been a wonderful journey of my life with National Informatics Centre (NIC). This enjoyable journey lasted 31 years, 5 months, and 1 day. It was on 30th October 1991 that I joined NIC as Scientific Officer/Engineer-SB. Before that, I was employed at Indira Gandhi National Open University (IGNOU). On joining NIC, I was posted in its Bibliographic Informatics Division. However, the division was popularly known as the Indian MEDLARS Centre, or simply MEDLARS. In those wonderful days, it was one of the most prestigious and popular divisions of NIC, popular to the extent that some people even used to ask: what else does NIC do other than MEDLARS?! No wonder it was showcased to all VVIP visitors.

By the way, MEDLARS was not something that NIC created. It actually stood for the Medical Literature Analysis and Retrieval System of the US National Library of Medicine (NLM), which originated in 1964. It is core to medical and biomedical research, and practically no research can be initiated or completed without searching it. In the late 1980s, NIC and the Indian Council of Medical Research (ICMR) teamed up to provide information from MEDLARS to doctors and biomedical researchers in India. Thus, the Indian MEDLARS Centre was born in NIC. Information was retrieved online through ISD lines using dial-up modems from NLM. It was costly that way: database access was charged by NLM per second in dollars, plus there were ISD phone charges. So, special skills were required to retrieve the most “relevant” information within the shortest time frame in a cost-effective manner. Remember, this was the pre-Internet and pre-Google era. Planned and written “search strategies”, consisting of MeSH keywords (Medical Subject Headings from the NLM thesaurus) connected with Boolean operators, were required before reaching the access terminals. No wonder a few information specialists with a biomedical background, like me, were recruited by NIC to be part of its MEDLARS team. To provide affordable, country-wide access to the MEDLINE database (the online counterpart of MEDLARS), an MoU was signed with NLM and the database was acquired from NLM, US. It was hosted on a Unix server in the Division and connected to NICNET. Data used to come on tapes from the US, and it took days to convert and upload them to the server. Medical institutions across the country used to dial up to the nearest NIC District Centres to access the server through NICNET. The MEDLARS team hosted and updated the database, while also providing paid information retrieval services, connectivity, and training to end users and institutions.

As they say, “change is the only constant in life”. Internet technologies emerged and the web became popular; the internet became available to the Indian public on 15th August 1995. NLM made a web avatar of its chargeable MEDLINE, named it PubMed, and in June 1997 made it available free of charge on the web. So, our hosted database on NICNET was bound to die a natural death. Slowly, our paid information retrieval services also became meaningless, as end users with proper training could access PubMed freely, without any time constraint.

Humans are supreme in the animal kingdom because of their ability to adapt to the environment and situations. I, too, was changing and adapting to the emerging technologies. When I joined NIC, I had an academic qualification of M.Sc. in Anthropology and a professional qualification of M.L.I.S. (Master of Library and Information Science). I studied while in service and completed an M.S. (Software Systems) from BITS Pilani in 2001 with an outstanding grade. For my M.S. dissertation, I wrote a text search engine. It was used to sow the seeds of a national database named IndMED, which indexed Indian medical research journals on the lines of PubMed, adopting international standards. To supplement IndMED, we convinced medical journals to host their full-text articles for free access on a single platform, MedIND. We also experimented with an open access repository of medical research articles in the form of OpenMED@NIC, where individual authors could upload their articles and tag them with MeSH keywords. This experiment later laid the foundation of a new Digital Archiving Division using DSpace. These initiatives were well received by the medical community, both in India and abroad.

Good times fly, and the once-darling Indian MEDLARS Centre was no longer relevant in NIC. Come March 2009, it was formally shut down. However, IndMED and MedIND continued under my leadership with the support of ICMR funds.

I had been actively involved with the medical community, even becoming a life member of the Indian Association for Medical Informatics (IAMI). I was elected Executive Editor (2007-10) of its official journal, the Indian Journal of Medical Informatics, and also served as an Executive Member of the association.

From January 2017 until retirement, I headed the Programme Management and Parliamentary Matters Section. It dealt with parliamentary matters related to NIC, such as Questions, Assurances, and Parliamentary Committees, monitored the progress of NIC projects and services, and provided reports, information, and presentations to higher authorities. I won't be wrong if I describe the section as an extended DG Office.

I enjoyed my stay at NIC and got promotions along the way. Login was as Scientific Officer/Engineer-SB and logout is as Scientist-G / Deputy Director General. I will miss the wonderful colleagues and the environment.

It’s time to say Goodbye NIC – but I think goodbyes are sad and I’d rather say hello.

Hello World!!!