Research Bridge
A Snapshot of GenAI Tools for Research

The landscape of AI-powered research tools is booming, and keeping track of new developments can feel overwhelming. The terms “AI-powered” and, increasingly, “agentic AI” are used everywhere, but the actual role the AI plays varies wildly. For example, a large language model (LLM) that simply translates your natural language query into a Boolean search string is marketed as “AI-powered”, just like an LLM that reads across dozens of papers to synthesize a comprehensive report. Similarly, a rigid, predefined question-and-answer tool might be branded as “agentic”, just like a truly autonomous system that iteratively searches databases, evaluates sources, and refines its own queries. Many tools combine several of these functions into a single workflow under the same broad label, and their product descriptions do not always explain clearly how the system actually works.
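To make the contrast concrete, here is a minimal sketch of the thinnest kind of “AI-powered” search named above: an LLM used only to translate a natural-language question into a Boolean string. It assumes the OpenAI Python client; the model name and prompt are illustrative, not any vendor's actual implementation.

```python
# Minimal sketch: an LLM used *only* to translate a natural-language
# question into a Boolean search string. Assumes the OpenAI Python
# client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def to_boolean_query(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's question as a Boolean search "
                        "string using AND/OR/NOT and quoted phrases. "
                        "Return only the query."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(to_boolean_query(
    "How does sleep deprivation affect memory in older adults?"))
# e.g. "sleep deprivation" AND "memory" AND ("older adults" OR "elderly")
```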

To help cut through the noise, we built a simple, interactive app for researchers. This tool helps navigate the GenAI landscape by comparing features, checking access levels (e.g. free or HKUST-subscribed), and exploring platforms from different angles. The goal is to help researchers “look under the hood” of these tools to truly understand how they work and what they can do for research.


Quick Walkthrough of the App

The app currently catalogs over 40 tools and platforms. It compares their key features, limitations, data coverage (the data the tool is trained on or searches across), and the specific roles AI plays within them. It also highlights similar tools and provides quick references to privacy policies and access links. Each tool is represented as a clickable icon — simply click on one to view its details, or use the left panel to search and filter based on your specific research needs. 

  • Timeline View illustrates the evolution of GenAI tools, starting from the traditional and open databases that continue to play important roles in later AI-enabled workflows. Tools are grouped by their core nature (e.g. General LLMs vs. RAG-based scholarly databases vs. Deep Research tools) and mapped against the four-quadrant framework of AI search tools (developed by Aaron Tay from SMU Library). This helps us understand how research tools have developed over time, how different categories relate to one another, and where newer tools fit within the broader research ecosystem.
  • Table View presents all the tools in a tabular format, with expandable rows for more detailed information. You can search for a specific tool or select multiple tools for a side-by-side comparison. This view makes it easy to evaluate and contrast features, limitations, and access models at a glance.
  • AI Roles View breaks down the specific functions AI performs before, during, and after the “search” process, with example tools mapped to each role. This view helps us look at platforms from a functional angle, answering questions like: “Which tools are actually performing semantic/embedding searches?” “Which ones are using Retrieval-Augmented Generation (RAG) to synthesize reports?” (A minimal embedding-search sketch follows this list.)
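
For readers curious what a semantic/embedding search looks like under the hood, here is a minimal sketch assuming the open-source sentence-transformers library; the model choice and toy corpus are illustrative.

```python
# Minimal semantic-search sketch, assuming the sentence-transformers
# library. Documents and query are embedded into vectors, then ranked
# by cosine similarity, so related papers score highly even when they
# share no keywords with the query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

abstracts = [
    "Sleep deprivation impairs working memory in older adults.",
    "A survey of transformer architectures for machine translation.",
    "Dietary interventions and cognitive decline: a meta-analysis.",
]
query = "effects of poor sleep on cognition in the elderly"

doc_emb = model.encode(abstracts, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]

# Print abstracts from most to least semantically similar.
for score, abstract in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```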

 

Full view on GitHub

Note: The app was vibe-coded using Cursor, with initial support from the latest GPT, Claude, and Gemini models. All content was curated and validated by humans. A key source of inspiration and reference for this project is the series of blog articles by Aaron (thanks!!). As the landscape moves fast, errors, misinterpretations, or outdated information may exist, so your feedback is highly welcome.

 

Three Paradigm Shifts

Looking at the Timeline View, we can identify three major paradigm shifts in how these tools have evolved and specialized over time:

 

Shift 1 — Scholarly Databases: From “Quick Search” to “Quick RAG”

For decades, traditional scholarly databases like Web of Science and Scopus operated on a “Quick Search” model: you typed in keywords or Boolean operators, and the system returned a reproducible, ranked list of results.

When AI-native startups demonstrated that a research query could return a synthesized, cited answer using Retrieval-Augmented Generation (RAG) rather than just a list of links, established database vendors had to adapt. One by one, they began layering natural language semantic search and RAG features on top of their existing infrastructure. In this model, the system takes the user's query, retrieves relevant data from its own trusted corpus, sends both to a language model, and generates an answer strictly grounded in that licensed content.
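
As a rough illustration of that retrieve-then-generate loop, here is a minimal “Quick RAG” sketch. It assumes the OpenAI Python client, and the toy word-overlap retriever stands in for a real search index over a licensed corpus.

```python
# Minimal "Quick RAG" sketch: retrieve from a fixed, trusted corpus,
# then ask an LLM to answer strictly from the retrieved passages.
# Assumes the OpenAI Python client; the two-document corpus and the
# word-overlap retriever are toy stand-ins for a real index.
from openai import OpenAI

client = OpenAI()

corpus = {
    "doi:10.1000/a": "Sleep deprivation impairs working memory in older adults.",
    "doi:10.1000/b": "Caffeine partially offsets attention deficits after sleep loss.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    # Toy relevance score: number of words each passage shares with the query.
    words = set(query.lower().split())
    return sorted(corpus.items(),
                  key=lambda kv: -len(words & set(kv[1].lower().split())))[:k]

def quick_rag(query: str) -> str:
    context = "\n".join(f"[{doi}] {text}" for doi, text in retrieve(query))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the passages below, citing their "
                        "identifiers. If they are insufficient, say so.\n"
                        + context},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(quick_rag("Does sleep loss affect memory in older adults?"))
```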

Today, tools like the Web of Science Research Assistant and Scopus AI (now part of Elsevier’s LeapSpace) operate primarily at the abstract and metadata level, while ScienceDirect AI goes further by grounding responses in full-text content. This shift isn't limited to general literature discovery; subject-specific databases, such as O’Reilly (for tech books and videos), Factiva (for news and business information), Statista (for statistics and market reports), Patsnap (for patents), and SciFinder (for chemistry information), are all applying RAG to answer complex, domain-specific questions.

 

Shift 2 — AI-Native Research Assistants: From “Quick RAG” to “Deep Research”

AI-native research tools like Elicit, SciSpace, Consensus, and Scite were born as “Quick RAG” systems. Built mostly on top of open scholarly indexes like Semantic Scholar and OpenAlex, they allowed users to ask a question and receive a short, synthesized answer with inline citations. While the citations were real, the answers were initially brief and based only on the top 10-20 retrieved papers.

Over time, these tools evolved in two major directions. First, they went deeper into individual papers, moving from short summaries to extracting specific fields such as study designs and sample sizes into structured tables (a feature pioneered by Elicit and quickly adopted by others).
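
A simplified sketch of that extraction pattern: prompting an LLM to return a fixed set of fields as JSON, one row per paper. The field list is a hypothetical example, not Elicit's actual schema, and it assumes the OpenAI Python client with its JSON output mode.

```python
# Sketch of structured extraction: pull fixed fields out of an abstract
# as JSON, one row per paper. The field list is a hypothetical example,
# not any tool's actual schema. Assumes the OpenAI Python client.
import json
from openai import OpenAI

client = OpenAI()

FIELDS = ["study_design", "sample_size", "population"]

def extract_row(abstract: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # force valid JSON output
        messages=[
            {"role": "system",
             "content": f"Extract the fields {FIELDS} from the abstract as "
                        "a JSON object. Use null for anything not stated."},
            {"role": "user", "content": abstract},
        ],
    )
    return json.loads(response.choices[0].message.content)

print(extract_row(
    "We conducted a randomized controlled trial with 120 older adults "
    "to test the effect of sleep restriction on working memory."))
# e.g. {"study_design": "randomized controlled trial", "sample_size": 120,
#       "population": "older adults"}
```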

Then came the shift to Deep Research. Instead of running a single search, tools like Undermind now run multiple iterative searches, blend keyword and semantic retrieval with citation chasing, and produce comprehensive reports. Tools like Elicit (via its Systematic Review mode) and Consensus (via Deep Search) — both Pro-tier features — can now search through 1,000 papers and analyze the top 50 results using full-text integration. Both follow PRISMA-style pipelines and are designed to assist in systematic review and evidence synthesis tasks.

However, as Aaron points out, it is crucial to note that most academic Deep Research tools are actually “workflow-bound”. They execute highly sophisticated, pre-designed pipelines rather than reasoning from scratch. This is a design choice: fixed workflows are dramatically faster, more reliable, and easier to audit. But it also means they are not truly autonomous or “agentic”; they excel at the specific tasks they were built for, but they often struggle or fail when asked to handle tasks outside their designed workflows.
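
To see what “workflow-bound” means in practice, here is a structural sketch in which the stages and their order are fixed at design time and the model is only invoked inside individual steps. Every helper below is a hypothetical stub, not any vendor's actual pipeline.

```python
# Structural sketch of a workflow-bound Deep Research pipeline. The
# helpers are hypothetical stubs; in a real tool each would wrap a
# search index or an LLM call. The *sequence* of stages is hard-coded,
# which makes the system fast and auditable but not truly agentic.

def generate_queries(question):  return [question, question + " review"]
def keyword_search(q):           return [f"kw-paper for {q!r}"]
def semantic_search(q):          return [f"sem-paper for {q!r}"]
def chase_citations(papers):     return [p + " (cited-by)" for p in papers[:2]]
def screen(question, papers):    return papers  # an LLM would score relevance
def write_report(question, papers):
    return f"Report on {question!r}, synthesized from {len(papers)} papers"

def deep_research(question: str) -> str:
    queries = generate_queries(question)          # step 1: expand the question
    papers = []
    for q in queries:                             # step 2: blended retrieval
        papers += keyword_search(q) + semantic_search(q)
    papers += chase_citations(papers)             # step 3: one citation hop
    screened = screen(question, papers)           # step 4: relevance screening
    return write_report(question, screened[:50])  # step 5: synthesize top 50

print(deep_research("sleep deprivation and memory"))
```

A truly agentic system would instead let the model decide which of these steps to run, in what order, and when to stop.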

 

Shift 3 — General LLMs: From “No Search” to “Deep Research” and “MCP Integration”

General-purpose LLMs (e.g. ChatGPT, Claude, and Gemini) and AI aggregators (e.g. Poe, which hosts state-of-the-art models from various companies) did not start as research tools. In their earliest forms, they had no internet access; every answer was generated from static training data with a strict knowledge cut-off, making them prone to hallucination and useless for rigorous academic citation.

Perplexity was the first to build search grounding into its core product, ensuring answers were retrieved from and attributed to web sources. Major LLMs soon followed, first offering web search as an optional toggle; many now make it a default feature.

OpenAI released Deep Research in early 2025 (initially priced at USD 200 per month), shortly after the world was shocked by the emergence of DeepSeek. Perplexity, Gemini, and Grok soon followed with their own “Deep Research” modes, adding iterative web search and long-form report generation with inline citations. This brought general-purpose LLMs into the research tools landscape, especially for projects that extend beyond academic sources such as journals and books. Interestingly, while they are not exclusively trained on academic knowledge, general LLMs are arguably more agentic than specialized AI research tools: because they are trained to plan, choose tools dynamically, and adapt on the fly, they can reason through novel research tasks from scratch.

The current frontier — and the one libraries may find most interesting — is MCP (Model Context Protocol) integration. MCP provides a bridge, allowing these highly flexible, reasoning LLMs to connect directly to paywalled academic databases and specialized tools. For example, Claude and ChatGPT can now connect to PubMed, Scite, Consensus, and Wiley via MCPs. Researchers can create specific “skills” (e.g. the literature review and grant identification skills provided by Consensus) and let the LLM decide when to trigger which database. We are also seeing more university libraries (such as those at Yale, Northeastern University, and UT Austin) using MCPs to connect general LLMs directly to their own library discovery systems, blending the flexible reasoning of general AI with the trusted, subscribed content of the academic library.
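
For a sense of how such a bridge is built, here is a minimal sketch of an MCP server exposing a single catalog-search tool, assuming the official MCP Python SDK (FastMCP); the catalog lookup is a hypothetical stand-in for a real discovery-system API.

```python
# Minimal MCP server sketch, assuming the official `mcp` Python SDK
# (FastMCP). The catalog lookup is a hypothetical placeholder; a real
# server would call the library's discovery API and return records.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("library-discovery")

@mcp.tool()
def search_catalog(query: str, limit: int = 5) -> list[dict]:
    """Search the library discovery system and return matching records."""
    return [{"title": f"Result {i} for {query!r}", "access": "subscribed"}
            for i in range(1, limit + 1)]

if __name__ == "__main__":
    mcp.run()  # an MCP-capable LLM client can now discover this tool
```

Once registered with an MCP-capable client, the LLM itself decides when a question warrants calling search_catalog — exactly the blend of flexible reasoning and trusted content described above.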

 

Looking Ahead

As AI tools continue to evolve rapidly, the lines between searching, synthesizing, and reasoning are becoming increasingly blurred. Whether you prefer the structured, workflow-bound reliability of a specialized research assistant or the flexible, agentic reasoning of a general LLM connected via MCP, there has never been a better time to experiment and find what fits your needs. We hope this interactive app helps you navigate these choices, look under the hood of emerging platforms, and ultimately enhance your research workflow.

 

Disclaimer: This post was edited with the help of Gemini Pro 3.1, GPT-5.4, and Notion AI. 

Edited By: Aster Zhao, Library, lbaster@ust.hk
Published: 10 Apr 2026
Supporting: SDG 4 (Quality Education)