Hallucinations, or the generation of inaccurate or fabricated information, are a significant concern surrounding the use of AI. In this post, we introduce five AI research tools that can generate answers based on real sources. These tools are designed to ensure that students and researchers can rely on accurate and reliable information when citing sources.
Limitations of GPT-3.5/GPT-4 in Referencing Evidence
GPT-3.5 and GPT-4 cannot reliably reference evidence. Although they provide seemingly authoritative and confident responses, the accuracy of their claims and citations is highly questionable.
A recent study examined the performance of GPT-3.5 and GPT-4 by asking them 50 questions from diverse domains and requesting the top five sources or citations for each answer. The study revealed that 72.5% of the citations provided by GPT-3.5 were fictional; for GPT-4, the figure was 71.2%, an alarming result.
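One practical takeaway from findings like these is that AI-generated citations should be verified before use. Because the tools introduced below return real DOIs, a basic sanity check can even be done programmatically: the public Crossref REST API returns a record for a registered DOI and a 404 for one that does not exist. The minimal Python sketch below illustrates the idea; it assumes the `requests` library is installed, and the sample DOIs are placeholders for illustration only.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works/"

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is registered in Crossref, False otherwise."""
    resp = requests.get(CROSSREF_WORKS + doi, timeout=10)
    return resp.status_code == 200

# Placeholder DOIs for illustration; substitute the citations you want to check.
for doi in ["10.1038/171737a0", "10.9999/this.doi.does.not.exist"]:
    print(doi, "->", "found" if doi_exists(doi) else "not found in Crossref")
```

A check like this only confirms that a DOI exists; it does not confirm that the cited paper actually supports the claim being made, which is where source-grounded tools such as the ones below add value.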
Scite Assistant
- Scite.ai offers a conversational AI Assistant that provides answers referencing real papers with real DOIs.
- The referenced sentences are highlighted in each paper's full text, so users can see where and why they are relevant.
- Users have control over the Assistant's settings, such as specifying reference sources (e.g., certain journals or Scite dashboards), the reference year range, and the number of references to include in an answer.
SciSpace
- Similar to Scite Assistant, SciSpace allows users to refine results by publication type or year, or to focus on a specific PDF.
- The SciSpace Copilot Chrome extension is a convenient tool for answering questions and explaining concepts in a paper while browsing the web.
Elicit
- The beta version of Elicit.org supports more research workflows and offers more robust results.
- Users can find papers related to their research question, extract information from PDFs, and discover concepts across papers.
- Elicit provides structured tables summarizing relevant papers and allows users to customize and add additional details to the table.
Elicit answers questions based on the top relevant papers and presents them in a structured table. Users can add more columns to the table and mark key papers, after which the generated summary can be recomputed accordingly.
Petal
- Petal is a document analysis platform that goes beyond analyzing a single PDF.
- It can answer questions and facilitate multi-PDF chat, allowing users to generate structured AI tables summarizing multiple PDFs.
- Petal offers flexibility in customizing the columns of the AI table and allows users to edit cell information if the AI generates inaccurate results.
Petal offers more flexibility in generating AI tables for documents added to the library; users can edit columns and cell information to correct any inaccuracies.
Consensus
- Consensus positions itself as an academic search engine rather than a chatbot.
- Users can ask research questions, and Consensus synthesizes results from top relevant research papers.
This post focuses on highlighting the tools’ capabilities and is not an exhaustive technical review. It is important to refer to the tools’ respective documentation for detailed information about the underlying language models used, data sources, privacy policies, and other specifications.
Finally, let’s conclude this post with a table of key features of these tools:
| | Scite Assistant | SciSpace | Elicit Beta | Petal | Consensus |
|---|---|---|---|---|---|
| Can accept full-text PDF upload? | | | | | |
| Can summarize papers in a structured table (lit review matrix)? | | | | | |
| Can refine/customize reference sources? | | | | | |
| Can ask the chatbot specific questions on multiple papers? | | | | | |
| FAQ / About / Support | FAQ | FAQ | FAQ | Documentation FAQ | How it Works |
– By Jennifer Gu, Library
Published August 31, 2023