bib-verify

Verify a BibTeX file for hallucinated or fabricated references by cross-checking every entry against CrossRef, arXiv, and DBLP. Reports each reference as verified, suspect, or not found, with field-level mismatch details (title, authors, year, DOI). Use when the user wants to check a .bib file for fake citations, validate references in a paper, or audit bibliography entries for accuracy.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Installation

Install the skill with:

npx skills add agentscope-ai/openjudge/agentscope-ai-openjudge-bib-verify

BibTeX Verification Skill

Check every entry in a .bib file against real academic databases using the OpenJudge PaperReviewPipeline in BibTeX-only mode:

  1. Parse — extract all entries from the .bib file
  2. Lookup — query CrossRef, arXiv, and DBLP for each reference
  3. Match — compare title, authors, year, and DOI
  4. Report — flag each entry as verified, suspect, or not_found
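The Parse step above can be sketched as a minimal, illustrative BibTeX parser. This is an assumption-laden simplification (one field per line, no nested braces in values), not the real PaperReviewPipeline parser:

```python
import re

def parse_bib(text):
    """Extract entry type, citation key, and fields from a BibTeX string.
    Minimal illustration: assumes `field = {value}` pairs with no nested
    braces inside values."""
    entries = []
    for m in re.finditer(r"@(\w+)\s*\{\s*([^,]+),((?:[^@])*)", text):
        fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", m.group(3)))
        entries.append({"type": m.group(1), "key": m.group(2).strip(), **fields})
    return entries

sample = """@article{vaswani2017,
  title = {Attention Is All You Need},
  author = {Vaswani, Ashish and others},
  year = {2017}
}"""
for e in parse_bib(sample):
    print(e["key"], "->", e["title"])  # prints: vaswani2017 -> Attention Is All You Need
```

A production parser should handle nested braces, `"`-quoted values, and `@string` macros; libraries such as `bibtexparser` cover these cases.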

Prerequisites

pip install py-openjudge litellm

Gather from user before running

| Info | Required? | Notes |
|---|---|---|
| BibTeX file path | Yes | .bib file to verify |
| CrossRef email | No | Improves CrossRef API rate limits |
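The CrossRef email is sent as the `mailto` query parameter, which places requests in CrossRef's "polite pool" with more generous rate limits. A sketch of the lookup URL such a query might use (the pipeline's actual internals may differ):

```python
from urllib.parse import urlencode

CROSSREF_API = "https://api.crossref.org/works"

def crossref_query_url(title, email=None, rows=1):
    """Build a CrossRef bibliographic-query URL.
    Passing an email via `mailto` opts into the polite pool."""
    params = {"query.bibliographic": title, "rows": rows}
    if email:
        params["mailto"] = email
    return f"{CROSSREF_API}?{urlencode(params)}"

print(crossref_query_url("Attention Is All You Need", email="you@example.com"))
```

The first result of such a query is the best bibliographic match, which the Match step then compares field by field against the .bib entry.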

Quick start

# Verify a standalone .bib file
python -m cookbooks.paper_review --bib_only references.bib

# With CrossRef email for better rate limits
python -m cookbooks.paper_review --bib_only references.bib --email your@email.com

# Save report to a custom path
python -m cookbooks.paper_review --bib_only references.bib \
  --email your@email.com --output bib_report.md

Relevant options

| Flag | Default | Description |
|---|---|---|
| --bib_only | (none) | Path to .bib file (required for standalone verification) |
| --email | (none) | CrossRef mailto; improves rate limits (recommended) |
| --output | auto | Output .md report path |
| --language | en | Report language: en or zh |

Interpreting results

Each reference entry is assigned one of three statuses:

| Status | Meaning |
|---|---|
| verified | Found in CrossRef / arXiv / DBLP with matching fields |
| suspect | Title or authors do not match any real paper; likely hallucinated or mis-cited |
| not_found | No match in any database; treat as fabricated |

Field-level details are shown for suspect entries:

  • title_match — whether the title matches a real paper
  • author_match — whether the author list matches
  • year_match — whether the publication year is correct
  • doi_match — whether the DOI resolves to the right paper

Additional resources

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • auto-arena (Automation)
  • paper-review (Research)
  • find-skills-combo (Automation)
  • ref-hallucination-arena (Automation)

No summaries were provided by the upstream source for these skills, and each is flagged "Needs Review" in the repository index.