HDLCoRe: A Training-Free Framework for Mitigating Hallucinations in LLM-Generated HDL
Published in IEEE LAD, 2025
HDLCoRe addresses hallucinations in hardware description language (HDL) code generated by large language models without requiring any model fine-tuning. It combines HDL-aware chain-of-thought (CoT) prompting with self-verification and a two-stage heterogeneous retrieval-augmented generation (RAG) system for formatting consistency and example retrieval. Evaluated on the RTLLM2.0 benchmark, HDLCoRe substantially reduces hallucinations and improves both syntactic and functional correctness.
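To make the training-free generate-and-verify idea concrete, the sketch below shows one way such a pipeline could be wired up. It is an illustrative approximation, not the paper's implementation: the names `build_cot_prompt`, `syntax_check`, `generate_with_verification`, and the `llm_generate` callable are hypothetical, the retrieval step is abstracted to a list of example snippets, and self-verification is reduced to an Icarus Verilog syntax check with error feedback.

```python
"""Minimal sketch of a training-free generate-and-self-verify loop in the
spirit of HDLCoRe. Function names and the feedback format are placeholders,
not the paper's actual API."""

import subprocess
import tempfile
from pathlib import Path
from typing import Callable, Sequence


def build_cot_prompt(spec: str, examples: Sequence[str]) -> str:
    """Compose an HDL-aware CoT prompt: retrieved examples, then the spec,
    then explicit step-by-step reasoning instructions."""
    shots = "\n\n".join(examples)
    return (
        f"Reference Verilog examples:\n{shots}\n\n"
        f"Specification:\n{spec}\n\n"
        "Think step by step: list the module ports, describe the intended "
        "behavior, then write the complete Verilog module."
    )


def syntax_check(verilog_src: str) -> tuple[bool, str]:
    """Self-verification step: compile the candidate with Icarus Verilog
    (assumes `iverilog` is installed) and return (ok, compiler messages)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "candidate.v"
        src.write_text(verilog_src)
        proc = subprocess.run(
            ["iverilog", "-o", str(Path(tmp) / "a.out"), str(src)],
            capture_output=True, text=True,
        )
    return proc.returncode == 0, proc.stderr


def generate_with_verification(
    spec: str,
    examples: Sequence[str],
    llm_generate: Callable[[str], str],  # any text-in/text-out LLM wrapper
    max_rounds: int = 3,
) -> str:
    """Generate HDL, check it, and feed compiler errors back to the model;
    no fine-tuning is involved at any point."""
    prompt = build_cot_prompt(spec, examples)
    candidate = llm_generate(prompt)
    for _ in range(max_rounds):
        ok, errors = syntax_check(candidate)
        if ok:
            break
        prompt = (
            f"{prompt}\n\nPrevious attempt:\n{candidate}\n\n"
            f"Compiler errors:\n{errors}\n"
            "Fix the errors and return the full corrected module."
        )
        candidate = llm_generate(prompt)
    return candidate
```

In this reading, the RAG stage supplies the `examples` list and the self-verification loop plays the role of hallucination mitigation: syntactically or functionally broken output is caught by a checker and returned to the model as feedback rather than accepted as-is.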