Hallucinations in Watermarked LLMs

Investigating whether watermarked LLMs are more likely to "hallucinate" than their unwatermarked counterparts. My final project for MIT's Deep Learning course.
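
For background, a widely used family of LLM watermarks biases sampling toward a pseudorandom "greenlist" of tokens seeded by the preceding token, and detects the watermark by counting how many tokens land in their greenlist. The sketch below is an illustrative toy, not this project's code: the function names are hypothetical, and real implementations key the greenlist on a secret key and the model's actual tokenizer ids.

```python
import math
import random

def greenlist(prev_token: int, vocab_size: int,
              gamma: float = 0.5, key: int = 42) -> set:
    """Select a pseudorandom 'green' fraction gamma of the vocabulary,
    seeded by the previous token and a (toy) secret key."""
    rng = random.Random(hash((key, prev_token)))
    return set(rng.sample(range(vocab_size), int(gamma * vocab_size)))

def count_green(tokens: list, vocab_size: int,
                gamma: float = 0.5, key: int = 42) -> int:
    """Count tokens that fall in the greenlist seeded by their predecessor."""
    return sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in greenlist(prev, vocab_size, gamma, key))

def detection_z(tokens: list, vocab_size: int,
                gamma: float = 0.5, key: int = 42) -> float:
    """Z-score of the greenlist hit count against the null hypothesis
    that tokens are unwatermarked (each green with probability gamma)."""
    t = len(tokens) - 1  # number of (prev, current) pairs scored
    g = count_green(tokens, vocab_size, gamma, key)
    return (g - gamma * t) / math.sqrt(t * gamma * (1 - gamma))
```

The hypothesis this project tests follows from the same mechanism: biasing generation toward the greenlist can push the model off its highest-probability continuation, which is one plausible route to more hallucinations.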