Investigating whether watermarked LLMs are more likely to “hallucinate.” My MIT Deep Learning final project.
Blog post
GitHub