LLM-first search cites sources but hallucinates anyway

I recently stopped using Google search because of what I affectionately (and exasperatedly) call GenAI slop.

Not being able to disable the LLM-generated responses at the top of search results finally became a deal breaker for me.

Relatedly, I won’t be using ChatGPT search, which OpenAI recently launched.

Why?

Simply put, LLM-first search is very good at hallucinating while citing sources that contradict those hallucinations.

To be clear, this is the system working as designed, and it gets in the way of accurate information far too often.

Never assume that LLMs are pulling content directly from the sources they link (they often aren’t).

As I see it, LLM-first search is currently a broken use case, a step backwards, a potential spreader of misinformation.

Treat these kinds of tools with a large dose of healthy skepticism.

This article from the MIT Technology Review is also worth a read: OpenAI brings a new web search tool to ChatGPT

Oh, and I switched from Google to DuckDuckGo in case you were interested 🙂