Perplexity is giving you wrong answers on purpose

Perplexity is often touted as the go-to ChatGPT alternative. It is immensely popular, and the launch of Perplexity Comet, its browser, brought agentic AI browsing to many people for the first time. It was like a real look into the future of the internet.

But at times, Perplexity doesn’t feel right. It’s hard to put your finger on it, but for a while, answers would be oddly slow or feel dumbed down, as if Perplexity wasn’t using the AI model selected for the prompt. That matters, because model selection is one of Perplexity’s best features: the ability to switch between models whenever you like.

However, what if Perplexity is automatically downgrading your chats, prompts, and answers without letting you know? Well, you’ll be surprised to learn that’s exactly what happens—and up until it was called out on the Perplexity subreddit, it was hard to even figure out what was going on.

Perplexity downgraded your questions and prompts

Its notification system wasn’t up to scratch

[Image: Perplexity model downgrading chart, November 2025. Credit: Reddit]

Every time you perform a search or enter a prompt on Perplexity, you can choose the specific AI model that processes it. At the time of writing, the options include LLMs like GPT-5.1, Claude Sonnet 4.5, Gemini 2.5 Pro, and so on.

These options change over time as the various AI companies release new models and updates, and there is always Perplexity’s in-house model, Sonar. Either way, the assurance is that the model you select is the model that actually processes your prompt.
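The picker in the consumer app is a UI feature, but the same contract is easier to see in Perplexity’s developer API, which is OpenAI-compatible and takes the model name in the request body. Here’s a minimal sketch in Python, assuming the public api.perplexity.ai endpoint, the “sonar” model name, and a hypothetical PERPLEXITY_API_KEY environment variable; the useful detail is that the response reports back which model handled the request.

```python
# Minimal sketch: explicit model selection against Perplexity's
# OpenAI-compatible developer API. The endpoint and "sonar" model name
# are believed current at the time of writing; the env var is hypothetical.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"

def ask(prompt: str, model: str = "sonar") -> dict:
    """Send a prompt, requesting a specific model by name."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": model,  # the model you request...
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()
    # ...and the model the service reports having used.
    print("requested:", model, "| reported:", data.get("model"))
    return data

if __name__ == "__main__":
    ask("What changed in Perplexity's model lineup this month?")
```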

In November 2025, however, a series of posts on the official Perplexity subreddit detailed how all is not quite as it seems with Perplexity’s model selection.

One post from user deadpan_look detailed how they tracked their Perplexity message requests over the course of October and November and noticed a dramatic shift: once November hit, Perplexity began downgrading chats to other models, including one that Perplexity doesn’t even list as selectable on its front end.

Note that the post above comes from this Reddit thread, which was shared by another user.

So, instead of using Claude Sonnet 4.5, Perplexity would switch to a smaller, less powerful model, Claude Haiku 4.5. Given that most people using these models are paying customers, being forcibly downgraded is a problem.

The Reddit post had the intended effect, with heaps of other Perplexity users suddenly realizing that they’d been having the exact same problem without fully understanding what was going on.

Why Perplexity downgrades your chats

It’s actually a normal process—but this was something different

[Image: Perplexity Discord post on model downgrading, November 2025]

The reaction to this realization from paying Perplexity subscribers was understandably negative, with many folks saying they’d immediately cancel their subscriptions, while others suggested reporting the downgraded service to the FTC. It’s a similar story on the official Perplexity Discord.

Here’s the thing: Perplexity does occasionally downgrade your chats. It’s a necessary mechanism for managing demand during peak periods. But the volume of downgrading seen in November went far beyond that baseline.

To Perplexity’s credit, the company acknowledged the problem. CEO Aravind Srinivas posted on the Perplexity subreddit explaining exactly what happened and why users were suddenly noticing a sharp drop in answer quality without any notification.

The long version: Sometimes Perplexity will fall back to alternate models during periods of peak demand for a specific model, or when there’s an error with the model you chose, or after periods of prolonged heavy usage (fraud prevention reasons). What happened in this case is the chip icon at the bottom of the answer incorrectly reported which model was actually used in some of these fallback scenarios.

The post goes on to attribute the problem to “an engineering bug,” which has since been resolved, thanks in part to users highlighting it on Reddit and Discord.
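To make the failure mode concrete, here is a minimal sketch of a generic model-fallback pattern in Python. All names are hypothetical, and this illustrates the bug class Srinivas describes rather than Perplexity’s actual code: the answer object has to carry the model that actually ran, and surfacing the requested model instead is precisely the misreporting users saw.

```python
# Hypothetical illustration of the fallback-misreporting bug class;
# not Perplexity's code. Model names and the fallback mapping are assumptions.
from dataclasses import dataclass

FALLBACKS = {"claude-sonnet-4.5": "claude-haiku-4.5"}

@dataclass
class Answer:
    text: str
    requested_model: str
    used_model: str  # what the model "chip" in the UI should display

def run_model(model: str, prompt: str) -> str:
    """Stand-in for a real inference call; simulates peak-demand overload."""
    if model == "claude-sonnet-4.5":
        raise RuntimeError("model at capacity")
    return f"answer from {model}"

def answer_with_fallback(prompt: str, requested: str) -> Answer:
    try:
        return Answer(run_model(requested, prompt), requested, requested)
    except RuntimeError:
        fallback = FALLBACKS.get(requested, "sonar")
        text = run_model(fallback, prompt)
        # The bug class: setting used_model=requested here would make the
        # UI claim a model that never ran. Recording the fallback fixes it.
        return Answer(text, requested_model=requested, used_model=fallback)

result = answer_with_fallback("hello", "claude-sonnet-4.5")
print(result.requested_model, "->", result.used_model)
```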

The model fallback bug is fixed

But some Perplexity users still aren’t sure

Questions remain, though. The bug only explains the mislabeled chip icon; the responses themselves were genuinely worse, and Perplexity was consistently falling back to the downgraded models. That it all happened without any warning or indication is what makes it so frustrating.

The episode has created a level of mistrust among Perplexity’s subscribers, and the company will have to spend time rebuilding that trust. For the most part, it seems that Perplexity’s users just want transparency. With that in mind, the model downgrade issues pushed many subscribers to the Perplexity Model Watcher tool, a free, open-source app that monitors Perplexity’s model changes in real time.

[Image: Perplexity Model Watcher app spotting a changed model, November 2025. Credit: Reddit]

Peak periods place a huge strain on Perplexity, and most people understand that this can cause operational issues. What people are most angry about is that the downgraded service kicked in without warning or notification: one moment it works, the next, it’s terrible. The Model Watcher helps keep track of exactly that.
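For illustration, here’s a minimal sketch of the idea behind such a watcher, not the actual Model Watcher implementation: poll on an interval, compare the model you requested against the model the service reports, and log any mismatch. The probe() function here is a hypothetical stand-in for whatever check the real tool performs.

```python
# Sketch of a requested-vs-reported model watcher. probe() is a
# hypothetical placeholder; the real open-source tool works differently.
import time
from datetime import datetime, timezone

def probe(requested: str) -> str:
    """Hypothetical: send a canary prompt and return the model the
    service reports having used (e.g. parsed from a response)."""
    return requested  # replace with a real check

def watch(requested: str = "claude-sonnet-4.5", interval_s: int = 300) -> None:
    """Poll until interrupted, flagging any silent model switch."""
    while True:
        reported = probe(requested)
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        if reported != requested:
            print(f"{stamp} MISMATCH: requested {requested}, got {reported}")
        else:
            print(f"{stamp} ok: {reported}")
        time.sleep(interval_s)

if __name__ == "__main__":
    watch()
```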

Taken together, though, it’s difficult not to be a little cynical. Silently switching users to a cheaper-to-run model without telling them? Right after adding millions of free users through Perplexity’s various free-access-to-Pro promotions? There are more than a few posts alleging exactly that on Reddit and Discord, that’s for sure.

Perplexity still offers almost unrivaled value

Perplexity’s AI platform is still one of the best. Where most AI subscriptions cost $20 a month for a single provider’s models, Perplexity packages many of the best models into a single platform and gives you access to all of them.

And if you take these problems at face value and accept that it was a now-resolved engineering bug, you can get back to using Perplexity as you were.

However, for others, significant questions remain around Perplexity’s conduct and transparency, and that’s going to take much longer to resolve, if ever.
