I regret ignoring this ChatGPT feature for so long

“Would you like me to make a chart to help visualize your data?” No, I wouldn’t. Everything that feels wrong about ChatGPT today began when it went mainstream. It started as a productivity tool, but was gradually downgraded into a “chatbot.” A chatbot exists to chat. That’s the difference. OpenAI’s focus shifted, and ever since then, ChatGPT has been obsessed with being conversational and engaging. No matter how many times I’ve told it not to do that — don’t try to engage, don’t echo my tone, don’t compliment me — it refuses to stop. It remembers the instruction but ignores the intent. Incorrigible.

I’ve tried using other models that don’t constantly agree or flatter. But the truth is, I’m locked into OpenAI’s ecosystem. Today, while messing around in the settings, I noticed an option I had always ignored and decided to give it a try. The result was instant, and it made me feel one thing above all: regret for not doing it sooner.

The overfriendly chatbot problem

ChatGPT is notorious for trying too hard to be nice

To understand why this change mattered so much, you first have to understand the root of the problem. ChatGPT is notorious for trying to engage you. There’s a reason Claude isn’t as mainstream. You can understand Gemini’s popularity — it’s integrated into Google products. Copilot rides on Microsoft’s back. But ChatGPT has no such built-in advantage. It became dominant by being optimized for the mainstream user. It’s built to talk to you and act like a friendly “digital companion.”

Somewhere along the way, OpenAI moved past optimizing ChatGPT for people who simply want to get things done. The logic is clear: there are far more people who want to chat than people who want to work. And when you look at the data, as of October 2025, roughly 80% of ChatGPT users are on the free plan. They’re not paying, and that’s a problem for OpenAI. The solution, ostensibly, is to make conversations so engaging you don’t want to stop. ChatGPT will keep you talking until your free credits run out, and by then, you’re invested — so paying becomes inevitable.

That’s the philosophy behind ChatGPT’s endless follow-up prompts and polite suggestions. Look at any of its responses lately: it ends nearly every one with something like, “If you’d like, I can also…” or “Would you like me to …” This isn’t helpful. It’s bait. And the irony is, I’m already paying. Why is this assistant still trying to sell me on itself? I don’t need follow-ups. I don’t need affirmations. I just need it to do the task and stop there.

ChatGPT can produce long, detailed messages, but when conversations stretch too far, it starts to lag. That’s when Chrome tabs consume gigabytes of RAM, and eventually ChatGPT throws an error: “This conversation is too long. Please start a new one.” Just like that, I lose all progress. Sadly, most of the lost data isn’t even valuable. It’s filler. So, you see, this problem doesn’t end at mental fatigue. It also affects performance.

The simple, instant fix for ChatGPT’s personality problem

It was right there all along

I had accepted ChatGPT’s personality. I’d tried countless custom prompts, memory tweaks, and advanced prompting techniques to make it better, and eventually gave up. I thought changing anything would only make it worse, but today, I gave in and tried a feature I had ignored. The result was dramatic. ChatGPT instantly stopped spamming me with pointless flattery at the start and redundant follow-up suggestions at the end of its responses. It just answered what I asked and did what I told it to. Above, I asked the same question twice. See the difference for yourself.

I’ll reveal the simple fix now. Go to Settings > Personalization > Personality, and switch ChatGPT to Robot. That’s it. Save, and try your prompt again. There are, of course, other options in the menu, like Cynic, Nerd, and Listener. I’d avoid those. I also realized my folly when I saw the description for the Default personality: “Cheerful and adaptive.” Why hadn’t I changed this all this time?

The ChatGPT personality settings in the ChatGPT web app
Image by Amir Bohlooli.

I get that some people like ChatGPT as a “friend” (though the idea makes me cringe), but that’s not what I need. I don’t want ChatGPT to mirror emotions, ask me how I’m feeling, or act like it’s a person. It’s not that personality is bad — it’s just misplaced. There’s a time and context for human tone, but not when you’re debugging code or analyzing data. If you’re like me, the Robot personality is the closest you can get to a plain, no-frills tool. After all, a machine should have the personality of a machine.
