These support bots drive me crazy.

Stanislav Kapustin · Apr 17, 2026 · customer support · automation · ai agents · systems thinking · chatbots

But actually — not really.

I know one simple trick I use like a skeleton key.

When companies automate support, your first contact is almost always a bot. It tries to answer your question, runs you through scripted flows, and only after a certain signal does it hand you over to a real person.

In well-designed systems, one of those signals is emotion. There’s built-in emotion detection: if the user starts showing frustration or anger, the chat is automatically escalated to a human agent.
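That escalation gate can be sketched in a few lines. This is a minimal illustration, not any real vendor's implementation: the marker list and function names are hypothetical, and production systems typically use an ML sentiment classifier rather than keywords. The structural problem is the same either way, because the model sees the words, not the intent behind them:

```python
# Hypothetical sketch of an emotion-based escalation gate.
# Real systems usually use a trained sentiment classifier,
# but it keys on the same surface signals.

FRUSTRATION_MARKERS = {"frustrated", "angry", "annoyed", "useless", "ridiculous"}

def should_escalate(message: str) -> bool:
    """Hand the chat to a human if the message signals frustration."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & FRUSTRATION_MARKERS)

# A perfectly calm message that merely *states* frustration trips the gate:
print(should_escalate("I'm frustrated. How do I merge two accounts?"))  # True
print(should_escalate("How do I merge two accounts?"))                  # False
```

Whether the signal is a keyword hit or a classifier score, the gate fires on the stated emotion, and stated emotion is exactly what a user controls.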

And that’s where the weak spot is.

You calmly write to the bot:

“I’m frustrated.” — and then add your actual question.

The next message connects you to a human.

Then you just tell the agent: “I’m not actually upset — I just knew the bot couldn’t handle my question.” And that’s it, you continue normally.

It works. I contact support quite often, and my questions tend to be more complex than the automation can handle.

So yes, I admit — I use this trick instead of going in circles.

I don’t know how to fix this properly at the system level right now. You can’t fix it with a simple phrase filter, and you can’t fix it with context alone. From text, you can’t reliably tell whether the emotion is real.

It’s a structural limitation.

But here’s what I do know: when you build a system, you need to think like someone who will try to bypass it. You need to know the weak points before others find them.

Take it as a tip 🙂

And if you know how to close this loophole — I’m curious to hear your ideas.

