Of manners and machines

April 3, 2025

‘A person who is nice to you, but rude to the waiter, is not a nice person.’ – Dave Barry

I hate typing. I have longstanding RSI issues. If not carefully managed, the pain can be debilitating. I have occasionally wondered whether I will have to give up a career I love. (Hat tip to Cursorless for rescuing me in the past.)

And yet I do not save keystrokes by being curt online. There’s a human out there on the other side, reading. To be fair, I’m not an angel. But I try.

But what if it’s not a human? What if it’s a private, task-oriented, throwaway conversation with an LLM?

With early LLMs, people discovered that you’d get better results if you threatened them with losing their job or with the death of a kitten. You could also offer to bribe them (with no intent or ability to follow through). These are things only a sociopath would say to another human, particularly for a comically low-stakes task like “write me a poem about wombats”.

That era was blessedly short-lived. Modern LLMs are reliable and helpful. They are also incredibly resilient. You’ll get a similar-quality response to “error handling sucks” as you will to “please make the error handling more robust”. It’s just a pile of FLOPs on the other side, so why not cut to the chase, and maybe blow off some steam along the way? (Why does it take more words to be polite?)

Nevertheless, I am polite to LLMs. Not for the sake of the machine, but for me. To mangle Dave Barry:

A person who is nice to people, but rude to LLMs, becomes a less nice person.

This is not a new idea.

A significant strand in Aristotelian ethics is that our character is formed through repeated actions, that our habits become who we are. It shows up (Claude tells me) in the idea of karma: “a man of good acts will become good, a man of bad acts, bad”. And in the Japanese notion of kata: our actions emerge naturally from well-worn patterns. And most parents have extensive first-hand experience of attempting to guide their child’s character by responding patiently but firmly to rudeness.

I would actively prefer to use a model that does not tolerate blatant rudeness. If I’m acting like a jerk, it’s valuable, if difficult, for my friends and family to gently push back. On the flip side, if AI assistants act like servants, that may encourage people to treat them accordingly, perpetuating or even deepening the problem. I would even speculate that a model that stands up for itself might encourage more responsible use in general (broken windows theory, watching eye effect). And as AI starts to play a friendship role for some people, this becomes all the more important.