It started with something small.
I caught myself using "please" and "thank you" in my interactions with the AI tools that our team at BehaviorSMART was starting to use. At first, it felt slightly irrational, but also worth noticing. Then I started seeing posts and articles discussing how people tend to be polite to chatbots—and how little logic there is in that. Why would you thank a system that does not feel, perceive, or care?
Thinking about my thank-yous, I realized this is exactly what I would do when collaborating with a member of our team. In essence, I had simply transferred my coworker behavior to my working relationship with AI.
It's not surprising. AI speaks to us in language, responds instantly, and mirrors conversation. It feels natural to interact with it the way we interact with people. And because the experience is designed to feel human, we bring with us the same expectations we use in human-to-human interactions.
And while this human-like design makes it easy to engage with AI, it does not make AI human. It may feel familiar, but it does not operate with the same assumptions and expectations that shape our interactions with other people.
This becomes visible in simple, everyday moments.
You ask a question, and within seconds you get a clear, well-written response. It sounds confident and complete—almost like something a very reliable and highly competent colleague would say. And more often than not, that's enough. You move on.
Because in most human interactions, a good answer—especially one delivered with confidence—signals that the thinking is done, and that there is little reason to question it.
But in this case, there is reason to pause.
AI is designed to provide quick and confident responses, even when they are not fully grounded in solid facts, verified data, or unquestionable truths. It gives the signals that create a perception of reliability, while the underlying reasoning may be less stable than it appears.
Why is this interesting from a behavioral point of view?
On one hand, the design of the AI experience is intentionally based on human-to-human interaction, making it easy and intuitive to engage.
On the other, the principles that characterize the "behavior" of AI are fundamentally different from those that guide human coworker behavior.
This creates a subtle but important gap and raises a broader question:
How do we manage our behavior in this AI × HI relationship so that it is shaped not by default assumptions or inherited habits, but by a more deliberate and thoughtful approach?
One that maximizes value for us as individuals, for our work, and for the systems we operate in.
If we are indeed in a new kind of relationship, it may require a slightly different way of showing up in it.
A useful way to think about this is to be a bit more intentional in how we engage with AI—not as a coworker who thinks like us, but as a system that responds to how we think and interact with it, and that may carry its own biases that need to be understood and corrected.
At the same time, some of our existing behaviors still have value.
Continuing to apply simple principles of coworker politeness—like saying "please" and "thank you"—may not matter to the AI, but it matters to us. It shapes the tone of the interaction and contributes to a sense that we are operating in a respectful and constructive working environment.
And perhaps that is part of what this new relationship is about.
This article is the starting point of a broader exploration.
Together with Vlad Sterngold, I will be exploring what it means to build a truly effective symbiosis between human and artificial intelligence—not only from a technical perspective, but from a deeply human one.
Care to join us on this journey?
*This article reflects an AI × HI collaboration. The ideas and perspective are my own, developed through iterative interaction with AI as a thinking partner to sharpen structure, clarity, and expression.