Note: A modified version of this piece is included in the new book Tech Humanist.

Let’s say you have a question regarding your bank account, so you visit your bank’s website to find a phone number to call them. While you’re looking for that, a chat box pops up in the corner of the window, asking if you’d like to chat live with an agent for help. “Why not?” you think to yourself, and click to start the conversation.

So far so good, but let’s say your question is how to protect your account from your ex, who may still have access. You explain the situation, and the agent asks you questions about your account, eventually recommending that you create a new, separate account into which you can move your money.

At any point, are you concerned that the agent has perhaps not fully understood your situation before making this recommendation? Would your comfort level change if you knew that the agent was, in fact, a rules-based, machine-driven chatbot?

Let’s pause to define some terms. For the purposes of this discussion, a bot is the conversational interface for a product, brand, or other entity. It uses programmed logic and, in some cases, machine learning to determine how to interact within a specified topic or function, like placing an order or initiating a customer support request.

It’s you interacting with a machine in a very conversational manner.
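To make that a little more concrete, here’s a minimal sketch, in Python, of the kind of programmed logic a simple rules-based bot might run. The keywords and canned responses are made up for illustration, not drawn from any real product.

```python
# A minimal sketch of the "programmed logic" behind a simple rules-based bot:
# keyword rules map what the user typed to a predefined topic, and each topic
# has a scripted response. (Illustrative only; not any vendor's actual logic.)

RULES = {
    "order": ["order", "buy", "purchase"],
    "support": ["help", "problem", "issue", "broken"],
    "password": ["password", "login", "log in", "locked out"],
}

RESPONSES = {
    "order": "Sure, what would you like to order?",
    "support": "I can open a support request. Can you describe the issue?",
    "password": "I can help you reset your password. What's your username?",
    "fallback": "Sorry, I didn't catch that. Could you rephrase?",
}

def classify(message: str) -> str:
    """Pick the first topic whose keywords appear in the message."""
    text = message.lower()
    for topic, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return topic
    return "fallback"

def reply(message: str) -> str:
    return RESPONSES[classify(message)]

print(reply("I think my account is broken"))   # -> support response
print(reply("I'd like to buy a gift card"))    # -> order response
```

Even when the surface feels conversational, the logic underneath can be as simple as match, route, reply.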

So in the example above, should the bank have disclosed that you were chatting with a bot? Should it be required to? If so, at what point? At the beginning, or only once your interaction reached a certain level of sophistication?

In other words, would it be OK if the extent of your interactions with the bot were only about logging in to the website and retrieving a forgotten password?

Various banks have tried this. Automating one of the most frequent and simplest customer support issues has offered some banks considerable potential return. And customers don’t seem too bothered by it; after all, it’s not that different from clicking a “forgot password” link and having a system email you a password reset link.
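For a rough sense of why this particular case is so easy to automate, here’s a sketch of what the bot’s side of a password-reset conversation boils down to. The chat_session, customer_db, and email_gateway objects, and the reset URL, are hypothetical stand-ins for a real bank’s systems, not any actual API.

```python
# A rough sketch of the "forgot password" flow a bank's bot might automate.
# The helper objects and URL below are hypothetical placeholders; the point is
# that the bot is doing the same thing a "forgot password" link does.

import secrets

def handle_password_reset(chat_session, customer_db, email_gateway):
    email = chat_session.ask("What's the email address on your account?")
    customer = customer_db.find_customer(email)
    if customer is None:
        # Don't reveal whether the account exists; respond the same either way.
        chat_session.say("If that address is on file, a reset link is on its way.")
        return
    token = secrets.token_urlsafe(32)  # single-use reset token
    customer_db.store_reset_token(customer.id, token, ttl_minutes=30)
    email_gateway.send_email(
        to=email,
        subject="Reset your password",
        body=f"Use this link within 30 minutes: https://bank.example/reset/{token}",
    )
    chat_session.say("If that address is on file, a reset link is on its way.")
```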

Consumers Want Results

In a 2016 study from Aspect, a company that works in customer service optimization, over 70 percent of consumers surveyed said they wanted the ability to solve most customer service issues on their own. Almost half said they’d prefer to conduct all customer service interactions via text, chat, or messaging “if the company could get it right.” And most indicated that they already interact with an intelligent assistant or chatbot at least once per month.

In fact, several experiments from Goldsmiths University and global media agency Mindshare concluded that most consumer attitudes fall somewhere between “would consider” and “would prefer” when it comes to communicating with a chatbot while interacting with a business or brand.

But there’s a catch: Almost half of the respondents in the Goldsmiths and Mindshare research said it would feel “creepy” if a bot pretended to be human.

Even at this early stage, there’s an expectation that if you interact with a brand’s bot, you’re dealing with a machine, and it should feel like you’re talking to one. If you’re offered fixed options and limited syntax, most people with any familiarity with bots would assume they’re interacting with a machine and would be surprised to learn that a human was involved at all.

The implication seems to be that consumers prefer whichever option, bot or human, takes care of their issue, but they don’t want to feel deceived in the process, and the distinction should be clear.

For perspective, I asked Ian Barkin, co-founder and Chief Strategy Officer at Symphony Ventures, a consulting firm focused on automation and the future of work, and a member of the IEEE Working Group on Intelligent Process Automation. “My PoV has always been that bots ‘enable people to do their best work,’” he told me. “So, there should be no shame in divulging when/where a bot is doing the routine so that, when you need the real hand-holding and support, there are good people who can spend the right amount of time supporting you.”

But part of the challenge in transitioning to a more machine-scaled economy is that the larger cultural conversation surrounding automated interactions still treats human and machine roles as sharply divided. In reality, as Barkin says, a lot of customer support and technical support environments are not exclusively one or the other, but rather human plus automation: automation scripts handle the most frequently asked questions and the ones with the simplest answers, with humans nearby to fill in around the edge cases.

This leaves the waters a bit murky when it comes to considering where and when a bot should have to disclose that it’s a bot. For clarity, perhaps it’s worth pondering why we might think we need to know when a bot is a bot. The implication is that there’s risk, so what exactly is it that’s at risk here?

Humans Are Hesitant to Go All In

Almost everyone I asked was initially bullish on bots for the sake of efficient customer service interactions, but softened and began to express reservations when they considered the example of interacting with a bot without knowing it was one.

None of us exist purely as consumers all of the time. It’s a role we inhabit from moment to moment, just like a parent, student, user, visitor, and so on. We’re humans, first and foremost, and our interactions with our fellow humans are sophisticated, nuanced, emotionally intelligent, and rich. So part of what we envision when interacting with a human customer service agent is a match of human wit against human wit — a “fair fight.” Intuitively, we may feel we can use our instinct, guile, and cunning to persuade a human to resolve an issue in our favor, or to resist a human’s efforts to upsell or persuade us toward a particular outcome. We can use deductive reasoning, emotional intelligence, and all of the other tools we may have within us to deal with these situations. But pitted against a machine with — at least theoretically — access to vast arrays of data from which to draw patterns, with unyielding algorithmic logic, and potentially neural network learning systems, the scales now seem tipped against us.

To be clear, though, at the moment most chatbots aren’t driven by artificial intelligence but by rules-based processing, so in a support setting they’re fundamentally following the same rules a human agent would have to follow. That means it may make little difference in execution whether a bot or a human is conducting the interaction.

(Not always very sophisticated rules-based processing, either.)

Still, for most of the people I surveyed, it feels like we deserve to know when we’re dealing with a machine that’s mimicking human interactions.

This hesitation may come from the fact that we don’t yet have much practice interacting with machines in situations where we typically rely on humans, and where the interaction is therefore ambiguous.

After all, most of the automation that’s been introduced into our interactions over the years has been pretty overt: We know when we’re at an ATM that we aren’t dealing with a human teller. There’s no sweet-talking the ATM into giving you $5 bills; it’s going to spit out $20s and you’re going to deal with it.

Reality Is Complicated

In developing these systems, there is sometimes a transitional stage where interactions are scripted but humans drive the “back end” of the interaction. This always reminds me of Moviefone Kramer: the episode of Seinfeld where Kramer gets a new phone number and it’s just one digit off from Moviefone, the pre-internet era automated phone service that you could call to determine movie showtimes. When George calls Kramer thinking he’s called Moviefone, and Kramer can’t decipher George’s touchtone entries, he amends the instructions: “Why don’t you just tell me what movie you’d like to see?”

This is the sort of inverse uncanny valley of automated interaction: where a human interacts with a system expecting a machine and gets a human instead. So there’s not always a clear-cut distinction between when you’re interacting with a human and when an automated process has intervened.

There’s also a tremendous range of applications, each with different considerations, different sensitivity needs, and a different sense of what human-like respect requires. Retail, for example, may be one of the more obvious, and in some ways least consequential, applications of conversational bots. But when you really think about the many contexts in which a bot could be interacting with a human, the list is nearly endless: financial services, healthcare, traveler support, entertainment, public safety, education, even therapy. So if all of these interactions potentially need some kind of disclosure that they are bot-based, the line between where it needs to be disclosed and where it doesn’t gets pretty blurry.

Each of these contexts may have varying practical considerations that would determine whether disclosure is important. So before any regulations might be proposed, there would have to be a pretty thorough assessment of the breadth of opportunities and how they might impact humans.

In the meantime, we can probably assume that more and more of our interactions are going to be augmented by automation and machine learning in some way. What’s important for companies developing their intelligent assistant programs, beyond disclosure, is to design with human need in mind. For example, they should ensure that the transition from bot to human is seamless: 88 percent of customers said they expect a live agent who steps in after a bot begins the interaction to have all of the context (name, account number, and so on) that the customer has already provided. It’s about efficiency, sure, but it’s also about respect and human consideration, and no matter what, we can always use more of that.
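As one way to picture that kind of seamless handoff, here’s a minimal sketch of a bot passing its gathered context along to a live agent. The data model and the agent_queue hook are hypothetical, not any particular vendor’s API.

```python
# A minimal sketch of a bot-to-human handoff that preserves context. The data
# model and escalate hook are hypothetical; the point is that everything the
# customer already told the bot travels with them to the live agent.

from dataclasses import dataclass, field

@dataclass
class ChatContext:
    customer_name: str
    account_number: str
    issue_summary: str
    transcript: list[str] = field(default_factory=list)

def escalate_to_agent(context: ChatContext, agent_queue):
    """Hand the conversation to a live agent along with everything gathered so far."""
    agent_queue.put({
        "customer": context.customer_name,
        "account": context.account_number,
        "summary": context.issue_summary,
        "transcript": context.transcript,  # agent sees the full bot conversation
    })

# The agent's console would render this payload, so the customer never has to
# repeat their name, account number, or the problem they already described.
```

The details would vary by platform, but the principle is the one above: nothing the customer has already said should have to be said twice.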