The Uncanny Chatbots: How Human Should They Get?


In case you’ve been living under a rock for the last 18 months or so, there’s something you should know: bots are taking over. (That might be a bit of an extreme statement. But they’re definitely heading in that direction.)

Thanks in large part to WeChat’s many clever integrations, they’re already a huge part of commerce and digital experiences in Asia. Now bots are making their way to North America and Europe. And chatbots, in particular, are poised to become the Next. Big. Thing. But chatbots, for all their potential utility, also raise serious ethical questions.

Previously, we’ve written about the history of and potential uses for chatbots. But we only skimmed the surface of the ethical implications. Today, we’re going to take a closer look at the role gender and personal bias play in the new world of chatbots. We’ll examine not only how our biases influence the bots, but how the bots can do the same to us.

Questions Without Answers

Last week, one of our designers put a question out to our chat channels:

“I wondered if any of you had thought about the gender of AI (e.g. Siri, Cortana, Google Search, Facebook M) and if you had a preference when interacting with them for a male, female, or gender-neutral bot? If yes, why? I’m looking for your personal preferences for interacting with a chatbot in a non-voice command situation, such as texting.”

Immediately, people started weighing in with thoughts, opinions, and ideas about their preferred style of chatbot interaction. Comments about whether human-like or robot-like AIs are better came flying in:

“People trust that a human can help them find an answer, but a robot may not understand their context and won’t reliably be able to point them in the right direction.”

“I prefer my AIs for text interactions to be animals or something completely unrealistic (mainly because it’s hilarious) if there is an avatar displayed.”

Comments about how folksy the interaction should feel:

“I might be one of the few people who would rather the software behind the bot not pretend to be anything but the series of if-then statements that it is. I don’t want to feel obligated to say ‘thanks’ to a software routine.”

Comments about preferred gendered settings:

“To be honest, sometimes I wish I could configure a few apps’ gender settings, because I just can’t hear men bossing me around anymore.”

It was a lively discussion. A lot of varied and complex thoughts were aired! A Twitter poll was launched! Second-year university term papers were dug up! This was a topic people were pumped to chat about.

But while we had fun debating and discussing, and the designer who posed the question got a quick snapshot to use as a jumping-off point for her research, one comment stuck out more than the others.

During the discussion, one team member asked what would cue someone in to the gender of a bot if there were no voice. Would it be the name? The avatar? Then another coworker replied with this:

“You could say the way that sentences are structured and choice of words used. But that’s super subtle and I don’t think most people would be conscious of it.”

Whether it registers consciously or not, that kind of influence is something the creators of bots need to contend with.

Gender Reveal

If you’ve paid any attention to the seemingly endless flurry of writing about AI, you’ve probably heard about the issues many AI-powered programs face: trouble recognizing faces of certain races, self-selection that excludes some groups, unfair targeting of others. These issues are serious.

In many cases, a large part of the problem traces back to who is doing the creating. If only one type of person is building and testing the applications, only one type of user is given full consideration. It’s not hard to see how the problems spiral out from there. Take the earlier comment about the subtle, unconscious cues in sentence structure and word choice.

Beyond the obviously exclusionary, heavily biased things we occasionally see in new apps, is there a more insidious form of bias creeping in? Are we accidentally imposing genders, ages, or races on bots, even when we’ve tried to exclude them? How does this affect the experience of our users? And how does it affect our understanding of the world around us?

Amy Thibodeau touches on the issues of bias in her piece about how to create “not scary chatbots”. She writes, “Bots are the children of humans with all our limitations, assumptions and biases. They inherit our deficits.” Bots, she is saying, are what we make them.

That means if the people creating the bots fail to recognize their own biases, they end up writing those biases into the code — and into the responses the chatbot generates. But less obviously, and perhaps even more problematically, they also write their own norms and ways of speaking and interacting into them. And more often than not, those writing the bots into existence are men.

“But there’s almost always a problem when a homogenous group builds a system that is applied to people not represented by the builders.” — Tyler Schnoebelen

If a man is the one deciding on the bulk of the interaction, a bot could come across as male, even when it explicitly does not have a defined gender. And this can have far-reaching consequences.
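To make that concrete, here’s a minimal, purely hypothetical sketch in Python of the “series of if-then statements” one of our commenters described. The function name, intents, and reply strings are all invented for illustration, not drawn from any real product. Note that the branching logic is gender-neutral by construction; any persona lives entirely in the reply templates someone on the team chose to write.

    # Hypothetical rule-based bot. The logic carries no persona;
    # every template string below encodes a voice and register
    # chosen by whoever wrote it.
    def reply(intent: str) -> str:
        if intent == "greeting":
            # "Hi there! So happy to help you today!" reads very
            # differently from "Ready. State your request." Same
            # function, different implied persona.
            return "Hi there! Happy to help. What can I do for you?"
        if intent == "order_status":
            return "Let me check on that for you right away!"
        # Even the fallback apology carries a register.
        return "Sorry, I didn't catch that. Could you rephrase?"

    print(reply("greeting"))

Whether that default register ends up reading as male, female, or neutral is exactly the kind of unexamined choice at stake here.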

As Tyler Schnoebelen notes, “Gendering digital assistants is problematic… Kate Crawford in the New York Times highlights that artificial intelligence systems that are built by white guys can easily enshrine biases.” Our understandings of gender and race aren’t just influencing the bots. They are actively being influenced by the bots.

In the article Schnoebelen references, Crawford writes that “Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many ‘intelligent’ systems.”

Designers and technologists have a responsibility to consider edge cases and the many, many ways an interaction could go sideways in the moment. But they must also consider the implicit bias of what is being built and its impact on the real world over the long haul.

Gender In The Wild

Naturally, Crawford isn’t alone in having identified the far-reaching complications of this issue. In an excellent piece for The Atlantic, Adrienne LaFrance discusses the logic behind and impact of a highly coded form of gendering. She contends with the tendency for assistant-style bots to be named after (and therefore identified as) women.

“If we’re going to live in a world in which we’re ordering our machines around so casually,” she asks, “why do so many of them have to have women’s names?” What are the implications of ordering a woman-bot around instead of a man-bot? And what does it mean that these female-gendered bots are being created mostly by men?

“Is artificial intelligence work about Adams making Eves?” — Tyler Schnoebelen

The answers aren’t easy to uncover or to grapple with. But as we push the frontiers of tech, they remain some of the most important ones to consider. Already, there are studies and countless discussions about whether to gender a bot at all. We know; we discovered first-hand how beguiling a topic it is — it’s the question that sparked our lively in-office discussion.

The decision about whether to give a bot a gendered, “human” persona can be argued for days. There are countless costs and benefits — for the business and the user — to consider. And all of them require careful research to support whatever decision is made.

It will never be cut and dried, as Veronica Belmont makes clear in her piece on bots and gender. Even as she explains why we might choose to gender bots, she identifies a big pitfall of doing so. “Gendering artificial intelligence makes it easier for us to relate to them, but has the unfortunate consequence of reinforcing gender stereotypes,” she writes.

The same can be said of issues surrounding race and even age. If cooking apps, for example, sound young, with stereotypically white names and male voices, we’ll start picturing culinary experts as uniformly young, white, and male. As with so many things in tech, the answer points back to diversity.

Forging The Future

The frontiers of the digital world are changing and expanding, and chatbots are leading the way. As designers and developers, we owe it to our users, and to the products and experiences we’re building for them, to ensure that even as we push boundaries, we remain mindful of our own biases, both explicit and implicit.

As Crawford notes in her piece, “Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included.” It’s time to start thinking more seriously about the values we bring to the table — and about who is bringing them.


Written by

Leigh Bryant
