It’s easy to get caught up in the hypestorm that surrounds machine learning (ML) and the broader notion of artificial intelligence. Like any interesting, newly accessible technology, AI could easily become the next impetus to create hordes of useless stuff — sexy solutions to problems that no one really has.
It’s also easy to treat these technologies with unfair cynicism. The UX community has rightfully mobilized to point out the drawbacks of building something just because we can. It’s certainly important (and at times, shamefully gratifying) to be the clever design thinker who sits back and asks, “So what problem are we really trying to solve?” But for cases when yes, AI is actually a good solution, we need a way to organize our thinking. We need to be mindful of all the UX implications that arise when technology acts on a user’s behalf.
To help our teams get up to speed on the key considerations for designing intelligent products (whether truly based in ML or not), we’ve been working to synthesize our own thoughts with some of the great ideas we’ve found in new articles, books, and discussion forums this year. Because we’re fans of visual thinking tactics, we’ve captured our findings in a canvas we’ve casually dubbed the “Smart Things Canvas” (I know — so inspired!).
If you’ve just been tasked with designing a great user experience for something “smart,” this canvas will equip you both to plot out the key facets of the end user experience, and to think critically about how the underlying machine will constrain or enable different types of interactions — all without the need for a degree. We’ve just started using this canvas in our own work, but if the fearmongering is true, our community needs to move fast to get every designer (and the people we work with) thinking critically about this stuff. So we’re sharing this as a work in progress, and we hope you’ll find it useful.
What are the premises that underpin the canvas?
- Both human cognition and machine cognition can be boiled down to a see-think-do loop. We can consider how the overall smart product will see, think, and do; likewise, for each of the product’s touchpoints, we can examine what a user sees, then thinks, then does. Credit where credit is due on this one: A big portion of the skeletal structure, and a good portion of the considerations within, are based in Chris Noessel’s ideas from Designing Agentive Technology. Chris is a superb thinker from IBM and he’s written a highly-accessible, practical guide. Go read that book.
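The see-think-do loop described above can be sketched in a few lines of code. This is a minimal illustration under assumed inputs, not a real implementation — the sensor readings and action names here are hypothetical stand-ins for whatever your smart thing actually observes and does.

```python
# A minimal sketch of the see-think-do loop, for illustration only.
# The observations and actions below are hypothetical stand-ins.

def see():
    """Gather raw observations (sensor readings, user events, etc.)."""
    return {"temperature": 19.0, "user_present": True}

def think(observation):
    """Decide what to do based on what was observed."""
    if observation["user_present"] and observation["temperature"] < 20:
        return "turn_heating_on"
    return "do_nothing"

def do(action):
    """Carry out the chosen action in the world."""
    print(f"Executing: {action}")

# One pass through the loop; a real agent would run this continuously.
do(think(see()))
```

The same loop applies on the human side: at each touchpoint, ask what the user sees, then thinks, then does in response to the machine.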
- Intelligence isn’t an all-or-nothing affair. We believe that things, like human beings, might seem smarter in some areas than others, so understanding how users will perceive the intelligence of a thing is more important than debating whether something is truly artificially intelligent. This also means the canvas can be used for tools that involve no ML whatsoever.
- Designers should have at least a working understanding of what makes data easier or harder for machines to work with in scenarios where algorithms are involved.
What’s the canvas good for?
- Thinking holistically about what typically goes into a smart thing or a smart interface for a thing.
- Thinking about the human actions for different interactions with the smart thing.
What isn’t the canvas good for?
- Planning out the specifics of a particular interaction. We were tired of receiving “best practices for CUI” articles as an answer to the question of how to consider the experience of smart things. We wanted something that accounted for a deeper understanding of what including “smart” technology means, especially when you consider that smart things might have stupid interfaces, and vice versa. This canvas helps us think more broadly about the design of smart products, but it isn’t intended for the nitty-gritty of implementing a single feature.
What do we need to improve?
- It’s a bit unwieldy. In an attempt to be comprehensive, we’ve incorporated a lot. It’s a bit of a glorified checklist at the moment, so we’ll need to simplify things in our next iteration.
- It’s new; we don’t know what we don’t know yet, but as we uncover things we’ll refine this tool. If you get a chance to try it out, we’d love feedback on ways to improve it.
With that said, let’s take a quick tour through the canvas. Broadly speaking, the canvas contains major blocks for considerations that affect the machine, and ‘sub blocks’ for considering the user’s cognitive loop in the context of the machine’s operation.
Overview of the Smart Things Canvas
Section 1: Setup
In the setup column, we’re interested in articulating everything a smart thing needs in order to get up and running. The smart thing needs a set of rules on how to operate — things to watch for and actions to take when conditions are met — limited by its general capabilities. Before the user engages the system, the smart thing should also be able to simulate how things will go once the user hits play.
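As a thumbnail of what “rules on how to operate” might look like in practice, here’s a hedged sketch: each rule pairs a condition to watch for with an action to take, and a dry-run function lets you preview what would happen before the user hits play. The thermostat scenario, rule fields, and thresholds are all illustrative, not drawn from any particular framework.

```python
# Illustrative rule set for a hypothetical smart thermostat:
# conditions to watch for, paired with actions to take when met.
RULES = [
    {"watch": lambda state: state["temp"] < 18, "action": "heat"},
    {"watch": lambda state: state["temp"] > 26, "action": "cool"},
]

def simulate(rules, state):
    """Dry-run the rules against a state before the user 'hits play'."""
    return [rule["action"] for rule in rules if rule["watch"](state)]

print(simulate(RULES, {"temp": 16}))  # -> ['heat']
```

Surfacing this kind of simulation to users during setup is one way to build trust before the machine starts acting on their behalf.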
Section 2: See
How will your product take in data? This section is a great place to plug in the “Infer” section of the Periodic Table of AI, noting that each of the mechanisms that can help a machine observe the world (whether real or virtual) is an instance of a ‘smart thing’ itself.
Section 3: Think
We feel it’s important that product people of all stripes (not just engineers) understand, at least conceptually, how a smart thing might process information. Equipped with that understanding, designers can generate more creative and feasible solutions, and the entire team can think more critically about the possible outcomes a smart thing can deliver. In the “Think” section, we’re mainly concerned with the factors that contribute to data quality and the actual type of data processing in play.
Section 4: Do
In the “Do” section, our canvas provides space to explore the actions a machine can take, both in regular operation and in emergency situations, such as a prolonged power outage.
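The split between regular operation and emergency behavior can be made concrete with a small sketch. The state flags and action names below are hypothetical; the point is simply that emergency handling deserves its own explicit branch rather than being left implicit.

```python
# A hedged sketch of separating regular operation from emergency behavior.
def choose_action(state):
    """Pick the machine's next action, falling back safely in emergencies."""
    if state.get("power_outage"):
        # Emergency handling: preserve state and stop acting on the user's behalf.
        return "save_state_and_shut_down"
    if state.get("task_pending"):
        return "run_next_task"  # regular operation
    return "idle"
```

Walking through branches like these with your team is a quick way to surface the emergency cases the canvas asks you to plan for.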
Section 5: Disengage
How do we know when a smart thing has worn out its welcome? The Disengage section provides space to consider all of the triggers that might help a user understand when a machine is no longer helpful, or help a machine understand when a user no longer needs it.
Bonus: The Intelligence Graph
In Designing Agentive Technology, Noessel employs the term “agentive” to describe technology that acts on humans’ behalf to achieve some overall end goal (like driving you home), with only some nominal amount of user input. He compares agentive tech to assistive technologies, which merely support user task completion (like making wayfinding easier for a driver), and fully autonomous tech, which operates without human input. The Intelligence Graph helps you draw distinctions among these three types of “smart” and determine how your smart thing will be used.
With our canvas, we’re intentionally distorting and complicating (and in some ways contradicting) his definition. In our model, every major slice of a tool constitutes a subgoal with varying degrees of independence from user input. That means that portions of a tool can be considered agentive, while others are merely assistive. We feel that this more granular approach will stimulate deeper discussions about how and when your system will seem intelligent to users.
So that’s that. We hope you find the Smart Things Canvas useful as a starting point. We’ve included both a blank canvas and the ‘cheat sheet’ we use to stimulate discussions in the link out. The cheat sheet can be useful for ideation, while the canvas itself is more effective for really planning out the experience of a smart thing. Let us know if you find the canvas helpful and please provide feedback so we can continue to iterate!
Be sure to share the post if you found the article and Smart Things Canvas useful! And let us know what you think about it or if you’ve got any suggestions for improvement in the comments.