Hand-off to human
https://smarterpatterns.com/patterns/161/hand-human
Topic: Transparency & Trust

Problem:
Sometimes an AI system doesn't work the way the user wants it to, or the user simply isn't comfortable using an AI-driven system.

[Figure: An airline chatbot allows the user to switch to direct human interaction when it is unable to complete the task as requested.]

Solution:
The system should provide a means of handing the process over to a human agent. The user and the human agent can then complete the process either via live chat in the app, or offline by phone or in a showroom.

Discussion:
Prioritising a hand-off to human agents obviously reduces the efficiencies that automation brings to business processes. But there will always be categories of complex issues that fall outside an AI's capabilities and that human agents are better able to solve. Beyond complexity, where a relationship requires empathy, passion, emotion, or another form of authentic human connection, simulating that connection via AI is still a greater challenge than simply employing human agents to make it with the user.

[Figure: Airlines are doing a good job of handing customers off to a real person when a virtual agent is unable to fulfill a task.]
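As a rough illustration of how such a hand-off might be triggered in a chatbot backend, the sketch below escalates to a human agent when the user explicitly asks for a person or when the bot's intent classifier is not confident. The phrase list, the 0.4 threshold, and the escalateToAgent hook are assumptions made for the example; the pattern itself doesn't prescribe any particular mechanism.

```typescript
// Sketch only: one way a bot might decide to hand off to a human agent.
// The confidence threshold, phrase list, and escalateToAgent hook are
// illustrative assumptions, not a specific vendor API.

interface IntentResult {
  intent: string;
  confidence: number; // 0..1 score from whatever NLU model is in use
}

interface BotReply {
  text: string;
  handedOff: boolean;
}

const HANDOFF_PHRASES = ["talk to a human", "real person", "speak to an agent"];

function wantsHuman(message: string): boolean {
  const lower = message.toLowerCase();
  return HANDOFF_PHRASES.some((phrase) => lower.includes(phrase));
}

async function handleMessage(
  message: string,
  classify: (m: string) => Promise<IntentResult>,
  escalateToAgent: (m: string) => Promise<void>,
): Promise<BotReply> {
  const result = await classify(message);

  // Hand off when the user asks for a person, or when the bot is unsure.
  if (wantsHuman(message) || result.confidence < 0.4) {
    await escalateToAgent(message);
    return {
      text: "I'll connect you with one of our agents who can help you directly.",
      handedOff: true,
    };
  }

  return { text: `Handling "${result.intent}" for you now.`, handedOff: false };
}
```

Keeping the escalation decision in one place also makes it easy to tune how eagerly the bot gives up and hands the conversation over.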
Appropriate Magic
https://smarterpatterns.com/patterns/156/appropriate-magic
Topic: Transparency & Trust

Problem:
A user wants to be awed, wowed, and amazed by a piece of software. Sidetracking to explain its inner workings can disrupt the flow of the experience and spoil the fun.

[Figure: The "WoeBot" encourages the user to think of it as human-like, building a more natural, personal rapport without diving into detail about how the AI that supports that persona works.]
Solution:
When appropriate, the system should obfuscate the underlying algorithm and instead use playful language to suggest that the system is more than just a math machine: it's magical!
Discussion:
This is (obviously) contrary to all the other patterns around user education and transparency, and so should be handled with care. That said, whatever we're designing, we should make deliberate choices about the presentation layer of an AI app. Even if the UI does little to steer the user toward a certain conceptual framework, the user builds a mental model regardless, and it may be very different from the designer's intent. Whether the framing is transparent, magical, or otherwise, designers should take the initiative in shaping that mental model rather than leaving it to chance.
Gender Neutral Bot
https://smarterpatterns.com/patterns/131/gender-neutral-bot
Topic: Fairness & Inclusiveness

Problem:
The user wants to interact with a bot that isn't characterized as being distinctly male or female.

[Figure: An app with a user-generated bot profile allows the user to choose non-binary options for the persona being created.]

Solution:
The bot character is portrayed as gender neutral, which can be conveyed through the pronouns it uses to refer to itself, its tone of voice, and its visual imagery or iconography.

Discussion:
Is it a problem for the user if a bot is understood as male or female? Not directly in terms of task completion, and much of the time it may have little impact on a user's impression of an app. But perceptions of gender vary, so if we want to be in full control of the impression our bot is making and mitigate risk, we should avoid overt gender coding altogether. There's also a moral imperative not to contribute to harmful gender stereotypes (for example, by aligning a subservient bot with a female persona or a powerful one with a male persona).

Focusing on gender is just one way to think about these issues; there are many other ways in which human-like traits in a bot can cause problems. Users who align with the traits may feel that their identities are being parodied or otherwise reduced to a stereotype, while users who don't align with them may feel they're not the intended audience, or that the system is designed in a way that dissuades them from greater participation. Steering clear of human-like traits altogether reduces that risk, and is an effective way to avoid the Uncanny Valley effect if nothing else.
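If it helps to make this a deliberate, reviewable choice rather than something that drifts through ad-hoc copywriting, the bot's self-presentation can be centralised in one configuration object. The sketch below is a minimal illustration under assumed field names, not a prescribed structure.

```typescript
// Minimal sketch: keep the bot's self-presentation in one place so gendered
// wording and imagery can't creep in through scattered copy. The field names
// here are assumptions made for this example.

interface BotPersona {
  name: string;
  selfPronoun: string; // how the bot is referred to in copy ("it", not "he"/"she")
  avatar: string;      // e.g. an abstract mark rather than a gendered face
}

const persona: BotPersona = {
  name: "Robin",
  selfPronoun: "it",
  avatar: "abstract-orb.svg",
};

// User-facing copy is built from the persona rather than hard-coded,
// so a single review of this object covers how the bot presents itself.
function describeSelf(p: BotPersona): string {
  return `${p.name} is a virtual assistant; ${p.selfPronoun} can help with bookings and account questions.`;
}
```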
Chat Presets
https://smarterpatterns.com/patterns/101/chat-presets
Topic: Autonomy & Control

Problem:
There are many potential conversational paths with a chatbot, and sometimes the user will type something unexpected that breaks the intended conversation flow.

[Figure: An app offers manual text input, pre-populated short-answer suggestions, and suggested options for larger, more informational outputs, with minimal user input required.]

Solution:
Just because an app is a chatbot doesn't mean that every response needs to be typed. The bot can prompt the user to select a binary yes/no answer, pick one of a range of preset answers, grab a slider, or even select an image or other media item as their response.

Discussion:
A UI that blends chatbot text entry with other input mechanisms can be the best of both worlds, presenting conventional UI elements at strategic moments while remaining embedded in a conversation that provides context and motivation for data entry.

Other Examples:
[Figure: An application offers preset text to help direct the conversation and limit the chances of breaking the conversational flow.]
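As a rough sketch of how a bot turn might carry preset answers alongside free text entry, the shape below mixes quick replies with a slider. The type and field names are assumptions for illustration and don't correspond to any particular chat framework.

```typescript
// Sketch: a single bot turn that offers preset inputs alongside free text.
// The message shape is an assumption for illustration, not a real SDK type.

type Preset =
  | { kind: "quick-reply"; label: string; value: string }
  | { kind: "slider"; label: string; min: number; max: number }
  | { kind: "image-choice"; label: string; imageUrl: string };

interface BotTurn {
  text: string;
  allowFreeText: boolean; // typing stays available; presets are shortcuts, not a cage
  presets: Preset[];
}

const moodCheckIn: BotTurn = {
  text: "How are you feeling right now?",
  allowFreeText: true,
  presets: [
    { kind: "quick-reply", label: "Pretty good", value: "good" },
    { kind: "quick-reply", label: "Not great", value: "bad" },
    { kind: "slider", label: "Rate your energy", min: 1, max: 10 },
  ],
};
```

Because each preset carries a machine-readable value, the bot doesn't have to parse free text for those answers, which is exactly what keeps the intended flow from breaking.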
Bot Conversation Starter
https://smarterpatterns.com/patterns/96/bot-conversation-starter
Topic: Autonomy & Control

Problem:
When confronted with an open-ended UI such as the text box of a chatbot, it can be bewildering for the user to know where to start.

[Figure: A chatbot application provides a prompt question with a binary yes/no answer that directs the conversational flow and guides the user through the functions of the app.]

Solution:
On first use, the bot asks questions that prompt predictable responses from the user, guiding feature discovery and setting expectations for the user experience to follow.

Discussion:
Interacting with bots is still a relatively new experience for many people, so curating pathways for new users is vital to creating a good first impression.

Dark Pattern: Faked AI
https://smarterpatterns.com/patterns/66/dark-pattern-faked-ai
Topic: Transparency & Trust

Problem:
Users want to know when real AI is deployed and when it's not. In particular, they don't want to be tricked into believing AI is processing their data when it's actually a manual process with human agents working behind the scenes.

[Figure: An app message says data will be "extracted automatically by end of the day". The likelihood that a truly AI-powered app would take this long is suspect, but the application doesn't acknowledge it, and users are left uncertain whether the AI is less powerful than they thought, or whether it isn't AI at all and instead relies on human intervention behind the scenes.]

Dark pattern response:
Whether via a non-learning algorithm masquerading as one capable of machine learning, or via human processing, the system pretends it's using AI when it really isn't.

Discussion:
Users are increasingly literate in what AI applications are capable of and what they cannot reasonably achieve. So if, for example, your app claims advanced optical character recognition (OCR) on borderline-illegible items but has a twenty-four-hour turnaround time for processing, a savvy user will immediately suspect that all is not as it seems and perceive that as a betrayal of trust. The end user probably wouldn't mind the difference between the two (human intervention vs. AI) as long as the expected result is achieved in a timely fashion. And if, in the example given, a twenty-four-hour lag really is required for AI processing, then designers should anticipate the user's suspicion and address it via Setting Expectations & Acknowledging Limitations (/patterns/21/setting-expectations-acknowledging-limitations).