Appropriate Magic <span>Appropriate Magic</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Tue, 09/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:
</strong></p> <p>A user wants to be awed, wowed, and amazed by a piece of software. Sidetracking to explain its workings can disrupt the flow of the experience and spoil the fun.</p> <figure><img alt="A screenshot of a conversational interaction with the &quot;WoeBot&quot;" data-entity-type="file" data-entity-uuid="ca7d0aa2-b3e2-49df-82b3-809fba9e333b" src="/sites/default/files/content-images/WoeBot_Appropriate_Magic_Edited_0.png" style="width:100%" /> <figcaption>The "WoeBot" encourages the user to think of it as human-like, to encourage a more natural, personal rapport, without diving into detail about how the AI that supports that persona works.</figcaption> </figure> <p><strong>Solution:
</strong></p> <p>When appropriate, the system should obfuscate the underlying algorithm and instead use playful language to suggest that the system is more than just a math machine — it's magical!</p> <p><strong>Discussion:
</strong></p> <p>This (obviously) runs contrary to all other patterns around user education and transparency, and so should be handled with care. That said, whatever we're designing, we should make deliberate choices about the presentation layer of an AI app. Even if the UI does little to try to steer the user toward a certain conceptual framework, the user builds a mental model regardless, and it may be very different from the designer's intent. Whether the framing is transparent, magical, or otherwise, designers should take the initiative in shaping this mental model rather than leaving it to chance.
</p> </div> Tue, 15 Sep 2020 00:00:00 +0000 leighbryant 156 at Bot Conversation Starter <span>Bot Conversation Starter</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 04/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">When confronted with an open-ended UI such as the text box of a chatbot, it can be bewildering for the user to know where to start.</p> <figure><img alt="Example of bot conversation starter with a binary yes/no prompt to help guide the user through a natural conversation flow" data-entity-type="file" data-entity-uuid="cccde5da-87ba-419a-8c5f-0a908516d595" src="/sites/default/files/content-images/Bot_Conversation_Starter.png" /> <figcaption>A chatbot application provides a prompt question that directs the conversational flow and guides the user through the functions of the app.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>On first use, the bot asks questions that prompt predictable responses from the user, guiding feature discovery and setting expectations for the experience to follow.</p> <p><strong>Discussion:</strong></p> <p>Interacting with bots is still a relatively new experience for many, so curating pathways for new users is vital for creating a good first impression.</p> </div> Wed, 15 Apr 2020 00:00:00 +0000 leighbryant 96 at Chat Presets <span>Chat Presets</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 03/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">There are lots of
potential conversational paths with a chatbot, and sometimes the user will type something unexpected that breaks the intended conversation flow.</p> <figure><img alt="Example of preset chat prompts that encourage engagement but don't require massive manual input on the part of the user" data-entity-type="file" data-entity-uuid="8c03eae7-581f-4ee4-9385-0dd5ed4f30bf" src="/sites/default/files/content-images/Chat_Presets-wysa_2.png" /> <figcaption>An app offers manual text input, pre-populated short answer suggestions, and suggested options for larger, more informational outputs with minimal user input required.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Just because an app is a chatbot doesn't mean that all responses need to be in the form of text entry. The bot can prompt the user to select a binary yes/no answer, select one from a range of preset answers, adjust a slider, or even select an image or other media item as their response.</p> <p><strong>Discussion:</strong></p> <p>A UI that blends chatbot text entry with other input mechanisms can be the best of both worlds, allowing the presentation of conventional UI elements at strategic times while remaining embedded in a conversation that provides context and motivation for data entry.</p> <p dir="ltr"><strong>Other Examples:</strong></p> <figure><b id="docs-internal-guid-90fe46a7-7fff-0a8e-5fa1-4cf6e91fe6d8"><img alt="Example of chat prompt button to encourage engagement without requiring massive manual input" data-entity-type="file" data-entity-uuid="acecdd0f-fa85-474d-8305-91887ef249c2" src="/sites/default/files/content-images/Chat_Presets-youper_0.png" /></b> <figcaption>An application offers pre-set text to help direct conversation and limit the chances of breaking the conversational flow.</figcaption> </figure> </div> Sun, 15 Mar 2020 00:00:00 +0000 leighbryant 101 at Gender Neutral Bot <span>Gender Neutral Bot</span> <div> <div>Application</div> <div><a
href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Sat, 12/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The user wants to interact with a bot that isn't characterized as being distinctly male or female.</p> <figure><img alt="Example of gender neutral options" data-entity-type="file" data-entity-uuid="b02c31dd-afb0-4bbf-bfc8-6195b530d675" src="/sites/default/files/content-images/Gender_Neutral_Bot-replika_0.png" /> <figcaption>An app with a user-generated bot profile allows the user to choose non-binary options for the persona being created.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The bot character is portrayed as gender neutral, which can be conveyed via the pronouns it uses to refer to itself, the tone of voice it uses, the visual imagery or iconography, etc.</p> <p><strong>Discussion:</strong></p> <p>Is it a problem for the user if a bot is understood as male or female? Not directly in terms of task completion, and much of the time it may have little impact on a user's impression of an app. But the perception of gender varies, so if we want to be in full control of the impression our bot is making and mitigate risks, we should avoid overt gender coding altogether. There's also a moral imperative not to contribute to harmful gender stereotypes (for example, by aligning a subservient bot with a female gender or a powerful one with a male one).</p> <p>Focusing on gender is just one way to think about these issues; there are many other ways in which human-like traits in a bot can cause issues.
Users who align with the traits may feel like their identities are being parodied or otherwise reduced to a stereotype, and users who don't align with the traits may feel they're not the intended audience, or that the system is designed to dissuade them from greater participation. Steering clear of human-like traits altogether can reduce risk, and is an effective way to avoid the Uncanny Valley effect if nothing else.</p> </div> Sat, 15 Dec 2018 00:00:00 +0000 leighbryant 131 at Hand-off to Human <span>Hand-off to Human</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Mon, 10/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong><br /> Sometimes, an AI system doesn't work as the user wants it to, or they're not comfortable using an AI-driven system.</p> <figure><img alt="Screenshot of a bot hand-off in an airline app" data-entity-type="file" data-entity-uuid="a5e80bd4-b6ca-419e-897a-32b7251e5481" src="/sites/default/files/content-images/KLM_HumanHandoff_Edited.png" style="width:100%" /> <figcaption>An airline chatbot allows the user to switch to direct human interaction when it is unable to complete the task as requested.</figcaption> </figure> <p><strong>Solution:</strong><br /> The system should provide a means to hand the process over to a human agent. The user and the human agent can complete the process either via live chat in the app, or offline via phone or in a showroom.</p> <p><strong>Discussion:</strong><br /> Obviously prioritising a hand-off to human agents reduces the efficiencies that automation brings to business processes. But there will always be categories of complex issues that fall outside of an AI's capabilities, which human agents are better able to solve.
Beyond complexity, where a relationship requires empathy, passion, emotion, or another form of authentic human connection, simulating this via AI is still a greater challenge than simply employing human agents to make that connection with the user.</p> <figure><img alt="Another screenshot of a chatbot giving the user the option to chat with a human instead of a bot" data-entity-type="file" data-entity-uuid="ca9c9712-e0b0-4a68-bfa1-bf5cd3969a27" src="/sites/default/files/content-images/AirFrance_HumanHandoff_Edited.png" style="width:100%" /> <figcaption>Airlines are doing a good job of handing customers off to a real person when a virtual agent is unable to fulfill a task.</figcaption> </figure> </div> Mon, 15 Oct 2018 00:00:00 +0000 leighbryant 161 at Relationship Building <span>Relationship Building</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>mattiealston</span></span> <span>Sat, 07/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>The user wants to trust an AI system but finds the concept of an artificial intelligence worrying.</p> <figure><img alt="A screenshot of the chatbot from the Replika app." 
data-entity-type="file" data-entity-uuid="fc83ec56-63ca-40e5-9567-7a7cd73a86f3" src="/sites/default/files/content-images/Relationship_building.png" style="width:100%" /> <figcaption>The bot in Replika is designed to learn about the user and adapt its behaviour accordingly, with the aim of building a real relationship with them.</figcaption> </figure> <p><b>Solution:</b></p> <p>Using appropriately memorised user details and a friendly tone, over repeated interactions the system works to build a relationship with the user and alleviate their anxiety.</p> <p><b>Discussion:</b></p> <p>From Skynet to the Matrix, we're bombarded with the idea of evil computer intelligences taking over the world. It's no surprise, then, that many users will view AI-driven software with suspicion, especially with anything that approaches our sci-fi notions of AI, like conversational interfaces and robots. Establishing the correct tone is vital: even a friendly, helpful tone can come across as sinister. Hence the importance of building a relationship based on honesty, in which trust can grow over time, just as it would with any new person, artifact, or institution.</p> </div> Sat, 15 Jul 2017 00:00:00 +0000 mattiealston 326 at Upgradable Algorithm <span>Upgradable Algorithm</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>mattiealston</span></span> <span>Sun, 01/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>The user wants to try out a service with a more limited version of the operating AI, either because it is free to use, or because it is less resource intensive. They then want to upgrade to the more powerful AI as needed.</p> <figure><img alt="A screenshot from the PictureThis app showing an option to pay for a more powerful algorithm."
data-entity-type="file" data-entity-uuid="20f99629-8364-4e54-8a17-58132c9839d4" src="/sites/default/files/content-images/Algorithm_upgrade.png" style="width:100%" /> <figcaption>PictureThis offers a premium option with a more powerful algorithm.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system provides controls to switch between different versions of the AI. The inferior version may have fewer features, less accuracy, or a more limited dataset. The user can either freely switch between versions (to manage their computing resources), or alternatively unlock a more powerful algorithm via a paid premium upgrade.</p> <p><b>Discussion:</b></p> <p>In cases where the system does provide a choice of AI power levels, it is important to communicate to the user exactly what the limitations of the lower level and the benefits of the higher level are. It is also advisable to offer realistic recommendations on the appropriate level for the user — often the higher-powered algorithm may actually be unnecessary for the user’s tasks, and be inefficient compared to the lower-powered version.</p> </div> Sun, 15 Jan 2017 00:00:00 +0000 mattiealston 261 at Dark Pattern: Faked AI <span>Dark Pattern: Faked AI </span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Sat, 10/15/2016 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to know when real AI is deployed and when it's not.
They especially don't want to be tricked into believing AI is used to process data when it's actually a manual process with human agents acting behind the scenes.</p> <figure><img alt="Screenshot of an app message saying data will be &quot;extracted automatically by end of the day&quot;, which implies AI but more likely is being done by a human or other non-AI alternative based on the length of time it takes for the operation." data-entity-type="file" data-entity-uuid="e375fb26-c544-4569-91a9-c2c8fc60bb6b" src="/sites/default/files/content-images/Faked_AI-autofyle.png" /> <figcaption>It is doubtful that a truly AI-powered app would take this long, but the application does not acknowledge the delay, and users are left uncertain whether the AI is less powerful than they thought, or whether it isn't AI at all and instead relies on human intervention behind the scenes.</figcaption> </figure> <p><strong>Dark pattern response:</strong></p> <p>Whether via a non-learning algorithm masquerading as one capable of machine learning, or via human processing, the system pretends it's using AI when it really isn't.</p> <p><strong>Discussion:</strong></p> <p>Users are increasingly literate in what AI applications are capable of and what they cannot reasonably achieve. So if, for example, your app claims to have advanced optical character recognition (OCR) on borderline illegible items but a twenty-four hour turnaround time for processing, then a savvy user will immediately be suspicious that all is not as it seems and perceive that as a betrayal of trust. The end-user would probably not mind the difference between the two (human intervention vs AI) as long as the expected end result is achieved in a timely fashion.
And if, in the example given, a twenty-four hour lag really is required for AI processing, then designers should anticipate the user's suspicion and address it via <a href="/patterns/21/setting-expectations-acknowledging-limitations">Setting Expectations &amp; Acknowledging Limitations</a>.</p> </div> Sat, 15 Oct 2016 00:00:00 +0000 leighbryant 66 at Dark Pattern: Faked Human <span>Dark Pattern: Faked Human</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>mattiealston</span></span> <span>Thu, 09/15/2016 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>Users want to know if they are interacting with a human or with an AI. For example, where a customer service chat interface can be handled by either a human agent or a bot, the user wants to know which one they are talking to.</p> <figure><img alt="An illustration of Google Assistant making a phone call showing the Google bot on one side and a human on the other. " data-entity-type="file" data-entity-uuid="4bfc17a1-6c80-4275-90b4-9673fc44effa" src="/sites/default/files/content-images/duplex_E.png" style="width:100%" /> <figcaption>Google Duplex (part of Google Assistant) makes reservations via phone on your behalf, complete with very human-sounding ums, errs, and mm-hmms.</figcaption> </figure> <p><b>Dark Pattern Response:</b></p> <p>The system presents a chat window that is seemingly helmed by a human agent, but behind the scenes is managed by a chatbot. Alternatively, an AI-powered voice interface masquerades as a human.</p> <p><b>Discussion:</b></p> <p>For the sake of presenting a professional image, it can be appropriate for a chat interface to be styled with a fictional human character (e.g., a photo and human name).
Often initial interactions will be handled by scripted responses, to be seamlessly handed off to a human agent as needed. Where either a human agent or a chatbot takes on a character, many users intuitively understand that the entity behind the mask may not match the identity presented. Given the terse and often scripted nature of chat interactions, it can genuinely be challenging for a user to determine whether the other party is human or machine.</p> <p>And sometimes the deception is more deliberate. Google’s Duplex voice assistant, for example, was designed to sound as human as possible, with built-in umming and erring.</p> <p>Given this, it's important that these systems disclose themselves, either upfront or when directly asked by the user. If a chatbot maintains that it is a human agent, and the user believes it, then yes, it has passed the Turing test, but at the cost of further eroding our trust in the honesty of AIs.</p> </div> Thu, 15 Sep 2016 00:00:00 +0000 mattiealston 231 at Anti-Pattern: Algorithmic Gender Recognition <span>Anti-Pattern: Algorithmic Gender Recognition</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>mattiealston</span></span> <span>Tue, 09/15/2015 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p><b>Problem:</b></p> <p>The user does not want to be misgendered by an AI. In fact, the user does not want a piece of software to make any assumptions about their gender at all.</p> <figure><img alt="A screenshot of a facial recognition demo by Microsoft that predicts the user's gender."
data-entity-type="file" data-entity-uuid="aad04b2e-c080-4ca3-b3aa-849e090c1044" src="/sites/default/files/content-images/Gender_recognition_Microsoft.png" style="width:100%" /> <figcaption>This demo by Microsoft is designed to predict gender, age, emotions, and whether the two photos are of the same person.</figcaption> </figure> <p><b>Anti-pattern response:</b></p> <p>An image recognition AI attributes data points to an image of the user, one of which is the user’s gender. Variants of this include AI-powered systems attempting the same prediction via speech recognition, textual analysis, behaviour tracking, and so on.</p> <p><b>Discussion:</b></p> <p>By using data pulled from a collection of individuals, we train AI systems to make predictions about any one individual based on what they learn about the group. In some cases, these generalizations and assumptions will be accurate and useful. In other cases, there are faults in the dataset that can cause problems, either due to biases in the sample data or faulty extrapolation from that data. When the data points concern superfluous details such as whether the user wears glasses or not, that user will not be offended by incorrect data attached to them. When the data concerns matters of gender identity, sexual orientation, racial or cultural identity, or other matters that the user is deeply affected by, much greater care should be taken over generating accurate assumptions.</p> <p>In the case of gender, there are two substantial problems to overcome initially. First, a male/female binary is incorrect, and simply adding more options (non-binary, for example) will likely also be an inadequate simplification. Second, assuming we could create a satisfactory catalogue of gender positions, the challenge would be to ensure the dataset captured contains a sampling of enough diversity to be predictively useful.
Ultimately though, this functionality is doomed to fail simply because many individuals’ gender presentation doesn’t match their actual gender. If a system deploys this pattern, it will inevitably be inaccurate and cause offence to a number of users.</p> <p>This pattern is indicative of a core challenge in developing AI functionality. Given that almost everything will prove possible to build in due course, the question is not whether we can build it, but whether we should. In this case, whatever benefits there may be must be weighed against the fact that the harm caused is impossible to mitigate.</p> <p>A second core challenge is that when we capture and generate data without a deeper understanding of the nature of that data, we are always in danger of not just replicating existing harmful biases, but amplifying and solidifying them.</p> </div> Tue, 15 Sep 2015 00:00:00 +0000 mattiealston 306 at