Transparency &amp; Trust en Hand-off to human <span>Hand-off to human</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Mon, 09/09/2019 - 18:21</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong><br /> Sometimes an AI system doesn't work as the user wants it to, or the user isn't comfortable using an AI-driven system.</p> <figure><img alt="Screenshot of a bot hand-off in an airline app" data-entity-type="file" data-entity-uuid="a5e80bd4-b6ca-419e-897a-32b7251e5481" src="/sites/default/files/content-images/KLM_HumanHandoff_Edited.png" style="width:100%" /> <figcaption>An airline chatbot allows the user to switch to direct human interaction when it is unable to complete the task as requested.</figcaption> </figure> <p><strong>Solution:</strong><br /> The system should provide a means to hand the process over to a human agent. The user and the human agent can complete the process either via live chat in the app, or offline via phone or showroom.</p> <p><strong>Discussion:</strong><br /> Prioritising a hand-off to human agents obviously reduces the efficiencies that automation brings to business processes. But there will always be categories of complex issues that fall outside an AI's capabilities, which human agents are better able to solve. 
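The hand-off trigger itself can be as simple as a confidence check plus an explicit-request check. A minimal sketch in Python, where the phrase list and threshold are assumptions, not taken from any real product:

```python
# Sketch of a hand-off rule for a chatbot pipeline; all names and the
# threshold value are hypothetical. If the model's confidence in its best
# guess at the user's intent is low, or the user asks for a person
# outright, the conversation is routed to a human agent.

HANDOFF_PHRASES = {"talk to a human", "real person", "speak to an agent"}
CONFIDENCE_THRESHOLD = 0.6  # assumed tuning value

def should_hand_off(user_message: str, intent_confidence: float) -> bool:
    """Return True when the bot should escalate to a human agent."""
    wants_human = any(p in user_message.lower() for p in HANDOFF_PHRASES)
    return wants_human or intent_confidence < CONFIDENCE_THRESHOLD

def route(user_message: str, intent_confidence: float) -> str:
    """Route the conversation to the bot or to a human channel."""
    if should_hand_off(user_message, intent_confidence):
        return "human_agent"  # live chat, phone callback, showroom, etc.
    return "bot"
```

The key design choice is that the user's explicit request always wins, regardless of how confident the model is.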
Beyond complexity, where a relationship requires empathy, passion, emotion, or another form of authentic human connection, simulating this via AI is still a greater challenge than simply employing human agents to make that connection with the user.</p> <figure><img alt="Another screenshot of a chatbot giving the user the option to chat with a human instead of a bot" data-entity-type="file" data-entity-uuid="ca9c9712-e0b0-4a68-bfa1-bf5cd3969a27" src="/sites/default/files/content-images/AirFrance_HumanHandoff_Edited.png" style="width:100%" /> <figcaption>Airlines are doing a good job of handing customers off to a real person when a virtual agent is unable to fulfill a task.</figcaption> </figure> </div> Mon, 09 Sep 2019 18:21:44 +0000 leighbryant 161 at Appropriate Magic <span>Appropriate Magic</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Mon, 08/26/2019 - 19:45</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><br /> <strong>Problem:
</strong></p> <p>A user wants to be awed, wowed, and amazed by a piece of software. Sidetracking to explain its workings can disrupt the flow of the experience and spoil the fun.</p> <figure><img alt="A screenshot of a conversational interaction with the &quot;WoeBot&quot;" data-entity-type="file" data-entity-uuid="ca7d0aa2-b3e2-49df-82b3-809fba9e333b" src="/sites/default/files/content-images/WoeBot_Appropriate_Magic_Edited_0.png" style="width:100%" /> <figcaption>The "WoeBot" encourages the user to think of it as human-like, to encourage a more natural, personal rapport, without diving into detail about how the AI that supports that persona works.</figcaption> </figure> <p><strong>Solution:
</strong></p> <p>When appropriate, the system should obfuscate the underlying algorithm and instead use playful language to suggest that the system is more than just a math machine: it's magical!</p> <p><strong>Discussion:
</strong></p> <p>This is (obviously) contrary to all the other patterns around user education and transparency, and so should be handled with care. That said, whatever we're designing, we should make deliberate choices about the presentation layer of an AI app. Even if the UI does little to steer the user toward a certain conceptual framework, the user builds a mental model regardless, and it may be very different from the designer's intent. Whether it's transparent, magical, or otherwise, designers should take the initiative in shaping this mental model rather than leave it to chance.
<br /> &nbsp;</p> </div> Mon, 26 Aug 2019 19:45:53 +0000 leighbryant 156 at Demonstrating Thinking <span>Demonstrating Thinking</span> <div> <div>Application</div> <div><a href="/taxonomy/term/16" hreflang="en">Display</a></div> </div> <span><span>leighbryant</span></span> <span>Mon, 08/26/2019 - 19:42</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><br /> <strong>Problem:
</strong></p> <p>The user wants to know that the system has conducted a complex calculation. If the calculation occurs too quickly, they might not believe that the calculation has been completed correctly or has been extensive enough in considering all the variables.</p> <figure><img alt="Screenshot of the Articoolo application mid-way through an action" data-entity-type="file" data-entity-uuid="23d49df3-a717-4da7-8e57-19c49556e867" src="/sites/default/files/content-images/Articoolo.com_Progress_Edited_0.png" style="width:100%" /> <figcaption>The Articoolo app uses a progress bar and explanatory text to demonstrate the AI is "at work", so the user is given the sense that "thinking" is happening.</figcaption> </figure> <p><strong>Solution:
</strong></p> <p>The system uses artificial wait times and progress messages to demonstrate that it is "thinking" and applying satisfactory effort to the calculation.&nbsp;</p> <p><strong>Discussion:
</strong></p> <p>While simplicity in an interface is often desirable, in some cases the appearance of complexity may be beneficial, even to the point of simulating some of that complexity. While the reality of a system's intelligence may result in effortless calculations and speedy response times, the perception of a system's intelligence is ironically often associated with the appearance of effort and slower responses.&nbsp;
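The artificial wait and progress messaging described above might be sketched as follows; the function name, timings, and message text are all illustrative:

```python
import time

# Sketch of the "demonstrating thinking" pattern: pad a fast computation so
# the whole interaction takes at least `min_seconds`, surfacing progress
# messages along the way. Names, timings, and message text are illustrative.

def run_with_thinking(task, min_seconds=2.0, on_progress=print):
    start = time.monotonic()
    on_progress("Analysing inputs...")
    result = task()  # the real work, which may finish almost instantly
    on_progress("Weighing the variables...")
    remaining = min_seconds - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)  # the artificial wait
    on_progress("Done.")
    return result
```

Note that the padding only kicks in when the real work is faster than the minimum: a genuinely slow calculation is never made slower.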
<br /> &nbsp;</p> </div> Mon, 26 Aug 2019 19:42:46 +0000 leighbryant 151 at Dark Pattern: Faked AI <span>Dark Pattern: Faked AI </span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:47</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to know when real AI is deployed and when it's not. They especially don't want to be tricked into believing AI is used to process data when it's actually a manual process with human agents acting behind the scenes.&nbsp;</p> <figure><img alt="Screenshot of an app message saying data will be &quot;extracted automatically by end of the day&quot;, which implies AI but more likely is being done by a human or other non-AI alternative based on the length of time it takes for the operation." data-entity-type="file" data-entity-uuid="e375fb26-c544-4569-91a9-c2c8fc60bb6b" src="/sites/default/files/content-images/Faked_AI-autofyle.png" /> <figcaption>It's unlikely that a genuinely AI-powered app would need this much time, but the application does not acknowledge it, and users are left uncertain whether the AI is less powerful than they thought, or whether it isn't AI at all and instead relies on human intervention behind the scenes.</figcaption> </figure> <p><strong>Dark pattern response:</strong></p> <p>Whether via a non-learning algorithm masquerading as one capable of machine learning, or via human processing, the system pretends it's using AI when it really isn't.&nbsp;</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>Users are increasingly literate in what AI applications are capable of and what they cannot reasonably achieve. 
So if, for example, your app claims to have advanced optical character recognition (OCR) on borderline illegible items but a twenty-four-hour turnaround time for processing, then a savvy user will immediately be suspicious that all is not as it seems and perceive that as a betrayal of trust. The end-user would probably not mind the difference between the two (human intervention vs AI) as long as the expected end result is achieved in a timely fashion. And if, in the example given, a twenty-four-hour lag really is required for AI processing, then designers should anticipate the user's suspicion and address it via <a href="/patterns/21/setting-expectations-acknowledging-limitations">Setting Expectations &amp; Acknowledging Limitations</a>.<br /> &nbsp;</p> </div> Wed, 21 Aug 2019 16:47:23 +0000 leighbryant 66 at Anti-Pattern: Mystery Magic <span>Anti-Pattern: Mystery Magic</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:46</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user does not want to be confounded, confused, or otherwise left in the dark as to how an algorithm has generated its result.&nbsp;</p> <figure><img alt="Example of playful and vague language (&quot;Your website totally rocks here!&quot;) used instead of providing context on how the numbers were arrived at in a web score generator." 
data-entity-type="file" data-entity-uuid="15eb6bc1-2e69-4ce8-9d7d-cf193c06b25f" src="/sites/default/files/content-images/Mystery_Magic-webscore_0.png" /> <figcaption>The results of this AI measurement tool don't expand to show the details of how the outcome was reached, leaving users confused and uncertain about how the results were achieved or what to do next.</figcaption> </figure> <p><strong>Anti-pattern response:</strong></p> <p>The system obfuscates the underlying algorithm and instead uses playful language—or indeed, no language at all—to suggest that the system is more than just a math machine (It's magical!) at inappropriate times.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>The system explains nothing and hopes the user will either be unbothered by this, or perhaps will even be wowed at the "magic".&nbsp;</p> </div> Wed, 21 Aug 2019 16:46:10 +0000 leighbryant 61 at Explicit Training <span>Explicit Training</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:43</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>When a machine learning system is being trained through the user's repeated inputs, the user wants to know that this is the case.</p> <figure><b><img alt="Example from a photo submission request in an app explaining that the photos added may also be used for other purposes" data-entity-type="file" data-entity-uuid="9ecc7a9e-324b-4e75-8de6-47b8d2c14d56" src="/sites/default/files/content-images/Explicit_Training-Celebslikeme.png" /></b> <figcaption>This lookalike application explicitly informs users that their image will be used for training the image processing services.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system tells the user when it's taking inputs for training purposes, 
especially if those inputs are unrelated to the user's intentions.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>When the user has a long-term relationship with machine learning software, one in which the software improves over time, it's important to establish the expectations around improvement up front. It's also useful for mitigating some of the negative impressions that can be formed if the AI is still "in training" and imprecise, acknowledging inaccuracy in a way that makes it clear that improvement can be expected. This also goes to the heart of issues around transparency: if the system is capturing data from the user, then it should communicate why such data is needed. This pattern is especially relevant for chatbots and other conversational UIs, where the user may be involved in a long-term, conversation-based relationship with the system.</p> <p dir="ltr">&nbsp;</p> </div> Wed, 21 Aug 2019 16:43:06 +0000 leighbryant 56 at Progressive Processing Display <span>Progressive Processing Display</span> <div> <div>Application</div> <div><a href="/taxonomy/term/16" hreflang="en">Display</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:37</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>While processing an image, the user wants to see the results as quickly as possible, even if that means only seeing part of the image completed.</p> <figure><img alt="Example of an image being loaded in segments to provide clarity sooner" data-entity-type="file" data-entity-uuid="2db162d8-1945-4c80-96be-9e91763adc1f" src="/sites/default/files/content-images/Progressive_Image_Processing_Grid-tensorzoom.png" /> <figcaption>An example of progressive processing display using a progressive image upload, which shows a grid with different sections coming in clearer as the AI processes.</figcaption> </figure> 
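One way such progressive display might be structured is to process the image tile by tile and hand each finished section to the UI immediately. A minimal sketch, with hypothetical names throughout:

```python
# Sketch of progressive processing: split an image into tiles, run the
# (hypothetical) enhancement step on each tile, and yield every finished
# section so the UI can draw it at once rather than waiting for the
# whole image. `image` is a plain 2D list of pixel values here.

def tiles(width, height, tile_size):
    """Yield (x, y, w, h) rectangles covering a width-by-height image."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield x, y, min(tile_size, width - x), min(tile_size, height - y)

def process_progressively(image, enhance, tile_size=64):
    """Yield ((x, y, w, h), enhanced_tile) pairs as each tile completes."""
    height, width = len(image), len(image[0])
    for x, y, w, h in tiles(width, height, tile_size):
        tile = [row[x:x + w] for row in image[y:y + h]]
        yield (x, y, w, h), enhance(tile)  # partial result, shown immediately
```

Because the function is a generator, the caller can also stop iterating early, which gives the user a natural cancellation point if the first sections look wrong.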
<p><strong>Solution:</strong></p> <p>The system processes and displays the image progressively, revealing greater detail or more sections as it is able to.</p> <p><br /> <strong>Discussion:</strong></p> <p>A pattern like this mitigates some of the issues around long processing times by providing an initial result in a shorter time, which ideally is enough of a preview for the user to choose to cancel if the output is undesirable. It also aids in transparency of operation by showing the algorithm in action rather than just the final output.<br /> &nbsp;</p> </div> Wed, 21 Aug 2019 16:37:47 +0000 leighbryant 51 at Algorithm Processing Status <span>Algorithm Processing Status </span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:31</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The algorithm may take a while to deliver a result. While this is happening, the user wants to know what is taking place.</p> <figure><img alt="Example from Webscore AI of a progress bar updating status of task while the AI operates" data-entity-type="file" data-entity-uuid="0d78b25d-b172-48a0-b71d-571177665547" src="/sites/default/files/content-images/Algorithm_Processing_Status-webscore-ai-history.png" /> <figcaption>The AI-powered application provides both a progress bar and a more granular explanation for the time it is taking in note-form below the bar.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides granular progress messages during calculations.&nbsp;</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>This is important both for the transparency of the system—to reassure the user that the system is operating and hasn't stalled—and to create the impression of an intelligent system with many moving parts. 
In some cases where there is no actual processing delay, it may even be beneficial to simulate one (as with <a href="/patterns/151/demonstrating-thinking">Demonstrating Thinking</a>). By contrast, sometimes it may undermine the overall impression of the system to expose its workings and over-explain (in which case <a href="/patterns/156/appropriate-magic">Appropriate Magic</a> might be deployed).</p> </div> Wed, 21 Aug 2019 16:31:05 +0000 leighbryant 46 at Privacy PIN <span>Privacy PIN</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:28</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to ensure their data does not fall into the wrong hands if another user has access to their device.</p> <figure><img alt="Example of a privacy pin option notification in a chatbot application" data-entity-type="file" data-entity-uuid="428cf823-3c1e-4df1-945b-8aa5c095f916" src="/sites/default/files/content-images/Privacy_PIN-wysa_0.png" /> <figcaption>An app on a mobile device allows users to set a PIN to help ensure a private communication with the chatbot on a potentially public device.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system allows the user to set up a PIN or similar password to lock down access to part or all of the&nbsp;app and to protect the data accordingly.</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>Maintaining data privacy is a multifaceted challenge. It involves both accounting for and reducing unnecessary but legitimate communication (such as capturing and reselling data to third parties), as well as preventing illegitimate extraction from casual bad-faith actors or illegal intrusions from willful criminals. 
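The PIN gate described in the solution might be sketched as follows; names are hypothetical, and a production app would also rate-limit attempts:

```python
import hashlib
import hmac
import os

# Sketch of a privacy PIN gate; all names are hypothetical. Only a salted
# hash of the PIN is stored, never the PIN itself, and only the sensitive
# screen is locked so the rest of the app stays friction-free.

def set_pin(pin: str):
    """Derive a salted hash of the PIN; store salt and digest only."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def check_pin(attempt: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

def open_chat_history(attempt: str, salt: bytes, digest: bytes) -> str:
    """Gate only the sensitive area of the app behind the PIN."""
    if not check_pin(attempt, salt, digest):
        raise PermissionError("Incorrect PIN")
    return "chat history unlocked"
```
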
The downside of increasing security is the added barrier to entry, which may reduce adoption or use. In this case, an app that is otherwise not password-protected may escalate the security around one part of its functionality when that data is especially sensitive.</p> <p dir="ltr">&nbsp;</p> </div> Wed, 21 Aug 2019 16:28:48 +0000 leighbryant 41 at Privacy Reassurance <span>Privacy Reassurance </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/21/2019 - 16:26</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to know what happens to their data when it's captured by the system.</p> <figure><img alt="Example of clear privacy and data sharing practices in the Capturebot AI function from Microsoft" data-entity-type="file" data-entity-uuid="9a6bac43-f02b-462b-bf82-d71dec30715e" src="/sites/default/files/content-images/Privacy_Reassurances-capturebot_1.png" /> <figcaption>Microsoft's AI-powered caption bot includes an explanatory box about what happens to the image after upload, beyond the primary captioning function.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system communicates to the user what data it's capturing, how that data is stored, how long it's stored for, and what other systems the data is being communicated to.&nbsp;</p> <p>&nbsp;</p> <p><strong>Discussion:</strong></p> <p>This is especially important when it comes to data captured based on user behaviour rather than explicit input. Ideally the system captures the least amount of data to achieve its task and stores it for no longer than is necessary, but sometimes data needs to be retained (e.g. for that user to maintain a profile or to train the system—or in some cases, another system—in general). 
In these cases, a key aspect of transparency is understanding the mechanics of this data capture and storage.&nbsp;</p> <p dir="ltr">&nbsp;</p> <p dir="ltr"><b>Other Examples:</b></p> <figure><img alt="Example of privacy reassurance following a data deletion in an application" data-entity-type="file" data-entity-uuid="67e7e3a3-c4c2-4528-a95f-c0ef92a9324e" src="/sites/default/files/content-images/Privacy_Reassurances-wysa.png" /> <figcaption>In this example, the user is given valuable insight into what happens to their data after a delete.</figcaption> </figure> <p dir="ltr">&nbsp;</p> <p dir="ltr">&nbsp;</p> </div> Wed, 21 Aug 2019 16:26:34 +0000 leighbryant 36 at