System Feedback https://smarterpatterns.com/taxonomy/term/31 Algorithm Processing Status https://smarterpatterns.com/patterns/46/algorithm-processing-status <span>Algorithm Processing Status </span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 11/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The algorithm may take a while to deliver a result. While this is happening, the user wants to know what is taking place.</p> <figure><img alt="Example from Webscore AI of a progress bar updating status of task while the AI operates" data-entity-type="file" data-entity-uuid="0d78b25d-b172-48a0-b71d-571177665547" src="/sites/default/files/content-images/Algorithm_Processing_Status-webscore-ai-history.png" /> <figcaption>The AI-powered application provides both a progress bar and, in note form below the bar, a more granular explanation of the time the task is taking.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides granular progress messages during calculations.&nbsp;</p> <p><strong>Discussion:</strong></p> <p>This is important both for the transparency of the system—to reassure the user that the system is operating and hasn't stalled—and to create the impression of an intelligent system with many moving parts. In some cases where there is no actual processing delay, it may even be beneficial to simulate one (as with <a href="/patterns/151/demonstrating-thinking">Demonstrating Thinking</a>).
By contrast, sometimes it may undermine the overall impression of the system to expose its workings and over-explain (in which case <a href="/patterns/156/appropriate-magic">Appropriate Magic</a> might be deployed).</p> </div> Sun, 15 Nov 2020 00:00:00 +0000 leighbryant 46 at https://smarterpatterns.com Before & After Comparison https://smarterpatterns.com/patterns/196/after-comparison <span>Before &amp; After Comparison</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Fri, 05/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>The user has submitted an artifact for processing, be it an image, sound clip, or video file, and wants to understand fully what effect the processing has had.&nbsp;</p> <figure><img alt="The &quot;Inpainting&quot; demo app shows a &quot;Before&quot; and &quot;After&quot; image in the UI" data-entity-type="file" data-entity-uuid="63b04bb5-a968-4ccb-af99-6f32ab090a8e" src="/sites/default/files/content-images/Before_%26_after_Inpainting_0.png" style="width:100%" /> <figcaption>This demo from Nvidia is so convincing that you might not even believe the "After" image had been edited if you couldn't compare it to the "Before".</figcaption> </figure> <p><b>Solution:</b></p> <p>The system presents the user with their initial artifact beside the processed version for comparison. On a desktop app, this may mean a simple split display. On a mobile app, where screen real estate is more of a concern, this could take the form of a slider that allows the user to swipe between before and after images, or a similar interactive device.&nbsp;</p> <p><b>Discussion:</b></p> <p>Although a very straightforward pattern, it is one that can be&nbsp;overlooked by designers.
Losing sight (or sound) of the original content&nbsp;can present serious problems for the user in evaluating the effectiveness of the processing, or, if the effect is subtle, in even verifying that processing has taken place at all. The immediacy of comparison is key to reducing the cognitive load required of the user.&nbsp;In many cases, even when the original content is presented with the processed version, the user is required to click in and out of each version via preview thumbnails, which makes direct comparison problematic. Likewise for sound clips or video, being able to quickly swap between before and after clips as they play (without restarting) aids the immediacy of comparison.</p> </div> Fri, 15 May 2020 00:00:00 +0000 mattiealston 196 at https://smarterpatterns.com Confidence Status https://smarterpatterns.com/patterns/31/confidence-status <span>Confidence Status</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Sat, 02/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:&nbsp;</strong></p> <p>When the system presents the results of a calculation, users want to know how accurate those results are.</p> <figure><img alt="A screenshot from the Trainline app showing likely seat availabilities based on AI" data-entity-type="file" data-entity-uuid="02639035-127e-4575-8b18-112d0ddc5741" src="/sites/default/files/content-images/Pattern_Confidence%20Status_Trainline%20BusyBot_Oct19.png" style="width:100%" /> <figcaption>The wording “Seats may be available” was chosen deliberately to suggest there is a degree of uncertainty in the train busyness prediction.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides a visual indication of the confidence status of the calculation (as a percentage, by graphical indicator, or
through another format).&nbsp;</p> <p><strong>Discussion:</strong></p> <p>When the operation of an algorithm is opaque to the user, it is hard for that user to assess how much trust to place in the calculation. Transparency in the algorithm is vital for the user to build a relationship of trust in the system. This transparency includes not just details of how the algorithm arrives at its results, but also honesty about its shortcomings. Displaying a confidence status allows the user to differentiate between reliable and unreliable outcomes, and to make more informed choices. It also works to mitigate the negative impression around inaccurate results and to amplify the feeling of success around accurate ones.<br /> <br /> <b>Other Examples:</b><br /> &nbsp;</p> <figure><img alt="A screenshot of the app Celebs Like Me showing confidence of match" data-entity-type="file" data-entity-uuid="6d940ef6-4971-40dd-b410-67b5e5e24a1a" src="/sites/default/files/content-images/Confidence_Intervals-celebslikeme_0.png" style="width:100%" /> <figcaption>A lookalike app displays a confidence status with each potential match to show how confident the system is in each AI prediction.</figcaption> </figure> <figure><b><img alt="Example of image recognition accuracy metrics (.93 and .98 on the images displayed of a laptop and a bottle respectively) to give users a sense of how closely the results match the system's parameters" data-entity-type="file" data-entity-uuid="2e8be7f7-e5c1-4bf4-a868-2e4fc69a0896" src="/sites/default/files/content-images/Confidence_Intervals-bingsearch.png" /></b> <figcaption>Another image recognition AI app displays a certainty score for each assessment.</figcaption> </figure> </div> Sat, 15 Feb 2020 00:00:00 +0000 leighbryant 31 at https://smarterpatterns.com Explicit Training https://smarterpatterns.com/patterns/56/explicit-training <span>Explicit Training</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System
Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Tue, 01/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>When a machine learning system is being trained through the user's repeated inputs, the user wants to know that this is the case.</p> <figure><b><img alt="Example from a photo submission request in an app explaining that the photos added may also be used for other purposes" data-entity-type="file" data-entity-uuid="9ecc7a9e-324b-4e75-8de6-47b8d2c14d56" src="/sites/default/files/content-images/Explicit_Training-Celebslikeme.png" /></b> <figcaption>This lookalike application explicitly informs users that their image will be used for training the image processing services.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system tells the user when it's taking inputs for training purposes, especially if those inputs are unrelated to the user's intentions.&nbsp;</p> <p><strong>Discussion:</strong></p> <p>When the user has a long-term relationship with machine learning software, in which the software improves over time, it's important to establish the expectations around improvement up front. It's also useful for mitigating some of the negative impressions that can be formed if the AI is still "in training" and imprecise, acknowledging inaccuracy in a way that makes it clear that improvement can be expected. This also goes to the heart of issues around transparency—if the system is capturing data from the user, then it should communicate why such data is needed.
This pattern is especially relevant for chatbots and other conversational UIs, where the user may be involved in a long-term, conversation-based relationship with the system.</p> </div> Tue, 15 Jan 2019 00:00:00 +0000 leighbryant 56 at https://smarterpatterns.com Input & Output Comparison https://smarterpatterns.com/patterns/241/input-output-comparison <span>Input &amp; Output Comparison</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Sun, 07/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>When the AI acts on variables entered by the user, the user wants to compare what they originally entered to the resulting output, in order to better understand how they can achieve the output they desire.</p> <figure><img alt="A screenshot showing input and output panels of GauGAN app." data-entity-type="file" data-entity-uuid="091119ec-7a31-442f-9f5c-976906fc4e08" src="/sites/default/files/content-images/Input_%26_output_GauGAN.png" style="width:100%" /> <figcaption>The GauGAN app generates photorealistic landscapes from the user's doodles. The input and output comparison is vital to understand how this works.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system presents a summary of the user's input next to the resulting output. As the output is already translating the user’s input into something new, the input feedback itself should closely resemble the user’s input in format, e.g.
if they have entered data as variables to be converted into graphics, the input feedback should be variables (not graphics).</p> <p><b>Discussion:</b></p> <p>This pattern assumes that the user is submitting data for processing and waiting for a response. In systems where controls directly affect the output through real-time tweaking, those controls suffice as input feedback, as long as they remain visible (e.g. are not automatically hidden in a collapsed panel once applied).&nbsp;</p> <p>In more conventional, non-AI applications, it can be redundant to replay the user’s input to them, as their mental model of cause and effect in such cases is strong and they fully grasp how their input relates to output. In AI operations, however, the AI can act in unanticipated ways, making it harder for the user to understand what control they have over it and how to guide it towards their desired results. Keeping the input in the same context as the output allows the user to use trial-and-error exploration over repeated operations to develop their understanding of the system.&nbsp;</p> <p>A similar pattern is Before &amp; After Comparison, and in practice there may be some crossover between these two.
That said, there is an important distinction — Before &amp; After Comparison allows a user to inspect two objects to extrapolate the effect of the AI on the processed version and validate its success, whereas Input &amp; Output Comparison is more focused on allowing the user to establish a sense of control over the operation by building a picture of cause and effect.</p> </div> Sun, 15 Jul 2018 00:00:00 +0000 mattiealston 241 at https://smarterpatterns.com Motion Tracking Feedback https://smarterpatterns.com/patterns/246/motion-tracking-feedback <span>Motion Tracking Feedback</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Tue, 05/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>When an algorithm processes the user’s bodily position in real time via pattern recognition, the user wants to understand what the system sees and how they can affect that. &nbsp;</p> <figure><img alt="A screenshot from MoveMirror showing how the user's body is converted to datapoints." data-entity-type="file" data-entity-uuid="9ea20a43-a301-4958-8d4f-8f5ee361bb35" src="/sites/default/files/content-images/Motion_capture_visual_MoveMirror.png" style="width:100%" /> <figcaption>In the Move Mirror application, it is helpful for the user to see what the computer sees so they can explore poses to find different matching images.</figcaption> </figure> <p><b>Solution:</b></p> <p>In an overlay over the video capture, the system shows a simplified representation of the points on the body that it is tracking, often connected in a wireframe model of lines and nodes. 
This offers an intermediate abstraction between the user’s body and the underlying numerical variables that are actually used by the system.&nbsp;</p> <p><b>Discussion:</b></p> <p>For many AI systems, the key usability challenge is in how the user builds their mental model of what the system is doing. This is especially relevant when the system is capturing something concrete and intuitive from the user (e.g. the position of their body and limbs) and converting that input into intangible datapoints to process or to generate new outputs from. While the intermediary visual neither matches how the user thinks about their body nor exposes the abstract variables under the surface of the software, it is an effective vehicle for communicating the translation of body to data that is happening in that moment.&nbsp;</p> </div> Tue, 15 May 2018 00:00:00 +0000 mattiealston 246 at https://smarterpatterns.com Object Identification Feedback https://smarterpatterns.com/patterns/251/object-identification-feedback <span>Object Identification Feedback</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Sun, 04/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>When a system uses visual pattern recognition to assign data to an image or video, the user wants to understand how the system sees the world.</p> <figure><img alt="A screenshot of objects being identified by an app, in this case a soda bottle on a laptop."
data-entity-type="file" data-entity-uuid="cd35c120-2ce4-416a-a6bc-9a51d5aebad3" src="/sites/default/files/content-images/Object_identification.png" style="width:100%" /> <figcaption>Object identification as seen in a demo of TensorFlow functionality.</figcaption> </figure> <p><b>Solution:</b></p> <p>In an overlay over the image or video capture, the system shows a bounding box around the objects that it identifies, with a corresponding label. It can also display a Confidence Status attached to each object to indicate the probability of a correct match.&nbsp;</p> <p><b>Discussion:</b></p> <p>Although currently the desire for this feature is often driven by the simple curiosity to “see how the computer sees”, the mapping of data to objects in the world will be a core feature of the pervasive computing landscape in the future—&nbsp;especially if we assume the prevalence of AR interfaces in that world. In developing this pattern, it is not just a case of providing elegant solutions for task completion, but also of refining a visual language that is likely to affect the way we conceive of the environment around us for generations to come.&nbsp;</p> </div> Sun, 15 Apr 2018 00:00:00 +0000 mattiealston 251 at https://smarterpatterns.com Progressive Feature Reveal https://smarterpatterns.com/patterns/106/progressive-feature-reveal <span>Progressive Feature Reveal </span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 12/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system can do a lot of interesting things.
But the user doesn't want to be bombarded with information about them all at once.<br /> &nbsp;</p> <figure><img alt="Example of progressive feature reveal, prompting the user to take next steps in an application by explaining as the need arises" data-entity-type="file" data-entity-uuid="2bf7caa8-e2e9-42a4-a643-6813e98d0a01" src="/sites/default/files/content-images/Progressive_Feature_Reveal-youper.png" /> <figcaption>An app offers progressively more information as the user gets further into the system's set-up flow.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system slowly reveals features over the first phase of use.</p> <p><strong>Discussion:</strong></p> <p>As with <a href="/patterns/91/confirm-configuration">Confirm Configuration</a> and similar patterns, the designer should think of the features of a piece of software as extending not just across a UI, but also in time. Not only what is displayed where, but what is displayed when, and on what trigger? Whole sections of the app may initially be hidden and only activate later when relevant.<br /> &nbsp;</p> </div> Fri, 15 Dec 2017 00:00:00 +0000 leighbryant 106 at https://smarterpatterns.com Training Progress Indicator https://smarterpatterns.com/patterns/221/training-progress-indicator <span>Training Progress Indicator</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Wed, 02/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>When the algorithm relies on a particular user to train it in order to perform at optimal effectiveness, the user wants to know how much of the training has been completed.</p> <figure><img alt="A screenshot from the Boomy app showing a progress bar indicating percentage of training complete."
data-entity-type="file" data-entity-uuid="0d4190b6-9ec9-4529-b9e6-5d4a22f448e3" src="/sites/default/files/content-images/Training_progress_Boomy.png" style="width:100%" /> <figcaption>In the generative music app "Boomy", the system needs to be trained to understand the user's preferences. The progress bar provides vital feedback on this.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system displays a progress bar, percentage complete, or similar device, to show how far the system is through the training process.</p> <p><b>Discussion:</b></p> <p>Many AI systems will be trained en masse, either behind the scenes with datasets curated by the designers or by collecting aggregate user data in live use, but some applications rely on each individual user training the system to complete their particular tasks: the system has to learn from the user's own material or unique preferences. In these cases, there could be a separate training mode prior to task mode being unlocked, or, alternatively, the system trains through actual use, increasing its effectiveness through repeated operations. In either case, it is beneficial to communicate to the user how far they are through the process.&nbsp;</p> </div> Wed, 15 Feb 2017 00:00:00 +0000 mattiealston 221 at https://smarterpatterns.com Anti-Pattern: Content Flagging Biases https://smarterpatterns.com/patterns/311/anti-pattern-content-flagging-biases <span>Anti-Pattern: Content Flagging Biases</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Sat, 08/15/2015 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p><b>Problem:</b></p> <p>Users have a range of different opinions and express them in different ways.
Where machine learning content detection is used to flag and hide offensive comments left online, users want their comments treated as fairly as anyone else’s.</p> <figure><img alt="A screenshot of a demo from Perspective, demonstrating the content filtering feature." data-entity-type="file" data-entity-uuid="a9c19573-63d9-4ab1-85a4-165d562e7153" src="/sites/default/files/content-images/Content_filter_bias_Perspective.png" style="width:100%" /> <figcaption>Automatically filtering out offensive comments is certainly beneficial, as per this demo from Perspective. The question is, does it treat everyone fairly?</figcaption> </figure> <p><b>Anti-pattern response:</b></p> <p>The content filter flags some reasonable comments as toxic while letting genuinely toxic comments through, because it cannot understand the meaning behind a comment or because of biases designed into the system. At best this is randomly unfair, but at worst&nbsp;this bias aligns with political orientations, privileging one group and discriminating against another.&nbsp;</p> <p><b>Discussion:</b></p> <p>While the filtering of offensive content and policing of toxic behaviour online is badly needed, few organizations are willing to invest in the workforce required to manually moderate these spaces. As such, tools that automate the process could be revolutionary in improving the everyday experience for people online. Unfortunately, as is common in AI and ML systems, it is easy to accidentally or inconsiderately replicate existing biases that leave some people feeling like they are treated unfairly.&nbsp;</p> <p>For example, from the tech demo of Perspective, an experimental ML content flagging project from Jigsaw and Google, the following comments about the 2016 US election results were flagged as toxic:<br /> <br /> “It was terrible.
Both sides suck, but Trump REALLY is scary.”<br /> “You are ignorant or do not care about the rights of minority populations, women, and non-cis Americans.”<br /> “Please put yourself in the shoes of women, minorities, and LGBT people.”</p> <p>On the other hand, the following were passed as non-toxic:</p> <p>“Your [sic] a socialist snowflake!”<br /> “Great. We need our country back!”<br /> “Make America Great Again!”</p> <p>While these latter comments might not use profane language, they are well-established sayings understood to express triumphalism over political opponents, intimidate minorities, and be generally divisive.&nbsp;</p> <p>The question, then, is by what measure is a comment “toxic”? Can further training help tweak this and create a fairer system? Or is it inevitable that the system appears to be unfair to one party or another?</p> </div> Sat, 15 Aug 2015 00:00:00 +0000 mattiealston 311 at https://smarterpatterns.com
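The closing questions above can be made concrete. A content flagger ultimately reduces each comment to a model score compared against a threshold, so one way to ask "by what measure is a comment toxic?" is to audit the filter's error rates across groups of comments. The following is a minimal sketch of such an audit; the groups, scores, labels, and threshold are invented for illustration and are not the Perspective API or its real data.

```python
# Sketch: auditing a score-and-threshold content flagger for group-level bias.
# All comments, scores, and group labels below are hypothetical.
from collections import defaultdict

THRESHOLD = 0.7  # comments scoring at or above this are flagged as "toxic"

def flag(score: float, threshold: float = THRESHOLD) -> bool:
    """A toxicity filter ultimately reduces to a threshold on a model score."""
    return score >= threshold

def false_positive_rates(samples):
    """samples: iterable of (group, model_score, human_judged_toxic).
    Returns, per group, the share of human-judged-benign comments
    that the model nevertheless flags."""
    benign = defaultdict(int)
    wrongly_flagged = defaultdict(int)
    for group, score, is_toxic in samples:
        if not is_toxic:
            benign[group] += 1
            if flag(score):
                wrongly_flagged[group] += 1
    return {g: wrongly_flagged[g] / benign[g] for g in benign}

# Hypothetical audit set: benign comments that mention identity terms
# often score higher in deployed models, mirroring the examples quoted above.
audit = [
    ("mentions-identity", 0.85, False),
    ("mentions-identity", 0.75, False),
    ("mentions-identity", 0.40, False),
    ("no-identity-terms", 0.75, False),
    ("no-identity-terms", 0.20, False),
    ("no-identity-terms", 0.10, False),
]
rates = false_positive_rates(audit)
# In this invented data, benign identity-mentioning comments are wrongly
# flagged twice as often (2/3 vs 1/3): the unfairness becomes measurable
# rather than anecdotal.
```

Equalizing such error rates across groups is only one possible fairness measure, and different measures can conflict, which is why "further training" may shift the bias rather than eliminate it.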