Autonomy &amp; Control https://smarterpatterns.com/ en Bot Conversation Starter https://smarterpatterns.com/patterns/96/bot-conversation-starter <span>Bot Conversation Starter</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 04/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">When confronted with an open-ended UI such as a chatbot's text box, the user may find it bewildering to know where to start.</p> <figure><img alt="Example of bot conversation starter with a binary yes/no prompt to help guide the user through a natural conversation flow" data-entity-type="file" data-entity-uuid="cccde5da-87ba-419a-8c5f-0a908516d595" src="/sites/default/files/content-images/Bot_Conversation_Starter.png" /> <figcaption>A chatbot application provides a prompt question that directs the conversational flow and guides the user through the functions of the app.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>On first use, the bot asks questions that prompt predictable responses from the user, guiding feature discovery and setting expectations for the experience to follow.</p> <p><strong>Discussion:</strong></p> <p>Interacting with bots is still a relatively new experience for many, so curating pathways for new users is vital for creating a good first impression.<br /> &nbsp;</p> </div> Wed, 15 Apr 2020 00:00:00 +0000 leighbryant 96 at https://smarterpatterns.com Chat Presets https://smarterpatterns.com/patterns/101/chat-presets <span>Chat Presets</span> <div> <div>Application</div> <div><a href="/taxonomy/term/1" hreflang="en">Behaviour</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 03/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41"
hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">There are many potential conversational paths with a chatbot, and sometimes the user will type something unexpected that breaks the intended conversation flow.</p> <figure><img alt="Example of preset chat prompts that encourage engagement but don't require massive manual input on the part of the user" data-entity-type="file" data-entity-uuid="8c03eae7-581f-4ee4-9385-0dd5ed4f30bf" src="/sites/default/files/content-images/Chat_Presets-wysa_2.png" /> <figcaption>An app offers manual text input, pre-populated short answer suggestions, and suggested options for larger, more informational outputs with minimum user input required.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Just because an app is a chatbot doesn't mean that all responses need to be in the form of text entry. The bot can prompt the user to select a binary yes/no answer, select one from a range of preset answers, drag a slider, or even select an image or other media item as their response.&nbsp;</p> <p><strong>Discussion:</strong></p> <p>A UI that blends chatbot text entry with other input mechanisms can be the best of both worlds, allowing the presentation of conventional UI elements at strategic times while remaining embedded in a conversation that provides context and motivation for data entry.<br /> &nbsp;</p> <p dir="ltr"><strong>Other Examples:</strong></p> <figure><b id="docs-internal-guid-90fe46a7-7fff-0a8e-5fa1-4cf6e91fe6d8"><img alt="Example of chat prompt button to encourage engagement without requiring massive manual input" data-entity-type="file" data-entity-uuid="acecdd0f-fa85-474d-8305-91887ef249c2" src="/sites/default/files/content-images/Chat_Presets-youper_0.png" /></b> <figcaption>An application offers pre-set text to help direct conversation and limit chances of breaking the conversational flow.</figcaption> </figure> <p dir="ltr">&nbsp;</p> </div> Sun, 15 Mar
2020 00:00:00 +0000 leighbryant 101 at https://smarterpatterns.com Confirm Configuration https://smarterpatterns.com/patterns/91/confirm-configuration <span>Confirm Configuration </span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 01/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system requires the user to provide configuration details in order to work effectively. The user does not want to invest effort into this process.&nbsp;</p> <figure><img alt="Two examples of confirmation messaging prompts in a chatbot conversation flow" data-entity-type="file" data-entity-uuid="d2b20a93-cd06-42c3-8eaf-32188e9443d3" src="/sites/default/files/content-images/Confirm_Configuration.png" /> <figcaption>In these two examples, the system uses AI to offer suggestions and prompts to set its functioning so the user can continue with a minimum of input effort.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Rather than asking for all configuration details upfront, the system starts running on minimal input and sensible defaults, and organically prompts the user to provide more configuration details over time. This can take the form of confirming defaults, e.g. "It looks like 3:32pm where you are. Is that correct?", or of asking for input on unknown variables.&nbsp;</p> <p><strong>Discussion:</strong></p> <p>A key difference between many AI-driven apps and their non-AI counterparts is that the user experience will evolve over time, as the user becomes more comfortable using the system and as the system gains in accuracy and capability.
With this in mind, designers should focus less on fully configuring the system upfront, and more on how it can become increasingly customized to the user's requirements in a staged process. This is especially relevant for a chatbot-type app, where configuration questions can slide naturally into other conversations.<br /> &nbsp;</p> </div> Wed, 15 Jan 2020 00:00:00 +0000 leighbryant 91 at https://smarterpatterns.com Criteria Sliders https://smarterpatterns.com/patterns/236/criteria-sliders <span>Criteria Sliders</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>mattiealston</span></span> <span>Thu, 12/12/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>The system makes predictions or recommendations for the user, based on their previous behaviour. The user wants to influence those recommendations through some explicit input.&nbsp;</p> <figure><img alt="A screenshot of a slider from Fontjoy, sliding between &quot;More contrast&quot; and &quot;More similarity&quot;." data-entity-type="file" data-entity-uuid="2bdb44ef-7f21-4004-9e50-fa36d5249651" src="/sites/default/files/content-images/criteria_slider_Fontjoy.png" style="width:100%" /> <figcaption>Sliders such as this example from Fontjoy are a quick and intuitive way for users to express their preferences to the system.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system provides a set of criteria sliders that can apply weighting to the underlying variables, or otherwise tweak them, in such a way that the user can guide the calculation towards preferred outcomes.&nbsp;</p> <p><b>Discussion:</b></p> <p>How much control should the user have over AI predictions?
On the one hand, the algorithm may actually be better at anticipating the user's needs than the users themselves, making any user input unnecessary or even detrimental to accuracy. On the other hand, if the system is completely autonomous, then the user may feel disempowered and react negatively. This will also vary among users: some will be inclined to be very hands-on and enjoy the sense of control, while others may prefer to put minimal effort into the interaction and trust the system to deliver the right outcome.</p> </div> Thu, 12 Dec 2019 00:00:00 +0000 mattiealston 236 at https://smarterpatterns.com Crowdsourcing https://smarterpatterns.com/patterns/166/crowdsourcing <span>Crowdsourcing</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 11/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:&nbsp;</strong></p> <p>The system has insufficient data, or low confidence in it, and relies on explicit user input to make effective, real-time updates to its AI predictions. The user doesn’t want to put a lot of effort into this.</p> <figure><img alt="The Trainline app asks for user input to accurately show seat availability on board."
data-entity-type="file" data-entity-uuid="78eb1718-bd6d-405a-a3ae-759afbdfb3c1" src="/sites/default/files/content-images/SmartPattern.gif" style="max-width:350px;width:100%" /> <figcaption>The Trainline app uses crowdsourcing to help make predictions around seat availability, keeping the interactions light so users are more inclined to participate.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides easy input options for users to contribute back (by choice) to the functioning of the AI and highlights the reasons for doing so.</p> <p><strong>Discussion:</strong></p> <p>Many algorithms rely on ongoing user updates to the information they work from in order to be effective. While some do this in the background (think of Google Maps or Waze, which depend on regular traffic updates, largely gleaned from drivers' relative speed over a distance, to accurately predict travel times), other algorithms may need users to actively give feedback. By making input easy to offer and the reasons for offering it clear, AI systems can bring the user into the process and ensure the outcomes are useful for everyone.</p> <p><br /> <em>Pattern submission via <a href="https://www.linkedin.com/in/srutek/">Jan Srutek</a></em><br /> &nbsp;</p> </div> Fri, 15 Nov 2019 00:00:00 +0000 leighbryant 166 at https://smarterpatterns.com Data Deletion Awareness https://smarterpatterns.com/patterns/116/data-deletion-awareness <span>Data Deletion Awareness</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 09/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants to have full control over their data.
Even after they have submitted it to the system, they may want to delete it.</p> <figure><img alt="Example of infotip regarding how users can delete personal information from an app" data-entity-type="file" data-entity-uuid="ae4ea37f-1e47-4803-8a3b-ab1b41329f95" src="/sites/default/files/content-images/Data_Deletion_Awareness-wysa.png" /> <figcaption>The application provides a clear explanation of how to delete data.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system tells the user how they can delete their data as well as the consequences of doing so, and provides a clear means to proceed with the action.</p> <p><strong>Discussion:</strong></p> <p>Allowing the user to delete their data at any time is an effective way to ensure they are comfortable providing that data in the first place. Of course, the onus is on the system and organization to act responsibly and genuinely remove the data in question from storage, rather than just hide it in the UI layer.<br /> &nbsp;</p> </div> Sun, 15 Sep 2019 00:00:00 +0000 leighbryant 116 at https://smarterpatterns.com Informed Decisions https://smarterpatterns.com/patterns/71/informed-decisions <span>Informed Decisions</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users choose not to activate a system feature or not to provide some data input, which may impact the accuracy, efficiency, or effectiveness of the system.
They want to be warned when this will have an effect.</p> <figure><img alt="Example from Ritual of providing user with clear changes to outcomes when a portion of the AI is disabled" data-entity-type="file" data-entity-uuid="6941065e-fe90-4194-8155-bd5c8765d969" src="/sites/default/files/content-images/Informed_Decisions-ritual.png" /> <figcaption>The application shown above explains what happens if a user limits the AI functions around location-based services and how doing so will hinder some of the outputs.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system warns the user that deactivating a system action, dismissing data, or failing to provide data may have an effect on the outcomes of some actions.</p> <p><strong>Discussion:</strong></p> <p>Of course, we can design the system to warn the user that it needs data for accuracy when in fact we would (also) like to capture the data for other purposes, such as customer analysis or marketing leads, so this pattern is open to misuse if applied in bad faith. Many patterns like this rely on honesty and appropriate use; lies of omission can easily turn otherwise beneficial patterns into coercive dark patterns.
&nbsp;<br /> &nbsp;</p> </div> Wed, 15 Aug 2018 00:00:00 +0000 leighbryant 71 at https://smarterpatterns.com Input & Output Comparison https://smarterpatterns.com/patterns/241/input-output-comparison <span>Input &amp; Output Comparison</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Sun, 07/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>When the AI acts on variables input by the user, the user wants to compare their original input to the resulting output, in order to better understand how they can achieve the output they desire.</p> <figure><img alt="A screenshot showing the input and output panels of the GauGAN app." data-entity-type="file" data-entity-uuid="091119ec-7a31-442f-9f5c-976906fc4e08" src="/sites/default/files/content-images/Input_%26_output_GauGAN.png" style="width:100%" /> <figcaption>The GauGAN app generates photorealistic landscapes from the user's doodles. The input and output comparison is vital to understanding how this works.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system presents a summary of the user's input next to the resulting output. As the output is already translating the user’s input into something new, the input feedback itself should closely resemble the user’s input in format, e.g. if they have entered data as variables to be converted into graphics, the input feedback should be variables (not graphics).</p> <p><b>Discussion:</b></p> <p>This pattern assumes that the user is submitting data for processing and waiting on a response — obviously in systems where controls directly affect the output via real-time tweaking, as long as those controls are visible (e.g.
are not automatically hidden in a collapsed panel once applied), they suffice as input feedback.&nbsp;</p> <p>In more conventional, non-AI applications, it can be redundant to replay the user’s input to them, as their mental model of cause and effect in such cases is strong and they fully grasp how their input relates to output. In AI operations, however, the system can act in unanticipated ways, making it harder for the user to understand what control they have over it and how to guide it towards their desired results. Keeping the input in the same context as the output allows the user to use trial-and-error exploration over repeated operations to develop their understanding of the system.&nbsp;</p> <p>A similar pattern is Before &amp; After Comparison, and in practice there may be some crossover between these two. That said, there is an important distinction — Before &amp; After Comparison allows a user to inspect two objects to extrapolate the effect of the AI on the processed version and validate its success, whereas Input &amp; Output Comparison is more focused on allowing the user to establish a sense of control over the operation by building a picture of cause and effect.</p> </div> Sun, 15 Jul 2018 00:00:00 +0000 mattiealston 241 at https://smarterpatterns.com Manual Overrides https://smarterpatterns.com/patterns/111/manual-overrides <span>Manual Overrides </span> <div> <div>Application</div> <div><a href="/taxonomy/term/16" hreflang="en">Display</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 06/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><b id="docs-internal-guid-542220bc-7fff-9025-dbfc-808af0dd3365">Problem:</b></p> <p dir="ltr">The system provides one or more AI-generated responses, but the user isn't happy with any of them.</p> <figure><img alt="Example of avatar creation flow featuring AI-generated and designer-generated
options for users to choose from" data-entity-type="file" data-entity-uuid="bdcbab76-a3e5-4605-9a48-8a13a7998466" src="/sites/default/files/content-images/Manual_Overrides-avatars.png" /> <figcaption>An avatar-creation flow offers AI-generated solutions as well as pre-populated solutions created by a designer without the aid of AI.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>In addition to dynamically generated AI responses, the system provides a range of pre-made options not informed by AI to choose from.</p> <p><strong>Discussion:</strong></p> <p>Allowing designers to pre-make alternate responses can be a useful addition to a system, since they're generally good at anticipating users' needs. There are a number of advantages to this approach: it's a good fallback if the AI fails to produce satisfactory results; it can be a demonstration of the best of what the AI is capable of; and it allows the user to feel empowered when choosing the AI option, knowing that they could opt out and still proceed with the process.<br /> &nbsp;</p> </div> Fri, 15 Jun 2018 00:00:00 +0000 leighbryant 111 at https://smarterpatterns.com Motion Tracking Feedback https://smarterpatterns.com/patterns/246/motion-tracking-feedback <span>Motion Tracking Feedback</span> <div> <div>Application</div> <div><a href="/taxonomy/term/31" hreflang="en">System Feedback</a></div> </div> <span><span>mattiealston</span></span> <span>Tue, 05/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>When an algorithm processes the user’s bodily position in real time via pattern recognition, the user wants to understand what the system sees and how they can affect that. &nbsp;</p> <figure><img alt="A screenshot from MoveMirror showing how the user's body is converted to datapoints."
data-entity-type="file" data-entity-uuid="9ea20a43-a301-4958-8d4f-8f5ee361bb35" src="/sites/default/files/content-images/Motion_capture_visual_MoveMirror.png" style="width:100%" /> <figcaption>In the Move Mirror application, it is helpful for the user to see what the computer sees so they can explore poses to find different matching images.</figcaption> </figure> <p><b>Solution:</b></p> <p>In an overlay on the video capture, the system shows a simplified representation of the points on the body that it is tracking, often connected in a wireframe model of lines and nodes. This offers an intermediate abstraction between the user’s body and the underlying numerical variables that are actually used by the system.&nbsp;</p> <p><b>Discussion:</b></p> <p>For many AI systems, the key usability challenge is how the user builds their mental model of what the system is doing. This is especially relevant when the system is capturing something concrete and intuitive from the user (e.g. the position of their body and limbs) and converting that input into intangible datapoints in order to process it or generate new outputs. While the intermediary visual neither represents how the user thinks about their body nor exposes the abstract variables under the surface of the software, it is an effective vehicle for communicating the translation of body to data that is happening in that moment.&nbsp;</p> </div> Tue, 15 May 2018 00:00:00 +0000 mattiealston 246 at https://smarterpatterns.com
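The wireframe-overlay idea described in Motion Tracking Feedback can be sketched in a few lines of code. This is a minimal illustration only, not taken from Move Mirror or any specific pose-estimation library: the keypoint names, the `score` confidence field, the `SKELETON` edge list, and the `overlay_segments` helper are all hypothetical stand-ins for whatever a real model emits.

```python
# Sketch: turn model keypoints (normalized 0-1 coordinates plus a confidence
# score) into pixel-space line segments that a UI could draw over the video.

# A fixed "skeleton" of joint pairs to connect with lines (illustrative subset).
SKELETON = [
    ("left_shoulder", "right_shoulder"),
    ("left_shoulder", "left_elbow"),
    ("left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow"),
    ("right_elbow", "right_wrist"),
]

def overlay_segments(keypoints, frame_w, frame_h, min_score=0.5):
    """Return pixel-space segments for joint pairs the model is confident
    about; low-confidence or missing joints are simply not drawn."""
    segments = []
    for a, b in SKELETON:
        ka, kb = keypoints.get(a), keypoints.get(b)
        if ka and kb and ka["score"] >= min_score and kb["score"] >= min_score:
            segments.append((
                (round(ka["x"] * frame_w), round(ka["y"] * frame_h)),
                (round(kb["x"] * frame_w), round(kb["y"] * frame_h)),
            ))
    return segments

# Example pose: two confident joints yield one drawable segment; the wrist is
# below the confidence threshold, so its line is omitted from the overlay.
pose = {
    "left_shoulder": {"x": 0.40, "y": 0.30, "score": 0.9},
    "left_elbow": {"x": 0.35, "y": 0.45, "score": 0.8},
    "left_wrist": {"x": 0.33, "y": 0.60, "score": 0.2},
}
print(overlay_segments(pose, 640, 480))  # one shoulder-to-elbow segment
```

Dropping uncertain joints rather than drawing them is itself part of the pattern: the overlay then honestly reflects what the system sees, rather than implying confidence it does not have.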