Control https://smarterpatterns.com/ en Criteria Sliders https://smarterpatterns.com/patterns/236/criteria-sliders <span>Criteria Sliders</span> <div> <div>Application</div> <div><a href="/index.php/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>mattiealston</span></span> <span>Thu, 12/12/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/index.php/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>The system makes predictions or recommendations for the user, based on their previous behaviour. The user wants to influence those recommendations through some explicit input.&nbsp;</p> <figure><img alt="A screenshot of a slider from Fontjoy, sliding between &quot;More contrast&quot; and &quot;More similarity&quot;." data-entity-type="file" data-entity-uuid="2bdb44ef-7f21-4004-9e50-fa36d5249651" src="/sites/default/files/content-images/criteria_slider_Fontjoy.png" style="width:100%" /> <figcaption>Sliders such as this example from Fontjoy are a quick and intuitive way for users to express their preferences to the system.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system provides a set of criteria sliders that can apply weighting to the underlying variables, or otherwise tweak them, in such a way that the user can guide the calculation towards preferred outcomes.&nbsp;</p> <p><b>Discussion:</b></p> <p>How much control should the user have over AI predictions? On the one hand, the algorithm may actually be better at anticipating the user's needs than the user themselves, making any user input unnecessary or even detrimental to accuracy. On the other hand, if the system is completely autonomous, then the user may feel disempowered and react negatively. 
And this will vary among users: some will be inclined to be very hands-on and enjoy the sense of control, while others may prefer to put minimal effort into the interaction and trust the system to deliver the right outcome.</p> </div> Thu, 12 Dec 2019 00:00:00 +0000 mattiealston 236 at https://smarterpatterns.com Data Deletion Awareness https://smarterpatterns.com/patterns/116/data-deletion-awareness <span>Data Deletion Awareness</span> <div> <div>Application</div> <div><a href="/index.php/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 09/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/index.php/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants to have full control over their data. Even after they have submitted it to the system, they may want to delete it.</p> <figure><img alt="Example of infotip regarding how users can delete personal information from an app" data-entity-type="file" data-entity-uuid="ae4ea37f-1e47-4803-8a3b-ab1b41329f95" src="/sites/default/files/content-images/Data_Deletion_Awareness-wysa.png" /> <figcaption>The application provides a clear explanation of how to delete data.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system tells the user how they can delete their data as well as&nbsp;the consequences of doing so, and provides a clear means to proceed with the action.</p> <p><strong>Discussion:</strong></p> <p>Allowing the user to delete their data at any time is an effective way to ensure the user is comfortable providing such data in the first place. 
Of course, the onus is on the system and organization to act responsibly and genuinely remove the data in question from storage, rather than just hide it in the UI layer.</p> </div> Sun, 15 Sep 2019 00:00:00 +0000 leighbryant 116 at https://smarterpatterns.com Emergent Metrics https://smarterpatterns.com/patterns/211/emergent-metrics <span>Emergent Metrics</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>mattiealston</span></span> <span>Mon, 04/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>Where there are items to search and sort, the user wants the most valuable items to be surfaced, via a deep insight into what they value and how that value is manifested in the data.&nbsp;</p> <figure><img alt="A screenshot showing Microsoft Academic search utilizing their own &quot;Saliency&quot; metric." data-entity-type="file" data-entity-uuid="f8289860-c5cd-470e-98e9-c2985c45d54a" src="/sites/default/files/content-images/emergent_metrics_Microsoft_Academic.png" style="width:100%" /> <figcaption>The specialist Microsoft Academic search tool for academic papers uses the emergent metric "Saliency" (based on citations with an algorithmic weighting) to return more valuable results to the user than "Relevancy" or "Citations" can.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system deploys new metrics, based on emergent properties of the specific data it uses, that are more insightful, more meaningful, and better model the user's intent in interrogating the data. These metrics can be used to sort items and assess the relative value of one item to another in a list.</p> <p><b>Discussion:</b></p> <p>While we always appreciate the benefits of providing data to the user, we often overlook the opportunities to innovate when it comes to parsing and processing that data. 
As systems grow increasingly intelligent, rather than simply providing the user with raw variables to make sense of, we should think about how the system can do that work itself, surfacing the most valuable results through a deeper understanding of the content of the data and the user’s relationship with it. This often already takes place behind the scenes (e.g. the algorithm that determines which search results Google thinks are most relevant for each user) but can be opaque and appear to be <a href="/patterns/61/anti-pattern-mystery-magic">Mystery Magic</a>. Using emergent metrics explicitly exposes this power to the user, helps them build their conceptual model of the system, and allows them to choose whether to use it or not. And where a piece of software has a particular strength in processing data, attaching that feature to an emergent metric can also help champion that strength as part of a unique value proposition.&nbsp;</p> </div> Mon, 15 Apr 2019 00:00:00 +0000 mattiealston 211 at https://smarterpatterns.com Informed Decisions https://smarterpatterns.com/patterns/71/informed-decisions <span>Informed Decisions</span> <div> <div>Application</div> <div><a href="/index.php/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 08/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/index.php/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users choose&nbsp;not to activate a system feature or not to provide some data input, which may impact the accuracy, efficiency, or effectiveness of the system. 
They want to be warned when this will have an effect.</p> <figure><img alt="Example from Ritual of providing user with clear changes to outcomes when a portion of the AI is disabled" data-entity-type="file" data-entity-uuid="6941065e-fe90-4194-8155-bd5c8765d969" src="/sites/default/files/content-images/Informed_Decisions-ritual.png" /> <figcaption>The application shown above explains what happens if a user limits the AI functions around location-based services and how it will hinder some of the outputs.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system warns the user that deactivating a system action, dismissing data, or failing to provide data may have an effect on the outcomes of some actions.</p> <p><strong>Discussion:</strong></p> <p>Of course, a system could warn the user that it needs data for accuracy when in fact the organization also wants to capture that data for other purposes, such as customer analysis or marketing leads, so this pattern is open to bad-faith misuse. Many patterns like this rely on honesty and appropriate use: lies of omission can easily turn otherwise beneficial patterns into coercive dark patterns. 
</p> </div> Wed, 15 Aug 2018 00:00:00 +0000 leighbryant 71 at https://smarterpatterns.com Opt In / Out Toggle https://smarterpatterns.com/patterns/76/opt-out-toggle <span>Opt In / Out Toggle </span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Thu, 03/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants to choose when to activate machine learning or other AI functions, rather than have them active by default.</p> <figure><img alt="Example of toggle on and off options for different features in an app from Ada" data-entity-type="file" data-entity-uuid="3e831fb2-7196-453d-9065-0a8ffae153c8" src="/sites/default/files/content-images/Opt_In_Toggle-ada.png" /> <figcaption>Users can select whether to use certain elements of the AI.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system communicates the benefits of such functionality when it's available, and allows the user to opt in or fall back to a non-ML/AI version.</p> <p><strong>Discussion:</strong></p> <p>Even when the benefits of ML and AI are obvious, user adoption may still stall if users are wary of such technologies. Empowering the user to actively choose to use these features can alleviate some anxiety. 
Providing a clear return path to opt out is also vital for maintaining that sense of empowerment, especially as the user moves from a cautious trial phase to being comfortable with these features.</p> </div> Thu, 15 Mar 2018 00:00:00 +0000 leighbryant 76 at https://smarterpatterns.com Privacy PIN https://smarterpatterns.com/patterns/41/privacy-pin <span>Privacy PIN</span> <div> <div>Application</div> <div><a href="/taxonomy/term/11" hreflang="en">Control</a></div> </div> <span><span>leighbryant</span></span> <span>Thu, 02/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to ensure their data does not fall into the wrong hands if another user has access to their device.</p> <figure><img alt="Example of a privacy pin option notification in a chatbot application" data-entity-type="file" data-entity-uuid="428cf823-3c1e-4df1-945b-8aa5c095f916" src="/sites/default/files/content-images/Privacy_PIN-wysa_0.png" /> <figcaption>An app on a mobile device allows users to set a PIN to help ensure a private communication with the chatbot on a potentially public device.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system allows the user to set up a PIN or similar password to lock down access to part or all of the&nbsp;app and to protect the data accordingly.</p> <p><strong>Discussion:</strong></p> <p>Maintaining data privacy is a multifaceted challenge. It involves both accounting for and reducing unwanted but legitimate data sharing (such as capturing and reselling data to third parties), and preventing illegitimate extraction by casual bad-faith actors or illegal intrusion by willful criminals. The downside of increased security is the extra barrier to entry, which may decrease adoption or use. 
In this case, an app that is otherwise not password-protected may escalate the security around one part of its functionality when the data involved is especially sensitive.</p> </div> Thu, 15 Feb 2018 00:00:00 +0000 leighbryant 41 at https://smarterpatterns.com
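<p>The storage side of the Privacy PIN pattern can be sketched briefly. The function names and record layout below are illustrative assumptions rather than the implementation of any app mentioned above; the point they demonstrate is that the app should keep only a salted, slow hash of the PIN and compare attempts against it, never the raw PIN itself.</p>

```python
import hashlib
import secrets

# Hypothetical PIN gate for one sensitive section of an app.
# The app stores only a random salt and a slow PBKDF2 hash,
# never the PIN itself, so a leaked record does not reveal the PIN.

def set_pin(pin: str) -> dict:
    """Create the stored record for the user's chosen PIN."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}

def verify_pin(record: dict, attempt: str) -> bool:
    """Check an entered PIN against the stored record, in constant time."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", attempt.encode(), record["salt"], 200_000
    )
    return secrets.compare_digest(candidate, record["digest"])

record = set_pin("4921")
print(verify_pin(record, "4921"))  # correct PIN unlocks the section
print(verify_pin(record, "0000"))  # wrong PIN keeps it locked
```

<p>A high iteration count slows brute-force guessing of short PINs, and the constant-time comparison avoids leaking how many leading bytes of a guess matched.</p>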