<span>Confirm Configuration</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Wed, 01/15/2020 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system requires the user to provide configuration details in order to work effectively. The user does not want to invest effort in this process.</p> <figure><img alt="Two examples of confirmation messaging prompts in a chatbot conversation flow" data-entity-type="file" data-entity-uuid="d2b20a93-cd06-42c3-8eaf-32188e9443d3" src="/sites/default/files/content-images/Confirm_Configuration.png" /> <figcaption>In these two examples, the system uses AI to offer suggestions and prompts to set its functioning so the user can continue with a minimum of input effort.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Rather than asking for all configuration details upfront, the system starts running on minimal input and sensible defaults, and organically prompts the user to provide more configuration details over time. This can take the form of confirming defaults, e.g. "It looks like 3:32pm where you are. Is that correct?", or of asking for input on unknown variables.</p> <p><strong>Discussion:</strong></p> <p>A key difference between many AI-driven apps and their non-AI counterparts is that the user experience will evolve over time, as the user becomes more comfortable with the system and as the system gains in accuracy and capability. With this in mind, designers should focus less on fully configuring the system upfront, and more on how it can become increasingly customized to the user's requirements in a staged process.
This is especially relevant for a chatbot-type app, where configuration questions can slide naturally into other conversations.</p> </div> <span>Crowdsourcing</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 11/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The system has insufficient data, or low confidence in its data, and relies on explicit user inputs to make effective, real-time updates to the AI predictions. The user doesn’t want to put a lot of effort into this.</p> <figure><img alt="The Trainline app asks for user input to accurately show seat availability on board." data-entity-type="file" data-entity-uuid="78eb1718-bd6d-405a-a3ae-759afbdfb3c1" src="/sites/default/files/content-images/SmartPattern.gif" style="max-width:350px;width:100%" /> <figcaption>The Trainline app uses crowdsourcing to help make predictions around seat availability, keeping the interactions light so users are more inclined to participate.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>The system provides easy input options for users to contribute back (by choice) to the functioning of the AI and highlights the reasons for doing so.</p> <p><strong>Discussion:</strong></p> <p>Many algorithms rely on ongoing user updates to the information they work from in order to be effective. While some do this in the background (think of Google Maps or Waze and their dependence on regular traffic updates, largely gleaned from the relative speed of drivers over a distance, to accurately predict travel times), other algorithms may need users to actively give feedback.
By making input easy to offer and the reasons for offering it clear, AI systems can bring users into the process and ensure the outcomes are useful for everyone.</p> <p><em>Pattern submission via <a href="">Jan Srutek</a></em></p> </div> <span>Data Being Shared Flag</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>mattiealston</span></span> <span>Tue, 10/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>When the system communicates data to third parties, the user wants to know what data is being shared and what is not.</p> <figure><img alt="A screenshot from the Youper app showing a set-up page with a label reading &quot;Private - not shared&quot;." data-entity-type="file" data-entity-uuid="d4bd086b-1edc-4fe4-ba38-00a9f82a6d1e" src="/sites/default/files/content-images/Data_being_shared_flag.png" style="width:100%" /> <figcaption>As part of the set-up process, Youper flags what data is being shared.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system flags inputs to show this, or otherwise tells the user what data is being shared and with whom.</p> <p><b>Discussion:</b></p> <p>Transparency shouldn't end with a vague story about data generically being stored and shared; rather, it should be explicit about what particular data is being used for any particular purpose.
Similarly, when data is not being shared, the user will also be reassured to know this.</p> </div> <span>Explaining Reductive Inputs</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 03/15/2019 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p><strong>Problem:</strong></p> <p>The user wants the system to respect their identity and not force them to conform to an identity that doesn't match their own simply to use the app.</p> <figure><img alt="Example from Ada of a binary gender input requirement and follow-on explanation of why the limits are in place" data-entity-type="file" data-entity-uuid="ff12460a-b778-4432-a078-bf233aa5ba9c" src="/sites/default/files/content-images/Explaining_Reductive_Inputs-ada.png" /> <figcaption>By providing an explanation of why it requires a binary response, this health app acknowledges the identity of the individuals using it even when it has to limit them within the application.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>If the system does ask the user to provide a binary choice around identity—e.g. if a health app asks a user to choose between male and female—then the system explains why it is necessary to reduce the options to only these choices, and qualifies what the implications of each are.</p> <p><strong>Discussion:</strong></p> <p>While ideally any app allows the user to choose any gender, medical diagnosis normally requires a binary choice between male and female. The UI is not really the best place to discuss issues around the relationship between gender and sex and the social construction of both.
Explaining the necessity of asking a reductive question in such a way as to make it clear that the system empathizes with the user's concerns is tricky, yet the system must try to do its best.</p> </div> <span>Inclusive Voice Recognition</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>mattiealston</span></span> <span>Sat, 09/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/46" hreflang="en">Fairness &amp; Inclusiveness</a></div> </div> <div><p><b>Problem:</b></p> <p>Users with speech impairments would like to use voice interfaces, but the software often fails to recognize their speech.</p> <figure><img alt="A screenshot from a YouTube video introducing Google's Project Euphonia." data-entity-type="file" data-entity-uuid="f002fda1-d28b-4de9-9f6c-0a4d1836b7c2" src="/sites/default/files/content-images/Inclusive_voice_interface_Project_Euphonia.png" style="width:100%" /> <figcaption>Google's Project Euphonia provides voice recognition to users with speech impairments.</figcaption> </figure> <p><b>Solution:</b></p> <p>The AI that powers an inclusive voice interface is trained with voices from users with speech impairments so that it can understand their inputs. This is achieved to a baseline level through collective training, in addition to training for each individual to optimize the system for their personal speech patterns.</p> <p><b>Discussion:</b></p> <p>Accessibility issues are well understood when it comes to traditional web applications, and the need for interfaces that are accessible to users with vision loss or physical impairments, for example, is widely acknowledged (even if fully accessible UIs are not provided as widely as they should be).
By comparison, it can be easy to overlook the issues that users may have with new emergent interfaces, especially those that rely on AI or ML. Unlike the lack of alt text for screen readers, for instance, there are few indications to the able-bodied tester that the interface will fail for some users. Voice interfaces are a very useful case study in that regard — it is only through inclusive interfaces like Voiceitt or Google’s Project Euphonia that we are made aware of how inadequate our standard provision may be.</p> <p>Going further, we should not just consider how these projects can make ordinary software accessible to all users; we should embrace opportunities to leverage the unique properties of these emergent interfaces in order to significantly improve the quality of life for people with speech impairments, such as those with cerebral palsy, amyotrophic lateral sclerosis (ALS), Parkinson’s Disease, brain cancer, or traumatic brain injury.</p> </div> <span>Privacy Reassurance</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Mon, 01/15/2018 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><strong>Problem:</strong></p> <p>Users want to know what happens to their data when it's captured by the system.</p> <figure><img alt="Example of clear privacy and data sharing practices in the Capturebot AI function from Microsoft" data-entity-type="file" data-entity-uuid="9a6bac43-f02b-462b-bf82-d71dec30715e" src="/sites/default/files/content-images/Privacy_Reassurances-capturebot_1.png" /> <figcaption>Microsoft's AI-powered caption bot has an explanatory box about what happens to the image after uploading, beyond its primary captioning function.</figcaption> </figure>
<p><strong>Solution:</strong></p> <p>The system communicates to the user what data it’s capturing, how that data is stored, how long it's stored for, and what other systems the data is being communicated to.</p> <p><strong>Discussion:</strong></p> <p>This is especially important when it comes to data captured based on user behaviour rather than explicit input. Ideally the system captures the least amount of data to achieve its task and stores it for no longer than is necessary, but sometimes data needs to be retained (e.g. for that user to maintain a profile, or to train the system—or in some cases, another system—in general). In these cases, a key aspect of transparency is understanding the mechanics of this data capture and storage.</p> <p dir="ltr"><b>Other Examples:</b></p> <figure><img alt="Example of privacy reassurance following a data deletion in an application" data-entity-type="file" data-entity-uuid="67e7e3a3-c4c2-4528-a95f-c0ef92a9324e" src="/sites/default/files/content-images/Privacy_Reassurances-wysa.png" /> <figcaption>In this example, the user is given valuable insight into what happens to their data after a delete.</figcaption> </figure> </div> <span>Qualitative Feedback for Training</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Sun, 10/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><strong>Problem:</strong></p> <p>When a system being trained by a user is not producing adequate outputs, more information is required to understand what's not working as expected or desired.
The user does not want to invest a lot of effort in this process.</p> <figure><img alt="An example of a low-effort feedback option with prompts (multiple choice inputs and open text entry field) in a user app" data-entity-type="file" data-entity-uuid="98ead399-30da-4b54-b67c-e6e2c0bb264e" src="/sites/default/files/content-images/Qualitative_Feedback_For_Training-ada.png" /> <figcaption>In the two images above, a user is given the option to provide feedback with the simple push of a button, and then the application makes feedback easy by auto-generating some options as well as providing an open text field for more information if desired.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>Triggered by a negative response or rating, the system prompts the user to provide qualitative feedback in the form of an explanation of what is unsatisfactory. This may be free text input or selection from preset options, and is usually done in combination with <a href="/patterns/81/quantitative-feedback-training">Quantitative Feedback for Training</a>.</p> <p><strong>Discussion:</strong></p> <p>As the effort required here is relatively high for the user, with no immediate reward, care should be taken to motivate the user accordingly. The system could explain that the feedback will help it improve, thus establishing the benefit to the user. Or the motivation could simply be a pleasant and rewarding interaction, e.g.
an interesting conversation point with a chatbot.</p> </div> <span>Quantitative Feedback for Training</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>leighbryant</span></span> <span>Fri, 09/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p dir="ltr"><strong>Problem:</strong></p> <p dir="ltr">The system needs to be trained by the user. The user does not want to invest a lot of effort in this process.</p> <figure><img alt="Example of a low-effort quantitative feedback option (thumbs up or thumbs down) included in a conversational flow with a chatbot" data-entity-type="file" data-entity-uuid="3b776fce-3e85-4264-b97b-2ceb3832121a" src="/sites/default/files/content-images/Quantitative_Feedback_For_Training-Daisy.png" /> <figcaption>By prompting for a quick thumbs up or thumbs down, the system can get quick and easy feedback from the user to train itself with.</figcaption> </figure> <p><strong>Solution:</strong></p> <p>When providing an output, the system prompts the user to rate the quality of that output. This could be a simple thumbs up / down, or slightly more granular feedback such as a star rating or other score.</p> <p><strong>Discussion:</strong></p> <p>Whether this is fed directly back into training the algorithm or is captured for analysis outside the system will depend on the specifics of the implementation.
If the feedback is being captured for analysis, then it makes sense to also prompt the user to provide <a href="/patterns/86/qualitative-feedback-training">Qualitative Feedback for Training</a>, with that feedback mechanism triggered by negative quantitative feedback.</p> </div> <span>Randomized Inputs</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>mattiealston</span></span> <span>Tue, 08/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/41" hreflang="en">Autonomy &amp; Control</a></div> </div> <div><p><b>Problem:</b></p> <p>Sometimes a system is so new to the user, or so complicated, that they don’t know where to start, especially in a system that algorithmically generates imagery or sound. Rather than learning how the system works, the user would like to just see it in action as quickly as possible.</p> <figure><img alt="A screenshot showing the Boomy application's randomize feature." data-entity-type="file" data-entity-uuid="647584c7-2949-4dc5-8224-6c8e6f79f454" src="/sites/default/files/content-images/Randomise_seed_Boomy.png" style="width:100%" /> <figcaption>Boomy generates new music based on random variables, quickly producing output which the user can later tweak.</figcaption> </figure> <p><b>Solution:</b></p> <p>Instead of prompting the user to enter inputs one by one, a single click on a “generate” button seeds random values into the system and produces output based on those variables. The user may then be given the option to edit the results.</p> <p><b>Discussion:</b></p> <p>While this is a very effective pattern for first use and feature discovery, there is no reason to limit it to just that use case.
In many instances, generative systems can produce pleasantly surprising results from random prodding, and allowing the user to “spin the wheel” facilitates playful exploration and serendipitous discovery. In certain instances this may be more rewarding than directed or goal-oriented use.</p> <p>Obviously the nature of the algorithm itself will determine how good the results are when generated from random values rather than considered ones, but even if the algorithm is prone to wildly variable results when operated across all possible variables, a randomized-inputs feature can be tweaked to work within a sensible subset of variables that produces a good range of output.</p> </div> <span>Semantic Search</span> <div> <div>Application</div> <div><a href="/taxonomy/term/21" hreflang="en">Input Data</a></div> </div> <span><span>mattiealston</span></span> <span>Mon, 05/15/2017 - 00:00</span> <div> <div>Topic</div> <div><a href="/taxonomy/term/36" hreflang="en">Transparency &amp; Trust</a></div> </div> <div><p><b>Problem:</b></p> <p>When the user is searching a catalogue of items, they want the system to return results based on that system’s understanding of the content of those items, not just via matching keywords contained in each.</p> <figure><img alt="A screenshot showing Microsoft Academic search converting free text input into semantic tags." data-entity-type="file" data-entity-uuid="f1c88874-0656-474c-a959-bcf1de07c2e8" src="/sites/default/files/content-images/semantic_search_Microsoft_Academic_0.png" style="width:100%" /> <figcaption>Microsoft Academic search turns the user's input into semantic tags. Via machine learning, these are already associated with items it has indexed, allowing it to return more intelligent results than keywords alone.</figcaption> </figure> <p><b>Solution:</b></p> <p>The system parses the user’s text input into semantic variables.
Using machine-learning-generated mappings and its ability to learn about the content of items, the system can use those semantic variables to return more meaningful results than keyword matching alone.</p> <p><b>Discussion:</b></p> <p>This is, of course, the true promise of all AI- and ML-powered software — rather than relying on precise input from the user to achieve optimal results, a system can take more ambiguous natural language, extrapolate the user’s intent from it, and combine that with an understanding of the actual meaning of the data. We are now seeing how effectively this can be implemented — one challenge will be how quickly expert users, accustomed to “speaking computer” through decades of conditioning, are able to adapt to software that is actually easier to use.</p> </div>
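<p>The idea behind Semantic Search (parsing free text into semantic variables and matching on meaning rather than on literal keywords) can be sketched in miniature. The Python below is an illustrative toy, not how Microsoft Academic or any real system works: a hand-written synonym-to-concept table stands in for the mapping a real system would learn with machine learning, and catalogue items are ranked by cosine similarity over the shared concepts.</p>

```python
import math
from collections import Counter

# Toy "semantic space": a hand-written synonym-to-concept table stands in
# for the learned mapping a real ML system would produce.
CONCEPTS = {
    "film": "movie", "movie": "movie", "cinema": "movie",
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "recipe": "cooking", "cooking": "cooking", "baking": "cooking",
}

def to_vector(text: str) -> Counter:
    """Map free text onto concept counts (the 'semantic variables')."""
    return Counter(CONCEPTS[w] for w in text.lower().split() if w in CONCEPTS)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query: str, catalogue: list[str]) -> list[str]:
    """Rank catalogue items by semantic similarity to the query."""
    qv = to_vector(query)
    scored = [(cosine(qv, to_vector(item)), item) for item in catalogue]
    return [item for score, item in sorted(scored, reverse=True) if score > 0]

catalogue = [
    "classic cinema reviews",
    "automobile maintenance guide",
    "baking recipe collection",
]
# "film" appears in none of the items, but it maps to the same concept as
# "cinema", so the reviews item is still found.
print(semantic_search("film", catalogue))
```

<p>Because "film" and "cinema" resolve to the same concept, the query matches an item that contains neither word literally, which a pure keyword match would miss. In a production system the concept table would be replaced by learned embeddings over the indexed items.</p>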