AI Must Be Designed for Transparency

Artificial intelligence is here, and has been for more than 20 years. So why does AI still tend to bring about feelings of anxiety and fear? What information is missing for users to feel that they can trust AI with their data?

We may not think of it as AI, but every time Google gives you a search result, or Netflix serves up a movie recommendation, you’re interacting with a type of artificial intelligence called machine learning. Machine learning, at its most basic, is the ability of an algorithm to identify patterns in observed data and make predictions based on that data. Lately, we’ve seen a proliferation of machine learning in personal assistants like Amazon Alexa and Apple Siri. The point is that we use AI in our daily lives without putting much thought into it.
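To make that “identify patterns, then predict” loop concrete, here is a minimal sketch of a collaborative-filtering recommender in plain Python. Everything in it is hypothetical: the users, movies, and ratings are invented, and a real service like Netflix uses far richer signals and far more sophisticated models.

# A toy recommender: find users with similar tastes, then predict
# ratings for movies the target user hasn't seen. All data is invented.
from math import sqrt

# Each user's ratings (1-5) for the movies they have watched.
ratings = {
    "alice": {"Arrival": 5, "Inception": 4},
    "bob":   {"Arrival": 4, "Inception": 5, "Frozen": 2},
    "carol": {"Arrival": 1, "Frozen": 5},
}

def similarity(a, b):
    """Cosine similarity between two users' rating vectors:
    1.0 means identical tastes, 0.0 means nothing in common."""
    shared = set(a) & set(b)
    dot = sum(a[m] * b[m] for m in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, catalog):
    """Predict the user's rating for each unseen movie as a
    similarity-weighted average of other users' ratings."""
    predictions = {}
    for movie in catalog:
        if movie in ratings[user]:
            continue  # already rated; nothing to predict
        weighted_sum = weight_total = 0.0
        for other, their_ratings in ratings.items():
            if other == user or movie not in their_ratings:
                continue
            w = similarity(ratings[user], their_ratings)
            weighted_sum += w * their_ratings[movie]
            weight_total += w
        if weight_total:
            predictions[movie] = weighted_sum / weight_total
    return sorted(predictions.items(), key=lambda kv: -kv[1])

print(recommend("alice", {"Arrival", "Inception", "Frozen"}))
# Predicts a low score (~2.4) for "Frozen": alice's tastes track
# bob's, and bob disliked it. That is the whole trick: observed
# patterns in past behavior drive predictions about future behavior.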


These innovations in smarter technology have helped us become more efficient, streamline tasks, and better organize our lives. However, some machine-learning-based platforms are starting to feel intrusive.

What were once useful product recommendations have become invasive advertisements or loosely relevant content on social media pages. As artificial intelligence improves, and it is improving rapidly, we should put more thought into how users can understand what it’s doing with their data.


Most people are unfamiliar with how, where, or why suggested information is surfaced to them. This uncertainty often translates into a reluctance to adopt. For instance, a friend was recently alarmed when she opened her computer and saw advertisements for a product she had been texting a friend about. How did her PC get this information from her phone? Where else was her data going? Suddenly, she was asking questions and feeling uneasy about both her computer and her phone.

The data transfer in this case might be entirely benign. At some point, she may have searched for the product on her phone and simply forgotten about it. Regardless, the experience wasn’t designed in a way that let her understand where and how her data was being used. Users need to know where this ‘smart’ information is coming from, what is being done with it, and how it can be managed.


Technology companies have a history of designing in a vacuum, shipping technology, and then asking for forgiveness when things go south. Some issues are relatively benign, like the cross-device advertising example outlined above.

But others are far more serious, like the Facebook-Cambridge Analytica scandal, in which millions of Facebook users’ personal data was harvested without their consent. Across these incidents, the same problem arises: a lack of transparency, for both companies and their customers, about how data is being used. As UX professionals, we’re compelled to ask how we can craft experiences that support users rather than confuse, frustrate, or harm them.


With the rapid growth in AI, there’s an opportunity for a larger discussion within the UX community about how to facilitate a more seamless integration of data management and transparency for both companies and their customers.

If we get in front of the problem and design artificial intelligence systems with the end user in mind, we can help companies instill confidence in users that AI has a powerful capacity to better their lives.
