Gaining user trust

So you’ve gone and tried to save the world (read: solve a business problem) by building an amazing machine learning system, after carefully curating your data, considering the implications of your plans, and following the steps in our ITDF Blog Post.

It doesn’t actually save the world until people want to use it.

In recent years, end users have been stung many times by exploitative software and abuse of their online data. Of late, we’ve seen some technologies take nasty turns: people have been the targets of hate speech on microblogging platforms, have had their social media data exploited for monetary or political gain, and have even fallen victim to misinformation in the form of so-called “fake news”. I think it is fair to say that the users of our systems are growing apprehensive of software once again.

We need to regain the trust of our users and potential user markets before it is too late and that bond is gone forever. This is far easier said than done: it is a continuous process that takes time. It gets even harder when the phrase ‘machine learning’ is thrown around, with media articles misinforming the public, and Hollywood to thank for casting our minds to films like Terminator.

Without the trust (and, most importantly, consent) of our users, we often have far less data to learn from, including one of the truly vital avenues of data: user feedback.

As Caitlin McDonald sums up so well in her article Data Citizens: Why We All Care About Data Ethics:

“Data citizens are impacted by the models, methods, and algorithms created by data scientists, but they have limited agency to affect them. Data citizens must appeal to data scientists in order to ensure that their data will be treated ethically.”

When you are dealing with data, you are dealing with real-life humans. Caitlin makes a number of strong arguments for fair and useful ‘data science’. As users of that data, we should aim to utilise it in the fairest way, for everyone’s benefit.

This highlights a key issue with using human data: how do we give these citizens some control back? There is certainly no trivial solution.

Furthermore, Black Pepper have carried out consumer research into current public opinion on machine learning. The resulting report, which can be found here, highlights widespread confusion: 43% of adults surveyed reported some confusion about machine learning, and 42% were unsure what it is or whether it is used in the goods and services they consume.

The report goes on to discuss levels of trust in machine learning systems, and how they change depending on where the technology is used. The results were consistently low, with trust ranging from 38% (navigation systems) all the way down to 7% (children’s education).

Finally, more than half (52%) were worried about ML making choices that impact them without their prior knowledge.

These findings are appalling, and it is our job as technologists to help improve this.

Photo by rawpixel on Unsplash

So how do we help our users understand, and how do we begin to rebuild the bridges that have been torn down?

For me, what this really boils down to is:

“Machine learning is for people, not just profit.”

We have to care about our users, and we have a responsibility to use their data fairly. We need to drive human value before we can drive business value, and this should be done as honestly and as openly as possible.

And this doesn’t conflict with building a profitable system.

I believe that if you gain user trust, whether by allowing some transparency into artificially intelligent systems, listening to users and gathering feedback, or any number of other approaches, you will grow your user base (or, at the very least, reduce any alienation of existing users). You will also gain essential insight to help build more accurate systems, eventually driving value for consumers as well as for the business.
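As a concrete (if simplified) illustration, here is a minimal Python sketch of what consent-gated feedback collection might look like. The `User` and `FeedbackStore` types are hypothetical, invented for this example rather than drawn from any real system, but they capture the principle argued above: only learn from users who have explicitly opted in, and show them the model’s confidence rather than hiding it.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    has_consented: bool = False  # explicit opt-in; off by default

@dataclass
class FeedbackStore:
    """Stores prediction feedback only for users who have opted in."""
    records: list = field(default_factory=list)

    def record(self, user: User, prediction: str,
               confidence: float, correct: bool) -> bool:
        # Respect consent: silently drop feedback from users
        # who have never opted in.
        if not user.has_consented:
            return False
        self.records.append({
            "user": user.user_id,
            "prediction": prediction,
            "confidence": confidence,  # shown to the user, not hidden
            "correct": correct,
        })
        return True

# Usage: a consenting user corrects a low-confidence prediction.
store = FeedbackStore()
alice = User("alice", has_consented=True)
print(store.record(alice, prediction="spam", confidence=0.62, correct=False))  # True

bob = User("bob")  # never opted in, so nothing is stored
print(store.record(bob, prediction="ham", confidence=0.91, correct=True))  # False
```

The design choice worth noting is that consent defaults to off: a user who has never been asked contributes nothing, which is exactly the behaviour that builds, rather than erodes, trust.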

To achieve this, we need to stop and think before we blindly do. Will you?

Download your copy of our research report here
