It’s hard to believe another year has passed and we are once again back from the IT Directors Forum with tales of great meetings and late nights. Despite its new format (no longer three days at sea on the Aurora), it was still a great event, and we were lucky enough to be given the opportunity to host a speaking slot again.
Being based on dry land allowed us to bring our colleagues, Ian Robinson and Sam Warner, to the event to present on behalf of Black Pepper.
Ian and Sam gave a fascinating talk, describing a recent project we undertook with Jaguar Land Rover and exploring ethical Machine Learning. For those who missed it, here are five key takeaways from their talk:
1. Is your information in formation?
- Do you have eyes on your data?
- Do you know what you record?
- Are you prepared? What are your processes?
- Think for the future - remember, data is cheap!
2. Think like a machine
- Do you understand how AI/ML can help you?
- Do you understand where it can’t help you?
- Can the decision be made from the data alone?
3. You are still responsible!
- Are you using the right data?
- Can you explain why this data is pertinent to the problem?
- How do you source your data?
- Can its quality be improved?
- Is your data representative?
4. The past is the past
- You can provide life changing software
- Most of you already do!
- Don’t just be a black mirror to the past
- This is an easy trap to fall into - but you can do so much better. Don’t reflect the inequalities of the past; steer towards a fairer future.
- Is your data even relevant anymore?
- Algorithmic biases are very dangerous - we can’t trust machines to ‘just work’. Learning must take place in a deliberate and engineered way.
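To make the point about deliberate, engineered learning concrete, here is a minimal sketch of one such check: comparing a model’s positive-outcome rate across groups (sometimes called demographic parity). All function names, data, and group labels are hypothetical and for illustration only.

```python
# Hypothetical sketch: a minimal check for one kind of algorithmic bias,
# comparing a model's positive-outcome rate across two groups.
# Names, data, and groups are illustrative, not from the talk.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def parity_gap(preds_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Model outputs for two made-up applicant groups:
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% approved
}

gap = parity_gap(predictions)
print(f"Parity gap: {gap:.2f}")  # a large gap is a red flag worth investigating
```

A check like this doesn’t prove fairness on its own, but it is the kind of deliberate measurement that stops a team from assuming the machine will ‘just work’.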
5. Machine Learning for people
- Where you provide value to people, profit should follow
- AI/ML are not silver bullets
- How can you help your users?
- Working together as a team of humans and machines, we can trust the specific decisions being made while remaining sensitive to their human impact.
The talk was a great hit with the audience and sparked lively discussion between attendees. Highlights included:
- Differentiation between “What we want to do” vs “If we should do it, and how to mitigate ethical risk”
- Observations on the idea of humans mimicking machines to mimic humans
- “Why should transparent machine learning be a priority for us when humans aren’t always explainable either?”
- “Where can I go to learn how to learn?”
Over the coming weeks and months there will be an influx of blog content appearing here to answer some of these questions, expand on the key takeaways and more. Watch this space!