Security & Ethics - two key considerations for the automotive industry

Nineteen car companies are racing to put self-driving cars on the road by 2021, with players such as Tesla and Google leading the pack. This will be a life-changing innovation for current and future generations. However, with no historical data or research to build upon, car companies are in the precarious position of going in blind as they try to manufacture safe and affordable autonomous vehicles.

That said, cars currently in production are already generating masses of data which manufacturers are using to build better cars, from sat-navs and cruise control to Bluetooth and parking sensors. But managing and controlling this influx of information is becoming overwhelming for automotive companies, with car data monetisation predicted to be a $750 billion business by 2030. As more vehicles connect to the internet, the automotive industry must analyse and understand the data they create in order to build autonomous vehicles that offer the highest standards of security, customer experience and efficiency. Failing to do so is a matter of life and death for passengers, as well as innocent bystanders.

Despite this, two key areas within the automotive industry remain shrouded in uncertainty, and until businesses resolve them, putting driverless cars on the road is little more than a distant pipe dream.

Security

As with any technology, cyber security is a key concern in the drive towards autonomy. In fact, more than 66 per cent of automotive companies are concerned about addressing security issues associated with connected cars. It’s easy to see why. In September, a group of Chinese security researchers took remote control of a Tesla Model S from 12 miles away, tampering with the vehicle’s brakes, dashboard, door locks and numerous other electronic systems. Although the hack was carried out in test conditions, the same attack in the real world would prove disastrous. It would only take one death caused by a cyber attack for consumer trust in autonomous vehicles to disappear.

Tesla updated the software the following day, and this kind of instantaneous response will be key to the success of driverless vehicles. As with the technology found in our homes, from games consoles and Netflix to laptops and mobile devices, patch management is central to safeguarding against cyber attacks. The difference here, though, is that while hacking a PlayStation may see financial information compromised, it is human safety that is at risk if a car is hacked. Automotive businesses will quite literally have lives in their hands, so there is zero room for error.

Many devices in the home are updated fortnightly, if not more frequently. For autonomous vehicles, patches must be far more frequent to guard against cyber attacks. If vehicles were only updated fortnightly, the opportunity for life-threatening hacks would grow exponentially, as criminals would have that much longer to find a way into the system. Whether this is done with the aim of stealing cars remotely or for more sinister reasons, the potential for catastrophe is huge. Until the entire automotive industry seriously tackles cyber security, safety will always be a concern.
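To make the cadence point concrete, here is a minimal sketch assuming a hypothetical manufacturer update server; none of the function names or fields correspond to any real vendor's API, and the hourly interval is purely illustrative. The point is simply that the check interval is measured in hours, not a fortnight.

```python
# Hypothetical over-the-air update check; no real manufacturer API is implied.
import time

CHECK_INTERVAL_SECONDS = 60 * 60  # check hourly rather than every two weeks

def fetch_manifest() -> dict:
    """Placeholder for contacting the manufacturer's update server."""
    return {"version": "2024.1.3", "signature": "...", "payload_url": "..."}

def signature_is_valid(manifest: dict) -> bool:
    """Placeholder for cryptographic verification of the signed update."""
    return True

def check_for_update(installed_version: str) -> str:
    """Return the version that should be running after this check."""
    manifest = fetch_manifest()
    if manifest["version"] != installed_version and signature_is_valid(manifest):
        # Download, verify and stage the patch; apply when the car is parked.
        return manifest["version"]
    return installed_version

if __name__ == "__main__":
    version = "2024.1.1"
    while True:
        version = check_for_update(version)
        time.sleep(CHECK_INTERVAL_SECONDS)
```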

Ethics

Security isn’t the only issue automotive businesses have to tackle before autonomous vehicles become a reality. Although the threat of hacking is potentially life-threatening, self-driving cars can’t hit the roads until the technology can assess its surroundings in real time and make ethical decisions.

Life-and-death decisions are difficult enough for humans to make. Take the ‘trolley problem’, for example. A man standing by a railway line spots a runaway train heading for a group of five workers on the tracks, who are too far away to hear him and facing the wrong way to see any signals he makes. A lever next to him will divert the train onto another track, on which only one worker is present. The vast majority of people say they would pull the lever and sacrifice one worker to save the five.

The second version of the trolley problem has the same set-up but no lever. Instead, the man is standing next to a very large gentleman and works out that pushing him in front of the train would stop it and save all five workers. Responses from test groups given this version overwhelmingly show that people are unwilling to become physically involved in another person’s death, even if it means saving everyone on the tracks.

If we as humans struggle with such ethical decisions, automotive businesses have their work cut out to resolve them in software. Mercedes has confirmed the software at the heart of its autonomous vehicles will protect the passenger above all others. Will other businesses follow suit, or will the ethics of such a decision lead to a different approach? For example, if a self-driving car is programmed to always protect the passenger, would it run over a group of schoolchildren who stepped out in front of it rather than endanger the lone passenger by veering off the road? If the opposite were true and the car were programmed to save the largest number of people, why would consumers agree to put themselves at risk by travelling in an autonomous vehicle? It is this type of question traditional automotive companies, as well as disruptive innovators such as Google and Apple, must answer before driverless cars become mainstream.
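To show how starkly the choice of policy changes behaviour, here is a toy sketch of the two approaches described above. The Manoeuvre fields, harm scores and policy names are hypothetical and vastly simpler than anything a real vehicle would use; the point is only that the same situation produces opposite decisions under the two policies.

```python
# Toy decision policy; illustrative only, not any manufacturer's real logic.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_passenger_harm: float   # 0.0 (none) to 1.0 (fatal), assumed scores
    expected_bystander_harm: float   # 0.0 (none) to 1.0 (fatal), assumed scores

def choose_manoeuvre(options: list[Manoeuvre], policy: str) -> Manoeuvre:
    """Pick a manoeuvre according to a configured ethical policy."""
    if policy == "protect_passenger":
        # Passenger-first: minimise harm to the occupant, break ties on bystanders.
        return min(options, key=lambda m: (m.expected_passenger_harm,
                                           m.expected_bystander_harm))
    if policy == "minimise_total_harm":
        # Utilitarian alternative: minimise combined harm to everyone involved.
        return min(options, key=lambda m: m.expected_passenger_harm
                                          + m.expected_bystander_harm)
    raise ValueError(f"unknown policy: {policy}")

options = [
    Manoeuvre("brake in lane", expected_passenger_harm=0.2, expected_bystander_harm=0.8),
    Manoeuvre("swerve off road", expected_passenger_harm=0.7, expected_bystander_harm=0.1),
]
print(choose_manoeuvre(options, "protect_passenger").name)    # -> brake in lane
print(choose_manoeuvre(options, "minimise_total_harm").name)  # -> swerve off road
```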

Driving Innovation

Both traditional vehicle manufacturers and technology giants entering the market for the first time must conquer the security and ethical issues autonomy poses. How they manage this will likely depend on the type of business they are.

The likes of Apple and Google will have the technology expertise in-house to tackle cyber security and patch management, but will likely hire specialists or outsource to third parties when it comes to manufacturing the vehicle itself. Traditional automotive companies, on the other hand, will need to enlist third-party support from technology experts to ensure their autonomous vehicles are safe, secure and offer a seamless customer journey in line with the technology we use in our everyday lives.

Taking advantage of big data and specialist help will drive further innovation. Not only will vehicles be driverless, but mobile devices could speak to them directly. If a driver starts their commute at 6:30am every weekday, the engine could be set to turn on five minutes beforehand in the winter months (with the drive disabled) to defrost the car. Similarly, MOT garages could contact customers when their vehicle is due to be tested, and suggest when tyres should be checked or the oil changed, reducing the likelihood of a breakdown. It is this seamless experience that big data supports, and consumers will come to expect this level of interaction with their vehicles.

Faults in autonomous cars are the difference between life and death for passengers and pedestrians, so it is vital that automotive businesses and technology experts collaborate constantly to guarantee autonomous vehicles do not endanger passengers or other road users. Only then can a seamless driving experience be secured. After all, who will want a car whose gadgets save them ten minutes per journey but can’t guarantee their safety? Nobody. So ignoring these two forks in the road isn’t an option.
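As a closing illustration of the pre-heating idea above, here is a minimal sketch assuming a fixed 6:30am weekday commute, a five-minute lead time and a simple definition of the winter months; the names and thresholds are assumptions, not any manufacturer's feature.

```python
# Hypothetical pre-heat scheduler for a regular winter weekday commute.
from datetime import datetime, timedelta, time

COMMUTE_START = time(6, 30)              # assumed commute start
PREHEAT_LEAD = timedelta(minutes=5)      # assumed defrost lead time
WINTER_MONTHS = {11, 12, 1, 2}           # assumed months needing defrosting

def next_preheat(now: datetime) -> datetime | None:
    """Return the next engine pre-heat time, or None if none is scheduled.

    Simplification: weekends and non-winter days return None rather than
    skipping ahead to the next qualifying weekday.
    """
    candidate = datetime.combine(now.date(), COMMUTE_START) - PREHEAT_LEAD
    if candidate <= now:                 # today's slot has passed; try tomorrow
        candidate += timedelta(days=1)
    if candidate.weekday() >= 5 or candidate.month not in WINTER_MONTHS:
        return None                      # only pre-heat on winter weekdays
    return candidate

print(next_preheat(datetime(2024, 1, 15, 22, 0)))   # -> 2024-01-16 06:25:00
```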