Autonomous vehicles made safe with the world’s first ethical algorithm

Experts at the Technical University of Munich (TUM) have pioneered the world’s first ethical algorithm for autonomous vehicles, which could see autonomous driving become the norm globally.

The researchers’ ethical algorithm is significantly more advanced than its predecessors, as it distributes risk fairly instead of operating on an either/or principle. The algorithm has been tested in 2,000 scenarios of critical conditions in various settings, such as streets in Europe, the US, and China. The innovation could improve the safety and uptake of autonomous vehicles worldwide.

The research, ‘An ethical trajectory planning algorithm for autonomous vehicles,’ is published in Nature Machine Intelligence.

Why are autonomous vehicles not widely used?

Before self-driving vehicles can be used on the streets on a large scale, there is a range of technical issues that need to be overcome to optimise safety. When developing algorithms for autonomous driving, ethical questions play a vital role.

For example, the software needs to expertly handle unforeseeable events and make necessary decisions in the event of an impending accident. It is not as simple as the vehicle just being able to drive itself – making real-time decisions to avert unpredictable circumstances is critical to people’s safety.

Maximilian Geisslinger, a scientist at the TUM Chair of Automotive Technology, explained: “Until now, autonomous vehicles were always faced with an either/or choice when encountering an ethical decision. But street traffic can’t necessarily be divided into clear-cut, black-and-white situations; much more, the countless grey shades in between have to be considered as well. Our algorithm weighs various risks and makes an ethical choice from among thousands of possible behaviours – and does so in a matter of only a fraction of a second.”

How does the algorithm make ethical driving decisions?

The ethical parameters of the algorithm’s risk evaluation were based on a 2020 written recommendation by a European Commission expert panel. This included basic principles such as prioritising the worst-off and distributing risk fairly among all road users.

The researchers converted these rules into mathematical calculations by classifying vehicles and people moving in the street according to the risk they pose to others and their respective willingness to take risks. For example, a truck can cause serious damage to other road users whilst, in many scenarios, sustaining minimal damage to itself. The opposite is the case for a bicycle.
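This classification can be sketched with the standard risk-ethics definition of risk as probability multiplied by harm. The class names and numeric values below are illustrative assumptions for the truck/bicycle example, not figures from the paper:

```python
from dataclasses import dataclass

@dataclass
class RoadUser:
    """Illustrative road-user model: the harm a collision would cause to
    others vs. the harm this user would likely sustain (0-1 scales)."""
    name: str
    harm_to_others: float     # damage this user can inflict on others
    own_vulnerability: float  # damage this user tends to sustain

def collision_risk(probability: float, harm: float) -> float:
    """Risk-ethics definition: risk = probability x harm."""
    return probability * harm

truck = RoadUser("truck", harm_to_others=0.9, own_vulnerability=0.1)
bicycle = RoadUser("bicycle", harm_to_others=0.1, own_vulnerability=0.9)

# With the same 5% collision probability, the truck imposes far more
# risk on others than the bicycle does, while the bicycle bears far
# more risk itself.
print(collision_risk(0.05, truck.harm_to_others))      # risk truck poses to others
print(collision_risk(0.05, bicycle.own_vulnerability)) # risk the bicycle bears
```

Asymmetric harm values like these are what let the algorithm treat "truck near bicycle" differently from "bicycle near truck" even when the collision probability is identical.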

Next, the algorithm was taught not to exceed a maximum acceptable risk in the various respective street situations. The team also added a variable that accounts for the responsibility of traffic participants – such as obeying traffic regulations.
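The two ideas above, a hard cap on acceptable risk and a responsibility variable for rule-breaking participants, can be sketched as a filter over candidate trajectories. The function, threshold, and weighting scheme here are hypothetical illustrations, not the authors' implementation:

```python
def trajectory_allowed(risks: dict[str, float],
                       responsibilities: dict[str, float],
                       max_acceptable: float = 0.1) -> bool:
    """Reject any candidate trajectory whose responsibility-weighted
    risk to any road user exceeds a hard cap (illustrative values)."""
    for user, risk in risks.items():
        # A participant who violates traffic rules carries part of the
        # risk themselves; responsibility in [0, 1] discounts the risk
        # the ego vehicle must account for.
        weighted = risk * (1.0 - responsibilities.get(user, 0.0))
        if weighted > max_acceptable:
            return False
    return True

# A trajectory endangering a rule-abiding cyclist beyond the cap is rejected:
print(trajectory_allowed({"cyclist": 0.2}, {"cyclist": 0.0}))      # False
# The same raw risk to a participant who ignored traffic rules may pass:
print(trajectory_allowed({"pedestrian": 0.2}, {"pedestrian": 0.6}))  # True
```

The cap acts per road user rather than on an average, so a trajectory cannot buy overall safety by concentrating unacceptable risk on one participant.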

Earlier algorithms approached serious traffic situations with only a small number of potential manoeuvres, often leading autonomous vehicles to stop completely in unclear cases. The team’s novel algorithm evaluates a much larger set of possible manoeuvres, giving the vehicle more degrees of freedom while posing less risk to everyone involved.

For example, in the event that an autonomous vehicle wants to overtake a bicycle, but a truck is approaching in the oncoming lane, existing data on the surroundings and individual participants are now employed.

The algorithm calculates parameters such as whether the bicycle can be overtaken without entering the oncoming lane while maintaining a safe distance, the risk posed to each respective vehicle, and the risk those vehicles present to the autonomous vehicle itself.

In an unclear situation, the algorithm instructs the autonomous vehicle to wait until the risk to all participants is acceptable. It avoids aggressive manoeuvres, replacing a binary yes/no decision with a comprehensive evaluation of many options.
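The decision process described above can be sketched as choosing among many candidate manoeuvres and falling back to waiting when even the safest option exceeds the risk cap. The candidate names and risk figures are hypothetical, chosen only to mirror the overtaking example:

```python
def choose_manoeuvre(candidates, max_acceptable=0.1):
    """Pick the candidate whose worst per-participant risk is lowest;
    fall back to waiting if even that exceeds the cap (illustrative)."""
    best = min(candidates, key=lambda c: max(c["risks"].values()))
    if max(best["risks"].values()) > max_acceptable:
        return "wait"  # no manoeuvre is acceptable to all participants
    return best["name"]

# Overtaking now endangers the oncoming truck; following the cyclist
# keeps every participant's risk low, so it is selected.
candidates = [
    {"name": "overtake_now", "risks": {"cyclist": 0.05, "truck": 0.30}},
    {"name": "follow_cyclist", "risks": {"cyclist": 0.02, "truck": 0.01}},
]
print(choose_manoeuvre(candidates))  # follow_cyclist
```

In a real planner the candidate set would contain thousands of sampled trajectories rather than two named options, but the structure, score every option and wait when none passes, is the same.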

Franziska Poszler, a scientist at the TUM Chair of Business Ethics, said: “Until now, often traditional ethical theories were contemplated to derive morally permissible decisions made by autonomous vehicles. This ultimately led to a dead end, since, in many traffic situations, there was no other alternative than to violate one ethical principle. In contrast, our framework puts the ethics of risk at the centre. This allows us to take into account probabilities to make more differentiated assessments.”

What is next for the algorithm’s development?

The researchers explained that even though algorithms using risk ethics can make decisions based on the ethical principles of each traffic situation, they cannot ensure accident-free street traffic. Therefore, moving forward, additional differentiations, such as cultural differences in ethical decision-making, will need to be considered.

Currently, the algorithm has only been validated in simulations. The team aims to test it on the streets using the research vehicle EDGAR to refine the technology further.
