The importance of trustworthy Artificial Intelligence

Researchers from Linköping University have developed a roadmap intended to guide policymakers towards the future of trustworthy Artificial Intelligence.

Artificial Intelligence (AI) has an increasingly prominent presence in our everyday lives, and this is believed to be only the beginning. For the trend to continue, however, AI must be shown to be trustworthy in all scenarios. To assist in this endeavour, Linköping University (LiU) is co-ordinating TAILOR, an EU project that has developed a research-based roadmap intended to guide research funding bodies and decision-makers towards the trustworthy AI of the future. ‘TAILOR’ is an abbreviation of Foundations of Trustworthy AI – Integrating Reasoning, Learning and Optimisation.

About the TAILOR project

Funded by EU Horizon 2020, TAILOR is one of six research projects created to develop the AI of the future. TAILOR is laying the foundation of trustworthy AI by producing a framework, guidelines, and a specification of the needs of the AI research community.

The roadmap presented by the project is the first step towards standardisation, giving decision-makers and research funding bodies an understanding of how trustworthy AI should be developed. A number of research problems must be solved, however, before standardisation can be achieved.

Fredrik Heintz, Professor of Artificial Intelligence at LiU, and co-ordinator of the TAILOR project, emphasised the importance of trustworthy AI, explaining: “The development of Artificial Intelligence is in its infancy. When we look back at what we are doing today in 50 years, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now.” 


The researchers have defined three criteria for trustworthy AI: it must satisfy ethical principles, it must conform to laws, and its implementation must be robust and safe. Each of these criteria poses challenges, however, especially the implementation of ethical principles.

Heintz explained: “Take justice, for example. Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years.”  

Basic research into AI is a priority  

The project's central focus will be on large, comprehensive questions, and standards will be developed for all those who work with AI. However, Heintz believes that this can only be achieved if basic research into AI is a top priority.

“People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it. If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people,” said Heintz. 

Several legal proposals within the EU and its Member States have been written by legal specialists who, according to Heintz, lack expert knowledge of AI, and this is a serious problem.

“Legislation and standards must be based on knowledge. This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type.”  

The complete roadmap is available at Strategic Research and Innovation Roadmap of trustworthy AI. 
