EIT Digital – focusing on the future of Artificial Intelligence

EIT Digital CEO Willem Jonker spoke to Clifford Holt, International Editor at The Innovation Platform, about the issues of digital sovereignty and Artificial Intelligence, both of which are covered by the organisation's recently published policy reports.

EIT Digital aims for global impact through European innovation fuelled by entrepreneurial talent and digital technology. This year, EIT Digital celebrates its 10th anniversary by looking back at some of its successes and forward to its future priorities and expected impact. The organisation has grown from 30 members to 300 over the last decade and, moving forward, will focus on solidifying Europe's digital sovereignty and identifying which key technologies to invest in, amongst other areas.

The Innovation Platform's International Editor, Clifford Holt, caught up with EIT Digital's CEO, Willem Jonker, to discuss issues such as digital sovereignty and Artificial Intelligence, both of which are covered by EIT Digital's recently published policy reports.

EIT Digital will soon celebrate its 10th anniversary. In recent communications, you have promoted technology that serves an inclusive, fair, and sustainable digital Europe. Can you tell us more about how EIT Digital helps to support those values?

Firstly, EIT Digital contributes to this through the delivery of our innovation and education activities; we train people, we retain talent in Europe, and we help entrepreneurs to develop their ideas. Through those activities, we embed European values in the way we treat technology – technology that should be an enabler and should serve a purpose. In keeping with our European values, it is important that such technology is inclusive, fair and, of course, sustainable.

Many current technologies have had a hugely beneficial impact: they have enabled global connectivity, delivered a wealth of information to our fingertips, and provided real economic value. At the same time, however, there are challenges, particularly when it comes to issues such as platform dominance, and people may now be concerned about their privacy or, further still, that new technologies will replace them in their jobs. There are also societal concerns around technologies such as AI – what benefits it brings and whether it can be controlled.

These concerns need to be addressed, and Europe takes a different position compared to other parts of the world in how it approaches such challenges. With GDPR, for example, Europe demonstrated the value we place on the privacy of our data, and when it comes to AI, Europe is already addressing some of the ethical issues.

In the recently released documentary The Social Dilemma, it is argued that the essence of a tool is that you take it when you need it, rather than it making you use it when it needs you. This is an important point, because many people now feel that certain digital technologies, such as social media platforms – particularly when they are available on mobile devices – have the opposite effect: people feel as though they need to do something for the tool (an alert of some kind appears and demands their attention) without any particular benefit for them. As such, while the power of digital technology is now widely recognised, it should also be understood that technology must serve a purpose and serve citizens, and not the other way around.

These are the values that we are working to bring into Europe's digital future, and that means creating open systems and a level playing field. That is achieved partly with regulation, and partly by being a player. And, of course, the latter is where Europe has a certain weakness, at least in some specific domains. For example, most Europeans now have smartphones, but very few are made and developed in Europe – they will be an iPhone from the US or an Android phone from South Korea. Indeed, many young people won't even be aware that there are European companies developing similar technology.

This means that a very limited number of players dominate the marketplace, which is unhealthy from numerous perspectives. From an economic point of view, the economy is driven by competition, by creating level playing fields, and by making sure that markets are not dominated by a chosen few. Regulation clearly has a role in achieving this, but it needs to be done via an open framework and, moreover, we must ensure that more than just a couple of players are able to use the level playing field that has been created. This is where Europe now needs to focus: by stimulating the creation of such actors and by rebuilding other areas. And this is not to dominate, but to offer choice.

At a time when both COVID-19 and the 5G debate are calling Europe's digital sovereignty into question, EIT Digital has launched a policy report providing an overview of policy motivations, trends, instruments, and the roles of various actors in shaping the perception of, and perspectives for, Europe's digital sovereignty. Could you tell me more about the main concerns that need to be addressed and how they can be solved?

The coronavirus has acted as both a magnifier and an accelerator. Many of the issues were already there; they did not emerge because of COVID-19, but the virus has served to make them extremely visible.

One such issue concerns the disadvantage of being dependent on global supply chains. This is certainly not restricted to digital but applies to many other areas as well, and COVID-19 has acted as something of a wake-up call for quite diverse domains, which are now considering reshoring production to Europe, and so on. When it comes to digital, as we have already discussed, certain areas are dominated by non-European players, and so we need to ask ourselves whether we want that situation to persist.

With 5G, we have seen a combination of geopolitical arguments, security arguments, and technology arguments all coming into the mix. Europe is quite well-positioned here because it is one of the few areas where we have strong European players: both Nokia and Ericsson are delivering 5G technologies. As such, in principle at least, 5G should be an area of less concern for Europe than others, despite the fact that the discussion around non-European suppliers and security threats has served to make 5G an example of the need for European digital sovereignty.

In terms of digital sovereignty itself, this does not mean that everything has to be done at home. There is a global market, and regardless of the technology you are developing there will always be a need to source components from around the world, and that is not going to change. Digital sovereignty is more about having control and also about being able to safeguard certain values. Of course, in terms of cybersecurity, it is also about safeguarding not only against straightforward cyberattacks but also against economic disadvantages. If you do not have access to the newest generation of technology, you are always one step behind.

Indeed, it is often necessary to distinguish between technologies you need to control because they are vital to your country's development, economy, and security; technologies that can be shared; and technologies that you simply need to be able to access. Those different layers need to be recognised.

The term 'digital sovereignty' has almost become a buzzword and is often applied as a generic phrase. It should instead be recognised that it means different things in different domains, and so different actions will be required. With 5G, for instance, alternatives are already being developed, which means a different kind of action will be needed there than for social media platforms, where it is going to be incredibly difficult to develop alternatives with the same power and impact.

Nevertheless, awareness of this is growing as a result of COVID-19, and we now need to ask what to do about it. This needs more analysis and, indeed, more differentiation, and so we also need to ask what the key technologies are, which ones we need to control, and which ones we only need access to. To return to the example of 5G, there is the argument that controlling the infrastructure is necessary to ensure cybersecurity. However, controlling the core components of a network will not necessarily mean that the security threats are also controlled.

EIT Digital has also published a policy report about Artificial Intelligence. Could you tell me a little more about how AI can impact innovation potential, fairness, trust, and growth opportunities? How will EIT Digital support that?

We decided to write the AI report together with a couple of our sister organisations under the EIT umbrella, as we are aware that the challenges of AI differ from domain to domain. Indeed, one of the report's main conclusions is that a one-size-fits-all approach to Artificial Intelligence policy is inadequate. Similarly, with regard to GDPR – which gave Europe enormous prestige in this domain and where we do have a one-size-fits-all data policy – we have also argued that we should build on differentiation.

This means that there is a need to distinguish between different types of data. For example, sensitivity levels for personal data vary, and so having the same level of protection for all personal data – for a person's health and their social habits alike, for instance – does not make sense. This can go even further, too: a person's health data can range from information about a serious illness to information about very minor ailments and so, again, the sensitivity varies.

A completely different category of data is machine data. This could, for example, be data that is continually read out in a factory to predict when maintenance will be required. This will clearly need a different approach to that being taken for a person’s health data.
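To make this concrete, here is a minimal, purely illustrative sketch (not taken from the report) of how such machine data might be used: readings from a hypothetical factory sensor are compared against an assumed threshold to flag when maintenance should be scheduled. The sensor, readings, and threshold are all invented for illustration.

    # Illustrative sketch only: a hypothetical predictive-maintenance check on
    # machine data. The readings and the threshold are invented for this example.
    from statistics import mean

    def needs_maintenance(vibration_readings, threshold=4.5):
        # Flag a machine for maintenance when its average vibration level
        # over the monitoring window exceeds the chosen threshold.
        return mean(vibration_readings) > threshold

    # Example: readings streamed from a (hypothetical) factory sensor
    recent_readings = [3.9, 4.2, 4.8, 5.1, 4.7]
    if needs_maintenance(recent_readings):
        print("Schedule maintenance for this machine.")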

As such, our report argues that differentiation is necessary and that this can be achieved by building on the successes we have seen with GDPR. Having a regulatory environment that is too strict, that is based on grounds that are not applicable to the underlying data, and which doesn’t allow certain applications, however, blocks innovation.

When it comes to Artificial Intelligence, this concerns the interpretation of data. Here, it is important to understand how an algorithm reaches its conclusions, whether that can be made transparent and explainable, and how biased the algorithm is. It is therefore important to understand that AI systems vary: no two algorithms will necessarily reach the same conclusions, and many carry a bias – whether introduced intentionally or inherited from the developer's own unconscious bias when the algorithm was coded. This, then, needs to be better understood and also regulated.
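As a hedged illustration of what measuring bias can mean in practice (this example is not drawn from the EIT Digital report), one simple signal is to compare an algorithm's favourable-outcome rates across two groups; a large gap suggests the algorithm's decisions deserve closer scrutiny. The decisions, group labels, and threshold below are all hypothetical.

    # Illustrative sketch only: a simple disparity check on an algorithm's
    # decisions. The decisions and group labels below are invented data.
    def positive_rate(decisions):
        # Share of favourable outcomes (1 = favourable, 0 = unfavourable)
        return sum(decisions) / len(decisions)

    # Outcomes split by a (hypothetical) group attribute
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    group_b = [0, 1, 0, 0, 1, 0, 0, 1]

    gap = abs(positive_rate(group_a) - positive_rate(group_b))
    print(f"Favourable-outcome gap between groups: {gap:.2f}")
    if gap > 0.2:  # arbitrary threshold chosen for this sketch
        print("Large gap: the algorithm's outcomes warrant a bias review.")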

There is also a need to put in place a system whereby the fallibility of AI is recognised; the conclusions reached by a machine may not always be the right ones, no matter how advanced the algorithm. Thus, just as we see in both the medical and legal fields, a second opinion is often necessary. The same should be true for AI, as the algorithms in a machine embody a certain interpretation and certain values, which can lead to different outcomes.
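As a purely illustrative sketch (again, not prescribed in the report) of what such a 'second opinion' could look like in practice: run two independently built models on the same case and refer the case to a human expert whenever they disagree. The models, risk scores, and thresholds below are hypothetical.

    # Illustrative sketch only: refer a case for human review when two
    # independent (hypothetical) models disagree about it.
    def second_opinion(case, model_a, model_b):
        verdict_a, verdict_b = model_a(case), model_b(case)
        if verdict_a != verdict_b:
            return "refer to human expert"
        return verdict_a

    # Stand-in models with different embedded assumptions (invented thresholds)
    model_a = lambda case: "treat" if case["risk_score"] > 0.5 else "monitor"
    model_b = lambda case: "treat" if case["risk_score"] > 0.7 else "monitor"

    print(second_opinion({"risk_score": 0.6}, model_a, model_b))  # -> refer to human expert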

In the health sector, AI has shown great potential in discovering the causes of diseases by being able to analyse large data sets. However, this can often be hindered by the fact that we are highly fragmented in Europe, as both the laws governing the availability and accessibility of data and how that data can be used often differ from country to country. These restrictions are typically the result of actions that have been taken to protect data and privacy, which, of course, is crucial, but they can also hamper innovation.

With machine data it is possible to be much more liberal, because the underlying data is less sensitive and the outcomes are more predictable, meaning that more stable algorithms can be developed.

Our reports (available on our website) highlight all of these important points and more.

Willem Jonker
CEO
EIT Digital
info@eitdigital.eu
Tweet @EIT_Digital
www.eitdigital.eu

Please note, this article will also appear in the fourth edition of our new quarterly publication.
