According to the 2021 review of the Coordinated Plan on Artificial Intelligence (AI), Europe can build strategic leadership in seven action areas: environment, health, robotics, the public sector, home affairs, transport, and agriculture. It is clear that these fields will play a leading role in the coming months, whether in the climate emergency or in the changes to healthcare brought on by the COVID-19 pandemic.
Obviously, this momentum was amplified during the COVID-19 outbreak, with the benefits of the technology becoming more tangible as governments and companies used AI tools to combat the virus. For instance, two tech companies in the Netherlands trained an AI system to detect COVID-19 via X-ray scans, helping more than a hundred hospitals ramp up testing.
Luckily, Europe has the resources at its disposal to keep pace. We have a large number of strong incumbent industries, the world’s largest single market, a sturdy legal framework, excellent public services, and many companies and small and medium-sized enterprises that are leaders in their fields. Europe also boasts high-quality education and research capabilities; it has more professional developers than the United States and has been the most prolific publisher of AI papers over the past 20 years.
Although I am a supporter of technology and digitization, technological progress must not be a step backwards from a fundamental rights perspective. Unfortunately, certain AI applications can pose exactly such threats to fundamental rights, especially in the above-mentioned areas. Clearly, the level of threat differs: a chatbot poses a different kind of risk than an autonomous car or a technology that could endanger our lives.
The use of artificial intelligence in Europe is expanding rapidly; however, there are no clear boundaries, and without them it could get out of hand and harm society as a whole. It could easily happen that our every step is watched, as we are already experiencing to some degree with facial recognition in use at many European airports. In another scenario, companies would be able to sell such products to authoritarian regimes, as we know is already happening to some extent in Israel.
As Europe, we should take a strategic lead in artificial intelligence while maintaining our core values and a deep respect for fundamental rights. We should neither resist nor fear progress, but the necessary legal framework needs to be adopted. I believe that by setting rules and boundaries, we will be able to benefit from artificial intelligence both as individuals and as a society.
I am working on a draft legislative opinion that will be presented in the European Parliament’s Committee on Culture and Education (CULT) in mid-February. I am doing my best to address the gaps I have identified in order to ensure the safe use of artificial intelligence.