Strengthening Europe’s industrial AI leadership - Orgalim at the Politico AI Summit
2 October 2020
This week saw high-level stakeholders from the world of AI come together, both virtually and in person, to discuss the future of European AI at the #POLITICOAI Summit, supported by Orgalim. Among the speakers addressing key questions around how to boost Europe’s AI leadership while addressing ethical concerns were Margrethe Vestager, Commission Executive Vice-President for a Europe Fit for the Digital Age; Didier Reynders, European Commissioner for Justice; Philippe De Backer, former Belgian Minister for the Digital Agenda; and Malte Lohan, Orgalim’s Director General.
Malte took part in the most extensive session of the two days, alongside MEP Eva Kaili, Lucilla Sioli, Director for AI and Industrial Strategy at DG CNECT, and other expert panellists, bringing industry’s perspective on how the EU’s AI strategy can help Europe compete in the global race for AI.
First off, he pointed out that while the USA and China may be far ahead on AI generally and business-to-consumer AI in particular, Europe has a clear lead in industrial or business-to-business AI. And with industrial AI set for rapid growth, "Europe is uniquely positioned to seize the opportunity to be part of the next wave of AI and the benefits that follow, not least for European resilience and the green and digital transitions."
Europe’s technology industries have many years of experience in embedding AI in manufacturing processes. It is important to recognise that, when it comes to industrial AI, "we are not starting from zero: there is a very solid, very comprehensive legislative framework already in place that already addresses many of the issues of concern."
Other panellists also acknowledged the need to differentiate between high-risk and low-risk AI. Eva Kaili, who chairs the European Parliament’s Panel for the Future of Science and Technology (STOA), said "a risk-based and sector-specific approach will help to be faster with the low risk sectors." Lucilla Sioli said, "we’re not interested in developing rules that apply as a blanket to all possible AI applications" and shared that the recent public consultation showed widespread support for the idea of focusing on high-risk applications.