During the pandemic, we have seen unprecedented openness towards data sharing. It may provide a valuable example of how to build trust in artificial intelligence (AI) technologies going forward. I met with Kalliopi Spyridaki, Chief Privacy Strategist for SAS Europe and Asia Pacific, to discuss the latest developments in data governance and AI prior to the virtual event Artificial Intelligence and the Ethics Mandate, organised by SAS and Forbes on March 16 at 11:00 CET.
Before joining SAS, Kalliopi worked in European and Greek law firms, a trade association, a public affairs firm, the European Commission and the Greek Ministry of Foreign Affairs.
In her role today, Kalliopi focuses on laws and government policies that affect SAS and its customers in areas including privacy and data protection, general data governance, and artificial intelligence.
Before the pandemic, many individuals were hesitant to hand over sensitive personal information to the government. Has COVID-19 changed how we think about sharing data?
The pandemic’s disruption may indeed have led to a profound shift in our attitudes towards the exchange of data. Amid a global pandemic, people have become more willing to share their data with government test, trace and isolate programmes, wanting to play their part in overcoming the current crisis.
For many reasons, some countries have been more effective at containing the virus than others. But for all the differences in each country’s national response, the successes noted across Europe and Asia Pacific find common ground in a commitment to upholding data protection standards. It is likely that the General Data Protection Regulation (GDPR) has had a domino effect in driving confidence in data sharing across Europe and among its trading partners.
This unprecedented openness towards data sharing driven by the pandemic may provide a valuable example of how to build trust in artificial intelligence (AI) technologies going forward. Notably, in order to put in place the foundations for trusted AI, we need to develop robust and secure technology, create a culture of digital trust that encourages data sharing, and design a regulatory framework that promotes the responsible use of AI.
Building digital trust will be essential to the adoption of AI tools. What is happening in the European legislation related to AI this year?
If the digital trust felt by citizens has contributed to the success of many test and trace programmes, the upcoming AI legislation will likely help entrench this trend in the realm of AI.
In April 2021, the European Union (EU) will propose the first horizontal legislation on AI globally. The new law will set out rules for the trustworthy development and use of high-risk AI. The specifics are still taking shape, including, for example, the definitions of “high risk” and “AI,” as well as rules on conformity self-assessments that will need to be undertaken by organisations that develop and use AI in Europe. In general, the legislation will aim to advance transparency, accountability, and consumer protection for all AI applications. This is likely to be achieved by requiring organisations to adhere to robust AI governance and data quality requirements.
This law is expected to be negotiated among the EU institutions and global stakeholders for a couple of years before it comes into effect. Its impact on the AI market is expected to be profound, establishing responsible AI by design as the European – but potentially also global – standard. At the very least, Europe’s close trading partners will see the benefit of introducing equivalent rules for AI developed in their jurisdictions.
At the same time, within the frame of the recently adopted European Data Strategy, the EU proposed a Data Governance Act at the end of 2020 and will introduce a Data Act in 2021. When the new rules enter into force, they will foster more meaningful business-to-government and business-to-business data sharing. The aspiration is to create a genuine single market for data in Europe, with common data pools that organisations can tap for growth and innovation. This development will further support AI innovation and greater uptake of AI applications by governments, businesses and citizens.
In view of these rapid and ambitious regulatory developments in Europe, all organisations that operate in the AI market should be considering today how they will design, use and position their AI products and services. Organisations should aim to adhere to – but also go beyond – the new rules by innovating around AI product and service characteristics that will give them a competitive advantage as customer demand for ethical and responsible AI grows.
Meet Kalliopi in a virtual event for organisational leaders on March 16 at 11:00 a.m. CET. The conference is aimed at executives, senior management and thought leaders who want to drive the digital transformation of their organisations without overlooking the ethical aspects. To register and secure your spot, visit Artificial Intelligence and the Ethics Mandate.