Stacy Baird - Europe’s Privacy Law - A Barrier to Artificial Intelligence or an Enabler?

News

Dec 18, 2018
by Stacy Baird

Intellectual property expert leads this month's online discussion on the Salzburg Questions for Corporate Governance by posing questions for boards about the EU's GDPR.

Stacy Baird at the 2018 program of the Salzburg Global Corporate Governance Forum

This article is part of the Salzburg Questions for Corporate Governance, a series by the Salzburg Global Corporate Governance Forum. Join in the discussion on LinkedIn.

Companies across the globe are dealing with the impact of Europe’s new General Data Protection Regulation (GDPR), which has extraterritorial legal reach, by revising their privacy policies and practices (such as those annoying pop-ups about cookie use on many websites, a notice the GDPR requires). One of the topics of our work in Salzburg was whether boards need expertise to address the use of AI in a company’s business processes and, possibly, its products and services. A question boards must consider is the implication of the GDPR for the use of Artificial Intelligence (AI) and Machine Learning (ML). The GDPR carries severe penalties, and significant privacy issues tend to carry high reputational costs. With heightened concerns around AI, ML, and privacy, brighter lights will shine on these issues when they arise. As your company moves into the use of these new technologies, are you prepared? Is your board?

With the GDPR in effect for just over six months, it is too early to know its impact – good or bad. Do you see the GDPR as an impediment to, or an enabler of, AI and ML for your company? Are there legal frameworks you can imagine, or are aware of, that may be a better approach? Is your company weighing these issues?

The more data an AI or ML system processes, the better and more accurately it can complete its tasks. When that data personally identifies individuals, privacy questions come to the fore. There are also privacy concerns when the outputs of an AI or ML system paint a portrait of an individual, revealing personal attributes that the individual may prefer to keep private. Indeed, data may not be personally identifying on its own, yet could be combined with data that are, with the result of identifying an individual. The European Court of Justice has already held that where such identification is “likely reasonably” to occur, the former data moves into the class of data protected by the Data Protection Directive, the predecessor to the GDPR.
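The linkage risk described above can be made concrete with a small sketch. The datasets, field names, and values below are entirely hypothetical; the point is only to show how a dataset released without names can be re-identified by joining it to public data on shared "quasi-identifiers":

```python
# Hypothetical illustration of a linkage attack: a dataset with no names
# can still identify individuals when joined to public data on shared
# quasi-identifiers (here: postcode, birth year, and sex).

# "Anonymized" records released without names (hypothetical data)
health_records = [
    {"postcode": "1010", "birth_year": 1971, "sex": "F", "diagnosis": "asthma"},
    {"postcode": "5020", "birth_year": 1985, "sex": "M", "diagnosis": "diabetes"},
]

# Publicly available roster that does contain names (hypothetical data)
public_roster = [
    {"name": "A. Muster", "postcode": "1010", "birth_year": 1971, "sex": "F"},
    {"name": "B. Beispiel", "postcode": "4040", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def link(records, roster):
    """Re-identify records whose quasi-identifiers match exactly one person."""
    identified = []
    for rec in records:
        key = tuple(rec[k] for k in QUASI_IDENTIFIERS)
        matches = [p for p in roster
                   if tuple(p[k] for k in QUASI_IDENTIFIERS) == key]
        if len(matches) == 1:  # a unique match re-identifies the record
            identified.append((matches[0]["name"], rec["diagnosis"]))
    return identified

print(link(health_records, public_roster))
```

Here the first health record matches exactly one person in the public roster, so the diagnosis is linked to a name despite the release containing no names – the kind of "likely reasonably" identifiable situation the court addressed.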

In Europe, the GDPR, in part, addresses these issues directly, stating in Article 9: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.”

The GDPR requires that the consent of a data subject (i.e., the person whose data is being processed) be freely given, specific, informed, and unambiguous – and given by a clear affirmative act, such as a written or oral statement. “Specific and informed” means that consent is granted only for the particular purpose for which it is sought and does not extend to other (e.g., new) purposes. Further, consent can be withdrawn at any time, and the individual has a right to have the data deleted (i.e., the right to be forgotten).

An alternative to obtaining consent is to anonymize, de-identify, or pseudonymize the data, which allows a data processor to use the data for purposes beyond those for which consent was obtained. However, anonymization is only as effective as it is irreversible. As the UK’s Information Commissioner’s Office points out, it may not be possible to establish with absolute certainty that a particular dataset has been irreversibly anonymized, especially when it is taken together with other data that may exist elsewhere.
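One common pseudonymization technique is keyed hashing, sketched minimally below. The key and field names are hypothetical, and a real deployment would need key management and a documented re-identification risk analysis; the sketch only shows why pseudonymized data is not anonymized data – anyone holding the key can recompute the mapping:

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this would come from a key vault.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym (HMAC-SHA256).

    The same input always maps to the same pseudonym, so records can still
    be linked for analysis. But the mapping is reversible by anyone who
    holds SECRET_KEY and can guess candidate inputs, which is why this is
    pseudonymization rather than irreversible anonymization.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "data.subject@example.com", "purchase": "book"}
safe_record = {
    "email": pseudonymize(record["email"]),  # direct identifier replaced
    "purchase": record["purchase"],
}

# Determinism preserves linkability across records:
assert pseudonymize("data.subject@example.com") == safe_record["email"]
```

The design trade-off mirrors the legal one: determinism keeps the data useful for ML (records about the same person stay linked), but that same property is what keeps the data within the GDPR’s reach as personal data.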

GDPR Article 5 sets out “principles relating to processing of personal data,” including “lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.” Some of these principles may run contrary to the use of AI and ML, which typically must first collect as much data as possible and then analyze the data after collection (the “learning” process). This makes complying with the purpose limitation and data minimization principles challenging.

Article 22 protects data subjects from decisions based solely on “automated individual decision-making, including profiling” that produce legal effects or similarly significantly affect the data subject. The requirement can be overcome if the data subject gives explicit consent. Moreover, the restriction addresses only decisions based solely on automated processing. Therefore, for decisions such as applications for credit, loans, or health insurance, or in the case of job interviews, performance appraisals, school admissions, or court-ordered incarceration, automation can (and many would say should) be used to inform a human decision, not supplant it.

The use of AI and ML for “decision-making including profiling” must also be “explainable” to the data subject. But the required extent of that explainability – and the degree to which the data subject must understand it – is an open question. Barriers to understanding an algorithm include the technical literacy of the individual data subject and the mismatch between the high-dimensional mathematical optimization characteristic of machine learning (i.e., the conditional probabilities an ML system generates) and the demands of human-scale reasoning and styles of interpretation (i.e., human understanding of causality).

There are competing views on whether the provisions of the GDPR enable or impede AI and ML. For example, how does the GDPR right to withdraw consent weigh in a company’s decision to use the data? It may be a challenge to delete data from widely federated datasets, and doing so diminishes the “learning” based on that data. With each new use of the data, the company is required to go back and obtain fresh consent. Is that alone an impediment? With the growing range of devices collecting data (e.g., the Internet of Things), will it be possible to obtain specific and informed consent as a practical matter?

In contrast to those raising concerns, Jeff Bullwinkel at Microsoft has written that the GDPR framework strikes the right balance between protecting privacy and enabling the use of AI – provided the law is interpreted reasonably.

What is your view? How is your company weighing these issues? Do you see the GDPR as an enabler? Blocker? Do you know enough about the GDPR to make informed decisions? Does the rest of your board know enough? Given the potential liabilities and risks to the company, do you think it should?

Have an opinion? 

We encourage readers to share their comments by joining the discussion on LinkedIn.


Stacy Baird is a Salzburg Global Fellow and consulting director at the Singapore-based consulting firm TRPC. His expertise lies in law and in advising businesses and governments on information technology, privacy, data protection, cloud computing, and intellectual property (IP) public policy matters. Stacy also serves as executive director of the U.S.-China Clean Energy Forum Intellectual Property Program, where he helps address bilateral technology transfer and IP issues in the context of clean energy research and commercialization. Previously, Stacy served as Senior Policy Advisor to U.S. Senator Maria Cantwell, including work on the U.S. Patriot Act, and as advisor to U.S. Congressman Howard Berman on issues of first impression related to the then-nascent internet and the mapping of the human genome. Prior to law, Stacy worked as a music recording engineer with clients including Madonna, Stevie Nicks, Elvis Costello, Brian Eno, and Francis Coppola. He has held appointments as Visiting Scholar at the University of Southern California College of Letters, Arts and Sciences and Visiting Fellow at the University of Hong Kong Faculty of Law. Stacy has a J.D. from Pace University and a B.A. in radio and television communications from San Francisco State University.


The Salzburg Questions for Corporate Governance is an online discussion series introduced and led by Fellows of the Salzburg Global Corporate Governance Forum. The articles and comments represent the opinions of the authors and commenters, and do not necessarily represent the views of their corporations or institutions, nor of Salzburg Global Seminar. Readers are welcome to address any questions about this series to the Forum Director, Charles E. Ehrlich: cehrlich@salzburgglobal.org. To receive a notification when the next article is published, follow Salzburg Global Seminar on LinkedIn or sign up for email notifications here: www.salzburgglobal.org/go/corpgov/newsletter
