
Anastassia Lauterbach - What Questions Should Boards Be Asking About AI?

Tech strategist opens a new online discussion series – the Salzburg Questions for Corporate Governance – by posing questions for boards about the potentially unintended ethical consequences of artificial intelligence

Anastassia Lauterbach at the 2018 program of the Salzburg Global Corporate Governance Forum

This article is the first in the Salzburg Questions for Corporate Governance series by the Salzburg Global Corporate Governance Forum. Join the discussion on LinkedIn

From addressing bias in translation and computer vision models to the use of Machine Learning (ML) in HR, businesses and policymakers are increasingly looking at artificial intelligence through the lens of ethics and risk management. ML models scale everything with the brute force of mathematics, so corporate entities need to think twice before applying these models to augment jobs, change how they treat particular customer segments, or hire new employees.

In the past two years we have already seen examples of coding errors, of a lack of thought about which components should be included in a model, and of a refusal to vet a company's monetization model to protect that company's integrity and reduce reputational risk (just think of Facebook and its role in the 2016 US election).

Moreover, different geographies and industries do not apply the same thinking about what is ethical, what should be mitigated as a risk, and what should be left untouched to preserve competitive advantage.

Corporate boards should get involved in discussions on ethics, risk and AI on a more structured basis. There are several sets of questions to consider, each touching on a different risk of current ML models.

Do we have a good data governance policy in place?

ML models learn from data. ML techniques are not mutually exclusive and can be leveraged in different combinations, depending on the task and the available dataset. It is in this context that a visionary board should ask how the company thinks about data to solve strategic and operational problems, whether there is a solid data governance framework in place, and if and when the business should consider providing wide access to data, allowing as many people as possible to find valuable insights.

A policy to invest in and develop robust datasets will allow for fewer conflicts within a business. Conflicts can result from different views on how to measure or interpret the data, what kind of algorithms to apply, and at what point in time a company requires outside expertise.

Strong data governance practices enable data sharing, which in turn enables innovation. To be most effective, data governance needs to be embedded in an organization's culture and become more than a system of tactics to derive business value. If this happens, data governance is likely to influence organizational behavior. Data governance frameworks should be at the top of every corporate board agenda, as they enable a company to move from piloting data technologies to mass-scale deployment, and they influence an enterprise's organizational hierarchies and culture.

What biases might be present in the data collection and use and how can we counter them?

There are implicit biases in the values that determine which datasets we use to train a computer. For example, if an ML human resources application for finding the best person to fill a job includes a feature such as "someone who stays for years and gets promotions," it will almost always yield male candidates. In autumn 2017, one ML system tasked with identifying professions in images came to the famous conclusion: women like shopping. There is a widely known example of a Google ML engine in photo recognition in which dark-skinned faces were associated with gorillas. Julia Angwin studied bias in law enforcement and criminal justice, identifying Northpointe's racially biased COMPAS system, which was used in sentencing people across the United States. Bloomberg reported that Amazon's same-day delivery was bypassing ZIP codes with predominantly African-American populations. If cancer-spotting AI algorithms are only trained on light-skinned people, people with darker skin will have a lower survival rate. These painful cases were caused by a lack of diversity in the teams training AI models and building datasets, and by a failure to pay attention to contextual circumstances.
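One way a board can make this question concrete is to ask whether anyone routinely measures model outcomes by group. The sketch below is a minimal, hypothetical illustration in Python, assuming a pandas environment; the column names, data and the four-fifths threshold are invented for illustration and do not describe any specific company's system.

```python
# Minimal, hypothetical sketch: compare the rate at which a screening model
# "selects" candidates across a protected attribute. Data and column names
# are invented for illustration.
import pandas as pd

def selection_rate_by_group(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Share of candidates the model marks as selected, per group."""
    return df.groupby(group_col)[selected_col].mean()

# Hypothetical screening output: 1 = recommend interview, 0 = reject.
candidates = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "m", "f", "m", "m"],
    "selected": [ 1,   1,   0,   1,   1,   0,   1,   0 ],
})

rates = selection_rate_by_group(candidates, "gender", "selected")
print(rates)

# A ratio well below ~0.8 (the "four-fifths rule" often used in hiring audits)
# signals that the training data or features may encode historical bias.
print("disparity ratio:", round(rates.min() / rates.max(), 2))
```

Even a check this simple surfaces the question raised above: whether a feature such as tenure plus promotions is quietly acting as a proxy for gender.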

If diverse teams do the coding work, bias in data and algorithms can be mitigated. If AI is to touch on all aspects of human life, its designers should ensure it represents all kinds of people. The values of the engineers building AI will be reflected in the solutions they come up with. Boards should question the diversity of coding teams.

Companies like Google, Facebook or Twitter operate in the so-called attention economy, where brands compete for eyeballs and allocate their advertising dollars to the most successful "attention" marketplaces. Product design and lifecycle management are focused on attention generation, the so-called "stickiness" of products and services that keeps a user attached to them. Bias in datasets and algorithms tends to appear in the fields with the highest monetization potential, while context, nuance and niche users are often disregarded. The board of such a business should not shy away from asking hard questions about whether the company's monetization model prioritizes privacy, fairness and the personal preferences of consumers.

Last but not least, AI models have contributed to the rise of content designed to deceive people. The startup AI Foundation raised $10m earlier this month to develop an AI system called Reality Defender. The system uses machine learning to identify fake news and malicious digital content meant to deceive people online. As these kinds of offerings become more widespread, AI companies seeking to monitor content on the internet will have to prove that they are doing so ethically. Boards of content aggregation companies and media businesses need to ask for frameworks and working examples showing that their companies are tackling the problem of "fake news" and reducing the risk. If efforts fail, boards need to insist on transparency and clear communication of what happened. As the very recent controversy around Facebook's top management shows, keeping silent is not an option.

How can we ensure data transparency and avoid “black boxes”?

There are product liability, rights and liberty, and governance-related issues to keep in mind when using "deep learning" models. When a neural net determines the respective weights for different features within a model, we do not know why it did so. This can be dangerous for specific uses that could impact individuals and society, such as in healthcare, finance, law enforcement, or education.

The AI Now Institute recommends abolishing the use of unvalidated and pre-trained black box models in any core public agencies, such as criminal justice, healthcare, welfare and education.

Companies and researchers are working to overcome the black box problem. The MIT Technology Review reports that a neural network architecture developed by researchers at the AI tech company NVIDIA is designed to highlight the areas of a video image that contribute most to the behavior of a car's deep neural network.

Jeff Clune at the University of Wyoming and Carlos Guestrin at the University of Washington (and Apple) have found ways of highlighting the parts of images that classification systems are picking up on. Tommi Jaakkola and Regina Barzilay at MIT are developing ways to provide snippets of text that help explain a conclusion drawn from large quantities of written data. DARPA, which does long-term research for the US military, is funding several research projects through a program called Explainable AI (XAI). XAI will without doubt be a prominent next-generation field of research and funding. The question remains whether the companies with the largest datasets and the biggest AI talent pools will benefit most from this research, and thereby continue to monopolize AI markets.
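To make the idea tangible, the sketch below is a minimal illustration, not one of the specific methods named above, of the simplest kind of post-hoc explanation a team can produce: scikit-learn's permutation importance on a public dataset, which asks which inputs a trained model actually relies on.

```python
# Minimal sketch of a simple post-hoc explanation: permutation importance.
# Shuffle each input feature in turn and measure how much the model's accuracy
# drops; the features whose shuffling hurts most are the ones the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda item: -item[1])
for feature, importance in ranking[:5]:
    print(f"{feature}: {importance:.3f}")
```

Techniques like this do not open the black box itself, but they give a board-level answer to the question "what is this model paying attention to?", which is the spirit of the XAI work described above.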

A visionary board can ask where deep learning models are applied in product design or introduced by vendors, and whether there are efforts in place to understand how such models come to their conclusions.

Are we vulnerable to cyber-attacks?

AI poses unique cybersecurity issues because machines are being used to train other machines, thus scaling the exposure of compromised pieces of code. AI algorithms can contain bugs, biases, or even malware that are hard to detect, as in the DDoS attack of October 2016 that affected several hundred thousand devices. Like any technology, AI can also be used by criminal groups. Understanding their motives and techniques is important to prevent attacks and to detect them in a timely manner. As an example, a group of computer scientists from Cyxtera Technologies, a cybersecurity firm based in Florida, has built a machine learning system called DeepPhish that generates phishing URLs that cannot be detected by security algorithms. The system was trained using actual phishing websites.

The proliferation of machine learning solutions for cybersecurity comes with certain risks if AI practitioners rush to bring a system online: some of the training data might not be thoroughly scrubbed of anomalies, causing an algorithm to miss an attack. Experienced hackers can also switch the labels on code that has been tagged as malware. A diverse set of algorithms, rather than dependency on one single master algorithm, might be a way to mitigate this risk, so that if one algorithm is compromised, the results from the others can still surface the anomaly.
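As an illustration of that design choice, the sketch below (a minimal example assuming scikit-learn, with synthetic stand-in data rather than real security telemetry) runs several independent anomaly detectors and flags an event if any of them disagrees with the rest, so that a single compromised or poisoned model is not a single point of failure.

```python
# Minimal sketch: an ensemble of independent anomaly detectors instead of one
# "master" algorithm. The data is synthetic stand-in telemetry, not a real feed.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # benign telemetry stand-in
suspicious = rng.normal(loc=6.0, scale=1.0, size=(5, 4))        # attack traffic stand-in
X = np.vstack([normal_traffic, suspicious])

# Three detectors with very different internals, each trained on benign data only.
detectors = {
    "isolation_forest": IsolationForest(random_state=0).fit(normal_traffic),
    "one_class_svm": OneClassSVM(gamma="auto", nu=0.01).fit(normal_traffic),
    "local_outlier_factor": LocalOutlierFactor(novelty=True).fit(normal_traffic),
}

# Each detector votes: -1 = anomaly, +1 = normal. A sample is flagged if ANY
# detector calls it anomalous, so one failing or tampered model cannot hide an attack.
votes = np.column_stack([d.predict(X) for d in detectors.values()])
flagged = np.where((votes == -1).any(axis=1))[0]
print("flagged sample indices:", flagged)
```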

As the amount of data increases, adversarial AI is being used to hack AI systems.

For example, a study at the Harvard Medical School revealed that AI systems that analyze medical images are vulnerable to covert attacks. The study tested deep learning systems that analyze retina, chest, and skin images for diseases. Researchers presented “adversarial examples” and found that it was possible to change the images in a way that affected the results and was imperceptible to humans, meaning that the systems are vulnerable to fraud and/or attack.
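A toy example can show why such attacks are possible at all. The sketch below is not a reconstruction of the Harvard study; it uses plain NumPy and a linear stand-in for an image classifier to show how a perturbation far too small for a human to notice can still flip a model's prediction.

```python
# Toy illustration of an adversarial example against a linear stand-in classifier.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=256)   # weights of a toy linear "image" classifier
x = rng.normal(size=256)   # a toy "image"

def predict(image):
    return 1 if image @ w > 0 else 0

score = x @ w
# Smallest uniform per-pixel step (in the sign of the gradient) that crosses the
# decision boundary -- the core idea behind fast-gradient-sign attacks.
epsilon = abs(score) / np.abs(w).sum() * 1.01
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-pixel change:  ", round(float(np.abs(x_adv - x).max()), 4))  # tiny vs. pixel scale of ~1
```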

Corporate boards are becoming more and more engaged in cybersecurity risk oversight, since it affects the company's reputation. Their chief information security officers (CISOs) should provide insights on how they use ML to mitigate these risks. At the same time, the board needs to know which executives are part of a broad network of companies thinking about how to prevent adversarial attacks and which have insight into the vendor landscape focused on solving this problem.

How can and should we use AI to manage our workforce? How can jobs be upgraded instead of replaced?

The Mizuho Financial Group in Japan says it will use AI to replace 19,000 people by 2027 — about a third of its workforce. There is a growing worry in fintech that inherent bias in code could be baked into algorithms used to assess credit risk, whereby creditworthy customers could be denied credit based on race, gender, religion, and other factors.

The acceptance of AI is seriously jeopardized when executives fail to explain its benefits to employees. Instead of replacing people, AI will augment their jobs and create new ones. Repetitive tasks can be eliminated, and new tasks will arise that require good human judgement and domain expertise. For example, fraud detection applications will reduce the time people spend looking for anomalies yet increase their ability to decide what to do about deviations.

Companies that view AI purely as a cost-cutting opportunity are likely to deploy ML in all the wrong places, and in a compromised way. These companies will automate the status quo, rather than imagine a better world. They will cut jobs instead of upgrading roles.

The board needs to get a clear picture of how corporate management thinks about shifts within its employment base, what training strategies are in place to increase workforce competitiveness, and what social instruments are in place to address those left behind. Changes in employment usually happen gradually, often without a sharp transition. Boards should insist on a sound discussion about the future of the workforce while there is still time to design inclusive and forward-looking practices. This includes clarity around the future qualifications for entry-level jobs, models of part-time employment, access to expert freelancers and researchers, and which parts of the existing workforce are needed for training AI systems, e.g. data preparation and pre-processing. Considering good employment practices and providing a good future for today's and tomorrow's generations of employees should not be missing from a board agenda. Ultimately, this is a question of business sustainability.

Are we complying with regulations?

Overreacting to accidents, e.g. those involving autonomous vehicles, might create more problems than it solves. In March 2018, an Uber vehicle in autonomous mode hit and killed a woman crossing a street in Tempe, Arizona, in the first fatal accident involving an autonomous vehicle and a pedestrian. Uber immediately suspended all its self-driving pilots, resuming them only in mid-July. Such accidents have always been the negative side of technological progress. Just remember the French philosopher Paul Virilio, who famously argued that technological development is tightly linked to the idea of the accident: if you invent the plane, you also invent the plane crash. "The ethical concerns of innovation thus tend to focus on harm's minimization and mitigation, not the absence of harm altogether."

Regulatory compliance goes hand in hand with transparency. AI technology evolves, and in time we will see how deep learning models make their decisions. We might never resolve the old trolley problem. We might, however, agree that designing models without ethics and governance in mind will not create an absence of ethics or governance; it will create bad ethics and bad governance.

What industry-specific questions are there?

Healthcare

Mindshare Medical is launching AI tools to diagnose cancer using imaging data that is invisible to the human eye. Its product, RevealAI-Lung, was cleared for use in Canadian hospitals to assist with lung cancer screening.

Danish AI company, Corti, has developed a Machine Learning system that determines whether a victim is in cardiac arrest based on emergency calls. Corti’s system analyzes the words the caller uses, the tone of the caller’s voice, and any background noises on the line. The software correctly detected cardiac arrest in 93 percent of cases vs. the 73 percent success rate for human dispatchers. The system is being used in Copenhagen and is being pilot-tested in five other European countries this fall.

In June 2018, Babylon Health announced that an AI algorithm scored higher than humans on a written test used to certify physicians in the United Kingdom. The Royal College of General Practitioners, a healthcare industry body representing doctors, protested the idea that we should trust AI with our health.

Someday soon, doctors will have to weigh the ethical consequences of an AI-driven misdiagnosis, asking who will take responsibility: the doctor, or the machine?

Notably, the FDA recently signaled that it is taking a fast-track approval strategy for AI-based medical devices.

Corporate boards should be aware of regulatory trends, litigation and major product announcements in their industry, and have access to experts and leading lawyers who provide transparency and encourage discussions around possible scenarios.

Defense

The Pentagon currently has 600 AI-related initiatives, with 50 of them linked to so-called "killer robots." The controversy around Google contributing computer vision systems that enable such military applications is widely known, leading to the departure of Fei-Fei Li, Google Cloud's chief scientist for AI. The discussion on ethics should be led without hesitation. It should be considered, however, that access to military technology and research is not limited to companies with ethics in mind. In China, all major internet players have labs open to developing and testing military products.

Technology companies will hopefully grapple with ethical questions as they sell products and services to the military and intelligence community. Amazon, for example, is possibly one of the most important defense contractors in the US.

Amazon Web Services (AWS) has a contract with the US government called Secret Region, making AWS the first and only commercial cloud provider to serve workloads across the full range of government data classifications, including Unclassified, Sensitive, Secret, and Top Secret.

Boards deserve full transparency on such initiatives and an active part in discussions with the engineering groups and management designing and implementing defense systems. They should be aware of how their company thinks about allowing robots and software to determine the outcome of an armed conflict.

As an example, GoodAI specializes in training AI to reason and act ethically. This implies reacting to situations the machine previously did not encounter. This is not a trivial task. GoodAI polices the acquisition of values by providing a digital mentor, and then slowly ramps up the complexity of situations in which the AI must make decisions. The company is working on robots that might be used even in a military context.

GoodAI is just one of the organizations dedicated to understanding the ethical dimension of robotics and AI that have evolved across the world in recent years, e.g. the Foundation for Responsible Robotics, the Global Initiative on Ethical Autonomous and Intelligent Systems, and the Future of Life Institute, which published the Asilomar AI Principles, developed in conjunction with the 2017 conference.

A good board will ensure that its company actively participates in discussing ethical standards for autonomous systems, donates money to nonprofits with a similar mission, and actively communicates with the groups of employees arguing against engagement with the military and defense sectors.

What geography-specific questions are there?

Corporate boards understand that certain geographies do not apply the same ethical considerations when it comes to surveillance, freedom of speech, and freedom of movement.

Google is working on a project called Dragonfly, a censored search engine in China. The search engine could reportedly fully block certain results for searches such as “freedom of information” or “peaceful protest.” Google employees signed a letter protesting the work, stating:

“[Project Dragonfly] raises urgent moral and ethical issues… Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.”

Google decided to hire an investigations analyst on its Trust and Safety team to assess the company’s ethical machine learning practices.

Boards will be increasingly confronted with calls for compromise when it comes to doing business in China and several other geographies, where ethical views on what is right diverge from what businesses are used to in North America and Western Europe. This requires a broader discussion with investors, who will in turn have to address questions on ethics, reputation and sustainability.

Have an opinion? 

We encourage readers to share their comments by joining the discussion on LinkedIn


Anastassia Lauterbach is a Fellow of Salzburg Global Seminar, having attended the 2018 program of the Salzburg Global Corporate Governance Forum. Dr. Lauterbach is a director of Dun & Bradstreet and the chief executive officer and founder of 1AU-Ventures, where she advises U.S.- and Europe-based artificial intelligence and cybersecurity companies and investment funds. She also serves on the board of Wirecard AG (German DAX), and she is chairwoman of Censhare AG. She is a senior advisor for artificial intelligence at McKinsey & Company. Previously, she served as senior vice president Europe at Qualcomm, senior vice president of business development and investments at Deutsche Telekom AG, where she also served as a member of the executive board, and executive vice president of group strategy at T-Mobile International AG. In April 2018, Dr. Lauterbach published her book, Artificial Intelligence Imperative: A Roadmap for Businesses. She has a Ph.D. in linguistics and psychology from the Rheinische Friedrich-Wilhelms-Universität Bonn, Germany, and a diploma in linguistics from Lomonosov Moscow State University, Russia.


The Salzburg Questions for Corporate Governance is an online discussion series introduced and led by Fellows of the Salzburg Global Corporate Governance Forum. The articles and comments represent opinions of the authors and commenters, and do not necessarily represent the views of their corporations or institutions, nor of Salzburg Global Seminar. Readers are welcome to address any questions about this series to Forum Director, Charles E. Ehrlich: cehrlich@salzburgglobal.org To receive a notification of when the next article is published, follow Salzburg Global Seminar on LinkedIn or sign up for email notifications here: www.salzburgglobal.org/go/corpgov/newsletter
