When Microsoft shared in 2018 that certain “deficiencies” surrounding its artificial intelligence practices could “subject[] us to competitive harm, legal liability, and brand or reputational harm,” it set the stage for other companies to self-report perceived risks associated with developing, using, and producing AI technologies. Disclosing risks, a Securities and Exchange Commission (SEC) requirement imposed on public companies since 2005, raises a number of important considerations for both public and private companies, including transparency, accuracy, and the degree of speculation that may be acceptable when discussing AI impacts. Although companies in other industry segments face similar concerns, AI technologies present unique risk assessment and disclosure challenges, due in large part to the learned nature of their black-box underpinnings and their potential for unforeseen or unintended uses.
In the absence of clear regulations, guidance, and court decisions expressing best practices for risk assessments and disclosures for AI companies, risks tend to shift to investors and public consumers of AI technologies, who are much less able to perceive potential risks than the companies that create them. A new proposed SEC rule may change that. Below is a discussion of ways a new rule, properly emphasizing “material” risks, could improve risk assessments conducted by AI technology companies and their risk disclosures.
Risk Factor Disclosure Requirements
Providing qualitative risk factor disclosures in SEC filings is intended to help investors make informed decisions about a company’s investment offerings. But public risk disclosures are used by more than stock investors. The media, for instance, often quote financial filings, including company statements concerning risk. As a result, companies that make risk disclosures are careful about what they say, and they tend not to go beyond basic regulatory requirements.
Currently, regulations require companies to provide a concise discussion of the “most significant” risk factors that are unique to them; that is, risks that would not generally apply to any stock issuer or any offering. The discussion is supposed to be “accurate and candid.” According to the SEC, companies should not just mention risks, but disclose the specific facts and circumstances that make a given risk material to the company. A company is not required to present risk factors for risks that it is unaware of. Similar risk disclosure requirements may be imposed on private companies by private equity investors during their due diligence investigations and for other reasons.
Failing to comply with applicable disclosure requirements, including being opaque or ignoring risks, can expose a company to shareholder and other lawsuits, as well as bad publicity. Even so, many company risk factor disclosures lack specifics and are unclear. A 2016 analysis by Ernst & Young found that:
“Investors frequently say risk factors sections [of 10-K annual submissions] are overly generic and confusing. Indeed, a recent EY/IIRCi review of risk factors disclosures of 50 large companies, The ‘Corporate Risk Factor Disclosure Landscape,’ found these disclosures often do not provide clear, concise and insightful information. The review found disclosures typically are not tailored to the specific company. Instead, they tend to represent a listing of generic risks, with little to help investors distinguish between the relative importance of each risk to the company. In addition, many companies use language that is repetitive and laden with legal and compliance terminology where plain English could better help investors understand and evaluate company-specific risks.”
In Microsoft’s 2018 annual report, the company stated that AI algorithms in general may be flawed, datasets may be insufficient or contain biased information, inappropriate or controversial data practices could impair the acceptance of AI solutions, and “[s]ome AI scenarios present ethical issues.” The company went on to surmise that, “If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”
Those statements reflected the general sentiment of many scholars, lawmakers, civic organizations, and members of the public about AI’s benefits and harms in 2018. But they raise unanswered questions. For instance, which algorithms were potentially flawed and why? What data sets were used to train those algorithms and where did they come from? What specific ethical issue scenarios were confronted and how did they arise? And how do any of those issues materially affect the company’s financial prospects? Those questions may be hard to answer, in part because, as noted above, the very nature of AI-based technologies makes risks difficult to foresee and assess, which in turn complicates qualitative risk analysis for some AI companies.
Google’s parent, Alphabet, addressed in its 2019 annual report to the SEC certain AI-related risk factors. “[N]ew products and services,” it wrote, “including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results. Our operating results may also suffer if our innovations are not responsive to the needs of our users, advertisers, customers, and content providers; are not appropriately timed with market opportunities; or are not effectively brought to market.”
Manpower Group, Inc., described its AI risk factors from the perspective of workforce impacts caused by AI. In its 2019 SEC submission, the company wrote, “Our success depends on our ability to keep pace with rapid technological changes in the development and implementation of our services and solutions. For example, rapid changes in the use of artificial intelligence and robotics are having a significant impact on some of the industries we serve, and could have significant and unforeseen consequences for the workforce services industry and for our business. There is a risk that these, or other developments, could result in significant disruption to our business model, and that we will be unprepared to compete effectively.”
Those examples illustrate common techniques of categorization and cause-and-effect in communicating risk. Expressing risk categorically can, however, lack concreteness. Similarly, cause-and-effect statements can lack the specific facts and circumstances that link actions to results, leaving readers to speculate. Blanket assertions, such as “significant impact,” are uninformative without quantified impacts. Indeed, “significant” can confuse readers when used outside a statistical context (where significance has a precise, computable meaning); in a risk disclosure it conveys no defined magnitude. Other ubiquitous industry terms, like “disruption,” tend to be overused and vague.
The above few examples illustrate the difficulty companies face when trying to satisfy Regulation S-K in the modern era of AI technologies. While some risks associated with developing and using AI technologies have been foreseeable, such as robotic process automation displacing skilled workers, other risks have not been as easy to predict. Take, for example, the adaptation of generative adversarial networks (GANs) and autoencoders to the creation of so-called deepfake videos, courts’ use of automated decision systems that have had disproportionate impacts, and the deployment of conversational chatbots that went rogue shortly after launch. In some situations, only after an AI technology has been deployed are its adverse impacts, and thus its inherent risks, observed.
Changing the Emphasis to Materiality
In response to concerns, the SEC announced on August 8, 2019, that it would propose rule amendments to modernize risk factor disclosures that companies are required to make pursuant to Regulation S-K. The proposed amendments, the SEC says, are intended to update the rules to improve disclosures for investors, to simplify compliance efforts for registrants, and to “elicit more relevant disclosures.” Specifically, the proposed rule changes would “require summary risk factor disclosure if the risk factor section exceeds 15 pages; refine the principles-based approach of [the] rule by changing the disclosure standard from the ‘most significant’ factors to the ‘material’ factors required to be disclosed; and require risk factors to be organized under relevant headings, with any risk factors that may generally apply to an investment in securities disclosed at the end of the risk factor section under a separate caption.”
The key change to the existing rule is the new emphasis on materiality. If the test for materiality is to be criteria-based, what AI-specific criteria (numerical or other) should companies use to decide if a particular AI-related risk needs to be disclosed and discussed? To avoid a one-size-fits-all regulation, companies may need to use criteria unique to their business and industry. Regulators and stakeholders may have different opinions about whether those companies need to be transparent about their criteria.
For example, a significant change in a training dataset used to develop a machine learning model can have a measurable effect on a deployed model’s ability to make relevant decisions. If the measured change exceeds a threshold criterion, the obligation of transparency might require the company to disclose the source of the new data, what percentage of the dataset is new, and the expected impact of the new data on the model’s output. Such disclosures could include a robust description rather than a cursory cause-and-effect “if-then” statement, but in doing so might reveal proprietary or sensitive company information.
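As a rough illustration of what such a criterion could look like in practice, the sketch below measures what fraction of a dataset is new and compares it against a disclosure threshold. The 10 percent threshold, the record identifiers, and the field names are all hypothetical; nothing here is prescribed by the SEC or any existing rule.

```python
# Hypothetical sketch: flag a training-data change as potentially
# warranting disclosure when the share of new records exceeds a
# chosen threshold. Threshold and data are illustrative only.

def new_data_fraction(old_ids: set, new_ids: set) -> float:
    """Fraction of the current dataset that was not in the prior version."""
    if not new_ids:
        return 0.0
    return len(new_ids - old_ids) / len(new_ids)

def disclosure_flag(old_ids: set, new_ids: set, threshold: float = 0.10) -> dict:
    """Return the measured change and whether it crosses the threshold."""
    frac = new_data_fraction(old_ids, new_ids)
    return {"new_fraction": round(frac, 3), "exceeds_threshold": frac > threshold}

# Example: 3 of 10 current records are new -> 30% change, above 10%.
result = disclosure_flag(old_ids={1, 2, 3, 4, 5, 6, 7},
                         new_ids={1, 2, 3, 4, 5, 6, 7, 8, 9, 10})
```

A real criterion would likely measure distributional drift or model-output change rather than raw record counts, but the structure, a quantified measurement compared against a pre-committed threshold, is the point.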
Changes to the SEC’s disclosure rule could also address how AI-related risks, once identified as material, are communicated. Although identifying risks categorically can help structure risk factor disclosures, categories themselves may not offer much meaningful takeaway for investors and the public.
Consider, for example, “reputational harm” as a risk category. It is well established that a tech company’s decision to offer an AI technology in a specific market can negatively affect its reputation in that market and beyond if something unexpected happens. Explaining how the technology can affect a company’s reputation could add clarity. How does the technology’s collection of personal biometric or behavioral data make it controversial? How could customers who buy the technology use it in a way that is perceived as counter to company and personal values? How could the technology’s actual accuracy and utility fall short of the company’s advertising and promotional claims? If a risk is deemed material, a company’s ability to effectively communicate how its reputation has been harmed or could be harmed in the future (avoiding speculation as much as possible) may make the difference between adding clarity and compounding confusion.
A focus on materiality could also result in fewer generic and speculative risk discussions. In theory, applying criteria to assess materiality requires at least some degree of specificity in identifying risks unique to a company’s operations (and risks that have external impacts). Once those specifics are better understood, they can be communicated to investors and the public. Moreover, rather than being just another item checked during preparation of SEC filings, risk assessment and disclosure can improve overall risk management across an enterprise and help companies plan better.
Although the quarterly and annual filing requirements under current SEC regulations generally require regular risk assessment updates, such assessments are often based on hindsight following adverse events. A criteria-based materiality approach, in contrast, may let companies perform qualitative risk assessment using a continuous feedback model. Assuming the right data are being collected, robust monitoring for indications and warnings of events related to a company’s AI products and services could provide feedback that can be compared against risk criteria, helping identify problems early. That same feedback can refine the criteria themselves, so that the downstream benefits and harms of an AI product or service can be better assessed.
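The feedback loop described above can be sketched as a simple tally of incoming adverse-event reports compared against per-category risk criteria. The categories and limits below are invented for illustration; they are not drawn from any SEC rule or real disclosure regime.

```python
# Hypothetical sketch of a criteria-based monitoring loop: adverse-event
# reports about an AI product are tallied per risk category and compared
# against pre-set criteria. Categories and limits are illustrative only.
from collections import Counter

RISK_CRITERIA = {            # max tolerated reports per review period
    "biased_output": 5,
    "privacy_complaint": 3,
    "misuse_report": 10,
}

def assess(reports: list) -> list:
    """Return the risk categories whose report counts exceed criteria."""
    counts = Counter(reports)
    return sorted(cat for cat, limit in RISK_CRITERIA.items()
                  if counts[cat] > limit)

# Four privacy complaints (limit 3) trip the criterion; two biased-output
# reports (limit 5) do not.
flagged = assess(["privacy_complaint"] * 4 + ["biased_output"] * 2)
```

Run continuously, a loop like this gives a company an early, criteria-anchored signal of when a category of risk may be approaching materiality, rather than waiting for hindsight after an adverse event.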
Often, companies receive feedback, both positive and negative, from customers, watchdog groups, and others about their products and services. The federal Food and Drug Administration’s (FDA) Adverse Event Reporting System (FAERS) provides a means for consumers, healthcare professionals, and manufacturers to submit reports to the FDA concerning problems with drugs after they have been launched. Under existing FDA rules, a manufacturer that receives such a report from a healthcare professional or consumer is required to forward it to the FDA. A similar mechanism might be useful to the SEC and to companies that must monitor for risk concerns.
Company qualitative risk assessments regarding AI technologies, when properly conducted and reported, help investors and the public. Identifying risk from the perspective of materiality rather than what is perceived to be the “most significant” may help companies gain insight into potential risks, improve risk disclosures, and reveal areas of concern needing enhanced regulatory oversight.