On April 28, 2022, Covington convened experts across our practice groups for the Covington Robotics Forum, which explored recent developments and forecasts relevant to industries affected by robotics. One segment of the Robotics Forum covered risks of automation and AI, highlights of which are captured here. A full recording of the Robotics Forum is available here until May 31, 2022.
As AI and robotics technologies mature, their use-cases are expected to expand into increasingly complex areas and to pose new risks. Because lawsuits to date have settled before courts could decide liability questions, there is not yet settled case law identifying where liability rests among robotics engineers, AI designers, and manufacturers. Scholars and researchers have proposed addressing these issues through products liability and discrimination doctrines, as well as through new legal remedies specific to AI technology and particular use-cases, such as self-driving cars. Proposed approaches for liability under existing doctrines have included:
- Strict Liability Approach – Manufacturer Liability
- Courts could apply the “consumer expectations” test, under which manufacturers would be liable for defects in design or software that create unreasonably dangerous conditions. Under this approach, plaintiffs would not need to show a reasonable alternative design. Some argue that this approach would dampen innovation.
- Negligence Approach
- Courts could apply the “risk-utility” test, under which plaintiffs must show that a reasonable alternative design could have reduced the foreseeable risks of harm posed by the product. Courts also could perform a cost-benefit analysis that balances the cost to the manufacturer of adopting an alternative design against the amount of harm that design would have reduced (a stylized version of this balancing is sketched after this list).
- Breach of Warranty Approach
- Commercial remedies could apply to robotics-related accidents. The Uniform Commercial Code (“UCC”) governs many aspects of product warranties and commercial transactions, and some have argued that it also could govern robotics liability. Express warranties are created when a seller promises something to a prospective buyer in association with the sale of goods.
- Multiple Actor – Joint Liability
- Under this approach, various parties involved in the design and use of a robotics product could be held liable for harms associated with the product’s performance or malfunction. Such an approach could prove particularly challenging for complex technologies, such as self-driving cars.
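For readers who want the risk-utility balancing made explicit, the following is a stylized illustration only, echoing the Hand formula from negligence law; the symbols are introduced here solely for exposition and do not describe how any court actually computes liability.

```latex
% Stylized cost-benefit balancing (illustrative only; symbols introduced for exposition):
%   C_alt    -- manufacturer's cost of adopting the alternative design
%   \Delta P -- reduction in the probability of harm attributable to that design
%   L        -- magnitude of the harm if it occurs
% In this stylization, the alternative design is cost-justified when its cost is
% less than the expected harm it avoids:
\[
  C_{\mathrm{alt}} \, < \, \Delta P \cdot L
\]
```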
Stakeholders also must be mindful of how human bias can affect robotics and AI. Bias in AI can arise from statistical bias, where an algorithm produces results that are not representative of the true population, or from social bias, where an algorithm treats groups unequally within a system. A number of data practices can result in AI bias, such as: (1) relying on past biased data in a machine learning algorithm; (2) collecting data for use in AI that is non-representative or not impartial; (3) making broad generalizations with respect to data inputs or results; (4) relying on factors that become a proxy for protected classes based on correlations in society; and (5) using the neutral face of AI to mask intentional discrimination. The good news is that companies can proactively remedy potential bias or discrimination by avoiding these pitfalls, testing algorithms on diverse population sets, and following evolving legal developments and best practices.
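As one illustration of what such testing can look like in practice, the sketch below is a minimal example, assuming a pandas DataFrame with hypothetical "group" and "prediction" columns; the 0.8 threshold loosely mirrors the EEOC's "four-fifths" guideline and is used here for illustration only, not as a statement of any legal standard.

```python
# Minimal sketch: auditing a model's outcomes for disparate impact across groups.
# Assumes a pandas DataFrame with hypothetical columns "group" (a demographic
# attribute used only for auditing) and "prediction" (1 = favorable outcome,
# 0 = unfavorable). The 0.8 threshold loosely mirrors the EEOC "four-fifths"
# guideline; it is illustrative, not a legal standard.
import pandas as pd

def disparate_impact_report(df: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    # Favorable-outcome rate for each group.
    rates = df.groupby("group")["prediction"].mean()
    # Compare every group's rate to the most favored group's rate.
    ratios = rates / rates.max()
    report = pd.DataFrame({"favorable_rate": rates, "impact_ratio": ratios})
    report["flagged"] = report["impact_ratio"] < threshold
    return report

# Example with fabricated data, for illustration only.
audit_df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1, 1, 0, 1, 0, 0, 0],
})
print(disparate_impact_report(audit_df))
```

In practice, a check like this would be only one input among many; which disparities matter, and under what legal theory, depends on the use-case and the applicable doctrine.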
We will provide additional updates about the 2022 Covington Robotics Forum and other developments related to robotics on our blog. To learn more about our commercial litigation work, please visit the Commercial Litigation page of our website. For more information on developments related to AI, IoT, connected and autonomous vehicles, and data privacy, please visit our AI Toolkit and our Internet of Things, Connected and Autonomous Vehicles, and Data Privacy and Cybersecurity websites.