Artificial intelligence is now part of everyday business life. The law is typically slow to adapt, while this technology is evolving at lightning speed. That combination is creating a host of legal issues in the areas of commercial contracts, consumer products, products liability, privacy and data security, intellectual property, bankruptcy, employment, employee benefits, and antitrust.
Artificial intelligence has become an important and widely used tool across a host of technologies and industries. While AI has existed in some form for decades, its scope and application have expanded rapidly in recent years. For example, breakthroughs in generative AI technologies using large language models (LLMs), like ChatGPT, have been made possible by increases in computing power, improved algorithms, and the accessibility of large volumes of data.
This article discusses key issues that companies and their counsel should consider where AI-related technology may be relevant to commercial contracts. For more on US federal and state regulation of AI, including likely trends for the near future, talk to your lawyer.
For more information, please read our blog on AI in the Workplace.
AI Background
AI Defined
While there is no uniform definition of AI, the term generally refers to computer technology with the ability to simulate human intelligence to:
- Analyze data to reach conclusions about it, find patterns, and predict future behavior.
- Learn from data and adapt to perform certain tasks better over time.
At its core, AI is computer software programmed to execute algorithms (sets of code with instructions to perform specific tasks) to, among other things:
- Recognize patterns.
- Reach conclusions.
- Make informed judgments.
- Optimize practices.
- Predict future behavior.
- Automate repetitive functions.
Generative AI refers to a type of AI that uses unsupervised learning algorithms to create new digital content, such as images, video, audio, text, or computer code, based on existing content.
AI Examples
AI is not a single technology. It exists in many different forms through different functions and applications. Some examples of AI include:
- Natural language processing (NLP), such as the technology used to enable plain English legal research on our legal software Thomson Reuters Westlaw Edge.
- Logical AI/inferencing, which creates decision trees based on user input, such as online tax preparation programs.
- Machine learning, which is AI that learns from its past performance, such as predictive text.
- Artificial neural networks, used, for example, in image recognition technology.
- Machine perception and motion manipulation, used, for example, in industrial robotics.
- Generative AI technologies, such as ChatGPT, that often utilize large language models (LLMs).
These and other fundamental AI technologies can be used to perform various functions, including:
- Content generation, including text, video, audio, and computer code.
- Expertise automation.
- Image recognition and classification.
- Speech-to-text and text-to-speech conversion.
- Text analytics and generation.
- Voice-controlled assistance (like Amazon Echo and Google Home).
- Language translation.
For example, a company may use AI to assist its contracting and procurement processes by:
- Reviewing a large portfolio of existing contracts and categorizing them according to given criteria.
- Extracting data from contracts for analysis.
- Managing renewals and contract updates.
- Identifying specific contracts that are applicable to a due diligence request.
- Maintaining consistency of contractual language, such as defined terms, across a contract portfolio.
- Tracking variations and exceptions in individual contracts, both the company’s and those of its vendors.
- Facilitating the company’s compliance monitoring.
- Contract drafting (subject to professional review).
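As a simplified illustration of tasks like data extraction and maintaining consistent defined terms, the sketch below pulls definitions of the form `"Term" means ...` out of contract text and flags any term that is used but never defined. This is a hypothetical, regex-based toy, not a real contract-analysis product, which would rely on NLP models rather than pattern matching.

```python
import re

# Matches definitions like: "Services" means ...  or  "Effective Date" shall mean ...
DEFINITION = re.compile(r'"([A-Z][A-Za-z ]*)"\s+(?:means|shall mean)')

def defined_terms(text: str) -> set:
    """Collect all defined terms found in the contract text."""
    return set(DEFINITION.findall(text))

def undefined(text: str, terms_in_use: list) -> list:
    """Return terms from `terms_in_use` that the contract never defines."""
    defined = defined_terms(text)
    return [t for t in terms_in_use if t not in defined]

sample = ('"Services" means the consulting services described in Exhibit A. '
          '"Effective Date" shall mean January 1, 2025.')
print(defined_terms(sample))  # e.g. {'Services', 'Effective Date'}
print(undefined(sample, ["Services", "Confidential Information"]))
# -> ['Confidential Information']
```

Even this crude check hints at how software can enforce consistency across a large contract portfolio far faster than manual review, though the output would still be subject to professional review.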
Legal departments and law firms may also use AI to automate a variety of time-consuming or repetitive tasks, including:
- Document production, including e-discovery.
- The analysis of key law department and law firm metrics and key performance indicators.
- Research and other tasks in support of their companies’ or firms’ compliance efforts.
- Legal research.
- Developing litigation strategy.
- Drafting basic legal documents (subject to professional review).
Advertisers and marketers may also use AI to predict market trends and to make purchasing recommendations to their customers.
Commercial Transactions Involving AI
Organizations using AI in their operations, products, or services can develop AI capability themselves, license AI from a third party, or combine both approaches. Acquiring an AI system from a third party for use in business typically requires a software license and, depending on the nature of the AI product or service, may involve agreements for the purchase, lease, or license of:
- Equipment.
- Services.
- Data.
When AI is a key component of the transaction, certain aspects of these agreements may raise unique negotiation issues, including:
- Risk allocation provisions, such as:
- representations and warranties;
- indemnification; and
- limitations of liability.
- Use and protection of data.
Risk Allocation
Representations and Warranties
In licensing an AI system, service, or AI-enabled product, the parties must carefully consider the vendor’s representations and warranties. Customers typically incorporate AI into mission-critical functions that justify the costs of implementing and integrating the AI into the customer’s business, such as by:
- Automating production lines.
- Streamlining supply chain and logistics processes.
- Making product marketing decisions based on data analysis.
The vendor’s representations and warranties about the AI system’s performance and output must adequately address the potential business impact of a system failure. For physical equipment with embedded AI, the representations and warranties must also cover injuries and other damages caused by AI-enabled machines and devices.
The customer should pay close attention to the non-infringement warranty. Typically, a vendor of software or software-based services will represent that it owns or otherwise has sufficient rights to allow the customer to use or benefit from the software. An AI system, however, may itself produce infringing code when performing its functions. Infringement liability in this situation is unclear, so the parties must ensure they understand how the representations and warranties allocate that liability.
Indemnification
Indemnification provisions seek to allocate liability to the party with greater culpability for, or the best opportunity to prevent, the event that results in liability. As with infringement, when an AI system’s decision-making process results in a liability, it may be difficult to determine whether the provider of the AI or its user caused the event giving rise to liability. The parties to an AI system or services agreement should carefully consider how to allocate liability for the AI’s functionality.
Limitations of Liability
Limitation of liability provisions are a mechanism by which the parties agree to put an upper limit on their potential liability under a contract. The primary goal of a liability cap is to ensure that a party is not subject to unlimited liability, without depriving an aggrieved party of adequate recourse for any breaches or related liability. In many agreements, the vendor is more concerned about unlimited liability, because the customer’s primary obligation is to pay for the services or products, so the bulk of its potential liability is inherently limited by the amount of its payment.
In a typical limitation of liability provision:
- Some liabilities, such as lost sales, are excluded entirely and are therefore unavailable to an aggrieved party.
- Some liabilities, such as for death, personal injury, or fraud, are not subject to any limitations.
- Remaining liabilities are capped at either a fixed dollar amount or a calculated amount, often based on annual or annualized fees for the service.
The liability cap should be high enough to dissuade a party from breaching the contract, but not so high as to give an aggrieved party an unreasonable windfall in the event of a breach. With an AI system, the risks of a system or equipment failure can be catastrophic. For example, if a production line operated by AI-enabled robots malfunctions or shuts down, the cascading effects can devastate the company’s business. If an AI data analytics system inadvertently discloses the personal information of downstream users, the customer may face third-party data breach claims from those users and sustain serious reputational damage. The negotiations over liability limitations therefore deserve careful attention.
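The cap mechanics described above can be sketched numerically. In the hypothetical below, excluded damages (such as lost sales) are unrecoverable, uncapped categories (death, personal injury, fraud) pass through in full, and everything else is capped at a multiple of annualized fees. The multiplier and all dollar figures are invented for illustration; actual provisions vary widely by deal.

```python
def recoverable(claims: dict,
                annualized_fees: float,
                cap_multiplier: float = 2.0,
                excluded: frozenset = frozenset({"lost sales"}),
                uncapped: frozenset = frozenset({"death", "personal injury", "fraud"})) -> float:
    """Estimate the aggrieved party's recovery under a typical cap structure."""
    cap = cap_multiplier * annualized_fees
    # Uncapped categories are recoverable in full.
    uncapped_total = sum(v for k, v in claims.items() if k in uncapped)
    # Everything else (minus excluded categories) is subject to the cap.
    capped_total = sum(v for k, v in claims.items()
                       if k not in uncapped and k not in excluded)
    return uncapped_total + min(capped_total, cap)

claims = {"lost sales": 500_000,       # excluded: unrecoverable
          "repair costs": 900_000,     # capped
          "personal injury": 250_000}  # uncapped
print(recoverable(claims, annualized_fees=300_000))  # cap is 600,000 -> 850000.0
```

Here the 900,000 in capped damages is cut down to the 600,000 cap (two times annualized fees of 300,000), while the personal injury claim is recovered in full, illustrating why customers facing catastrophic AI-failure scenarios push for higher caps or additional uncapped carve-outs.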
Insurance
Parties to commercial contracts use insurance as a mechanism for limiting their liability exposure by shifting risk to an insurer. Insurance also provides the other party some assurance that the insured party will be able to meet its indemnification and other obligations. Most product manufacturers and sellers carry various forms of insurance coverage, including:
- Commercial general liability (CGL) insurance.
- Cyber insurance.
- Errors and omissions (E&O) coverage.
- Business interruption coverage.
- Directors and officers (D&O) insurance.
There are many ways an AI system can fail or cause damage. All insured parties must therefore determine which coverage, if any, applies to a given situation and, if none does, whether they can expand or add coverage or whether coverage is even available. The insurance industry is evolving to address AI-related risks.
Data
AI systems rely on large quantities of data. An AI system with more data access can, among other things, recognize more sophisticated and detailed patterns and make more informed predictions. AI-based services agreements therefore often include provisions to allow the vendor to accumulate and aggregate each customer’s data with data from its other customers. All the vendor’s customers benefit from a larger universe of data, but customers are often reluctant to authorize the vendor to reuse their data for the benefit of its other customers, some of whom may be competitors.
Vendors and service providers should be prepared to negotiate data ownership and use terms allowing for reuse of customer data to enhance the AI services, typically with limitations regarding:
- Aggregation with other customers’ data.
- Anonymization so that no data can be identified to a specific customer.
- Protection of the confidentiality of the customer’s data.
If the services allow the customer to access the aggregated data of the vendor’s entire customer base, the customer should expect to agree to the same access restrictions it wants to impose on the vendor’s other customers.
If the data in question includes personal data, such as purchasing history or health care related data of the customer’s downstream consumer or patient base, then the parties may need to include additional protections and restrictions to comply with privacy laws and regulations.
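A minimal sketch of the anonymization limitation described above might replace each customer's direct identifiers with a salted hash before records are aggregated, so that pooled data cannot be traced back to a specific customer without the secret salt. This is illustrative only: pseudonymization of this kind may not by itself satisfy applicable privacy laws, and a real deployment would need a full de-identification review.

```python
import hashlib

# Hypothetical secret kept by the vendor; never shared with customers.
SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymize(customer_id: str) -> str:
    """Replace a customer identifier with a salted, truncated hash."""
    digest = hashlib.sha256(SECRET_SALT + customer_id.encode()).hexdigest()
    return digest[:16]

def aggregate(records: list) -> list:
    """Strip direct identifiers, keeping only a pseudonym and the metric."""
    return [{"customer": pseudonymize(r["customer"]),
             "spend": r["spend"]} for r in records]

rows = [{"customer": "Acme Corp", "spend": 120.0, "email": "ops@acme.example"},
        {"customer": "Acme Corp", "spend": 80.0, "email": "ops@acme.example"}]
out = aggregate(rows)
# The same customer maps to the same pseudonym, so records can still be
# grouped for analysis, but the name and email never enter the pool.
```

The design choice matters contractually: a consistent pseudonym preserves the analytic value the vendor wants (patterns across a customer's records), while the confidentiality and anonymization limitations negotiated in the agreement determine what identifying detail may survive aggregation.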
Consumer Products and AI
AI has proliferated in the consumer products market with the growth of the Internet of Things (IoT). IoT devices are “smart” consumer products that incorporate AI and internet connectivity into their functionality. Examples include:
- Amazon’s Echo with Alexa.
- Apple products with Siri.
- “Smart home” devices that control functions like:
- temperature;
- lighting; and
- home security.
These devices include a physical product, software, and internet or cloud connectivity, and raise several issues, including:
- Whether the device is a product, a service, or a hybrid of both. This may determine whether and to what extent product liability laws apply.
- How the device manufacturer and the AI software developer should allocate warranty liability.
- How to comply with applicable regulations.
- How different forms of insurance coverage are affected.
Product, Service, or Both
It is not always clear whether a given IoT device is a product, a service, or both. As a product, and possibly as a hybrid of product and service, an IoT device may be subject to the same product liability legal standards as any other product. These standards may include the potential for the manufacturer to face strict liability for damages caused by a defect or flaw in the device. If, however, the IoT device is treated as a service, it is unclear if courts would consider the AI to be subject to the same legal standards.
Product Warranties
It may not be clear who in the product development stream is responsible for product warranties. The software developer who created the incorporated AI may be liable to the end user under express or implied warranties. Alternatively, the device manufacturer may bear sole warranty liability to the end user and have to seek indemnification from the software developer under their own agreement.
Regulation
Because of the uncertain nature of IoT devices, it is not clear whether the Consumer Product Safety Commission (CPSC), the Federal Trade Commission (FTC), the Federal Communications Commission (FCC), or some other entity should be responsible for monitoring and regulating IoT devices.
Insurance
Insurance considerations apply at the consumer or homeowner level as well as the manufacturer or seller level.
Consumers may use:
- Self-driving cars, potentially implicating automobile insurance (covering injury, death, and property damage liability).
- IoT devices, which homeowners insurance may address.
Insurance companies and consumers should carefully review existing policies to determine if any given incident is covered.
Conclusion
AI is certainly creating a host of legal issues for businesses, along with great opportunities. This article only touches the surface of what we are seeing. People regularly ask me about AI and law firms and whether I think a robot will replace a lawyer. Perhaps someday my robot will read this and weigh in on that answer. At present, I see AI creating far more issues for clients and their counsel to address. Applying the law to the situation, and knowing what tools to use and how to use them, remain firmly in the lawyers’ domain. While I cannot predict the future, I have always thought that as long as there are humans with human nature, we will need lawyers.