Covering Scientific & Technical AI | Monday, January 20, 2025

AI Risks in Business: Why Strong Contracts Are a Must 

Companies integrating AI into their operations face serious challenges they cannot afford to ignore. AI introduces complexities that traditional IT agreements weren't built to handle, and failing to account for these issues can lead to major consequences.

One of the biggest concerns is how AI uses data. Many AI models don’t just process inputs—they learn from them. That means any information fed into an AI system could end up being used for training, sometimes even resurfacing in responses to other users. If businesses don’t have clear terms on how their data is handled, stored, and protected, they could unintentionally expose proprietary or sensitive information.

Then there’s bias. AI doesn’t think for itself—it relies on the data it’s trained on. If that data contains bias (and most data sets do), the AI’s outputs will reflect it. This creates serious issues, especially for businesses using AI in decision-making. Without a clear plan for monitoring and mitigating bias, companies could end up with results that are inaccurate, discriminatory, or legally questionable.

Security is another growing concern. AI isn’t just being used to prevent cyberattacks—it’s being weaponized to launch them. Attackers are using AI to automate and scale cyber threats, making them more sophisticated and harder to detect. Businesses adopting AI must ensure their agreements require vendors to meet strong cybersecurity standards, continuously monitor for threats, and maintain incident response plans.

And let’s not forget that AI is constantly evolving. Unlike traditional software that remains static between updates, AI models change as they learn. This means performance, reliability, and even the risks associated with AI can shift over time. Businesses need agreements that reflect the dynamic nature of AI, ensuring they’re not locked into contracts that don’t account for future risks.

These challenges make liability a major gray area. When an AI-driven system fails—whether through bias, a data leak, or a bad decision—who is responsible? Often, the company using the AI is held liable, even if the failure was due to the model’s design, training data, or a third-party vendor. Many contracts fail to clarify liability, leaving businesses exposed to risks they didn’t anticipate. In some cases, contracts even shift responsibility away from the AI vendor, limiting recourse for the business using the tool.

The Role of Contracts in Managing AI Risk 

While businesses can’t predict every issue AI might cause, they can protect themselves with the right contract language. Instead of relying on outdated agreements that don’t account for AI’s unique risks, companies need contract terms that address data usage, liability, security, and the evolving nature of AI.

A well-structured agreement should ensure vendors take responsibility for their AI’s performance, security, and ethical use. It should clearly define ownership of AI-generated content, limit third-party risks, and provide transparency into how the AI system operates over time. Without these safeguards, businesses run the risk of being caught in legal and financial disputes over AI failures they never saw coming. 

AI is changing the way companies operate, but it also demands a new way of thinking about contracts. Businesses that take a proactive approach — by embedding AI-specific protections into their agreements — will be in a much stronger position to benefit from AI while minimizing risk.

About the Author

Rob Scott is the CEO and Founder of Monjur, Inc., a pioneering legal technology platform that is transforming contract management and compliance for businesses. A seasoned attorney with a deep background in litigation and technology law, Rob has been recognized as Technology Attorney of the Year by Finance Monthly and named a Top Entrepreneur to Watch by USA Today.

Before launching Monjur, Rob built a distinguished career as a trial attorney, representing businesses and technology companies in complex litigation. He holds an AV Preeminent Rating from Martindale-Hubbell, the highest peer rating standard, reflecting his exceptional legal expertise and ethical standards. Under Rob’s leadership, Monjur has rapidly scaled, now serving nearly 600 businesses and providing them with innovative legal solutions that streamline contract workflows and ensure compliance.

Rob is also the host of Talk Tech with Rob Scott, a podcast where he explores the latest developments in legal tech, risk management, and entrepreneurship with industry leaders and innovators. A recognized thought leader, Rob regularly contributes to industry publications, sharing insights on the future of legal automation, business risk mitigation, and technology-driven legal strategies.

AIwire