The Inconvenient Truth about CLM
One of a CTO's ongoing tasks is keeping current with market trends, application changes, and new technology, and reviewing where systems or technologies have failed so the same mistakes are not repeated. To do this effectively, you must track what market analysts like Gartner, IDC, 451 Research, and others are discussing, as well as what your competition and others in your technology arena are positioning or advocating.
When we founded Seal, my co-founder Ulf Zetterberg and I shared a common vision: to fundamentally change how contracts are managed. We knew of the struggles and failures with CLM, and the issues we saw eight years ago have not gone away; they have only grown. Most CLM systems were developed and deployed with the mindset that if we templatize or standardize our contract base, and how we interact with external parties, we will be able to track, manage, and report on all contractual terms. This did not work then, and it does not work today.
With this in mind, three streams of technical opinion and discussion over the last few weeks have caught my attention. First, there is the old view that a CLM system with a different "universal" schema will enable CLM to succeed. This has already been proven ineffective. Second, there is the new "smart contracts" view: the idea that placing templates and workflow on a blockchain will fundamentally remove all the issues present in existing CLM systems built on a "universal" schema.
And finally, we have the "inconvenient truth" about CLM: in its current form, it will always fall short. This is the view that Ulf and I held when starting Seal, and it is what drove us to create the Seal platform, and the artificial intelligence at its heart.
The reason CLM generally fails is that it only deals well with one side of the equation: your own contracts. As noted in recent videos from Pramata, CLM can fall short on over 60% of the agreements you might have within the system, and those agreements are, in many cases, the most valuable in terms of revenue and involve the largest counterparties. The reason is that a large percentage of the companies you sell to will be larger than you, or command a stronger negotiating position, so you will be contracting on third-party paper that your CLM system is not configured to handle. Some companies have recognized the importance of this shift; in fact, Seal works extensively with Ariba to provide analytics and information extraction that augment the system when third-party paper is used.
When looking at how current and future generations will interact with systems, it's clear that the momentum is towards systems that learn and predict, because manually typing into a rigid system database, "universal schema" or otherwise, is just not going to generate the results needed. We are used to "tagging" pictures within Facebook or other systems, and after a short time we expect those systems to auto-tag our friends for us. Why should enterprise CLM systems be any different?
Many companies have seen the path Seal has forged. When we started with our first enterprise customer, as described in our CFO Mark Williams' recent blog post, we had no competition. Our NLP and AI vision was unique. How times change! Now we see many competitors claiming to be "Seal-like" and to have AI in some form. However, as I noted in a recent blog post, not all AI is equal. This is very clear within the CLM market, with players looking to adjust their systems to cope with changing requirements and CLM's failure to truly deliver.
In an ever-changing environment, companies rebrand and attempt to adapt old technology to the new world, acquiring vendors that carry the allure of "new technology" or adding the current market buzzwords. The truth is that when start-ups sell out, it is usually because they cannot gain traction on their own.
This is abundantly true within the AI and CLM space. Information extraction is not new: zonal OCR and clustering have been used within ECM and CLM systems for many years. However, it is now being rebranded as AI by companies jostling for a place in the new information-management age. In fact, Asgard, an AI-focused VC, has recently suggested that only 60% of the more than 400 European start-ups that clearly claim to be AI firms actually are.
Before anyone believes the hype from any vendor, they should simply ask the following questions. Does the system rely on clustering documents into templates to know which extractions to apply? Does the system perform zonal, or as some call it "hot-spot," information extraction? Does the system need a set of similar documents in order to cluster them, performing poorly when document formats are more varied? Any system that uses zonal extraction, or requires similar documents for extraction to work, is not an AI solution; it is just a rebadged zonal-OCR-and-cluster system with rules or regular expressions, which, as I mentioned, has been used in ECM digital mailrooms for many years. These systems cannot make predictions on unseen information, nor cope with documents outside a set format. So, much like the CLM templates, these systems are only good for the standard, and generally less valuable, contracts.
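The weakness of zonal extraction is easy to demonstrate. The toy sketch below (all names and coordinates are hypothetical, not any vendor's actual implementation) pulls words from a fixed bounding box on a page: it works on a templated document where the clause always sits in the same spot, and returns nothing when third-party paper moves the same clause elsewhere.

```python
def zonal_extract(page_words, region):
    """Return words whose (x, y) position falls inside a fixed bounding box."""
    x0, y0, x1, y1 = region
    return [w for (w, x, y) in page_words if x0 <= x <= x1 and y0 <= y <= y1]

# A templated document places the notice period at a known spot on the page...
templated = [("Notice", 50, 700), ("period:", 110, 700), ("30", 170, 700), ("days", 200, 700)]
# ...but third-party paper puts the same words somewhere else entirely.
third_party = [("Notice", 60, 300), ("period:", 120, 300), ("30", 180, 300), ("days", 210, 300)]

region = (40, 690, 260, 710)  # "hot spot" tuned to the template

print(zonal_extract(templated, region))    # ['Notice', 'period:', '30', 'days']
print(zonal_extract(third_party, region))  # [] -- the zone misses entirely
```

A learned model, by contrast, classifies the language itself rather than its position, which is why it can generalize to formats it has never seen.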
A critical part of any CLM, or even contract intelligence, solution is not just extraction accuracy; it is the speed and agility with which a business user can gain insight. One of the reasons current CLM solutions fail to provide full value for third-party paper is that they cannot quickly adapt to changing reporting or information-extraction requirements. Ask any vendor how long, after a Monday-morning data breach, it would take to process and locate all the contracts containing a clause not previously captured; extract from that clause a specific data point, such as the agreed-upon notification period to affected vendors that must be met; and sort the results from shortest time to longest. While this might appear simple, it involves AI and NLP combined: information must be detected in random formats (ML), with varying degrees of change (ML), and it must be normalized (NLP). And all of this must be performed in less than 24 hours.
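To make the shape of that query concrete, here is a minimal sketch of the last two steps, normalization and ranking. It is purely illustrative: the file names and clause texts are invented, and a regular expression stands in for the ML clause detection and NLP normalization a real system would need, precisely because real contracts do not follow one pattern.

```python
import re

# Hypothetical normalization table; a real system would use NLP, not a lookup.
UNIT_HOURS = {"hour": 1, "hours": 1, "day": 24, "days": 24, "week": 168, "weeks": 168}

def notification_period_hours(clause_text):
    """Pull a breach-notification period from clause text, normalized to hours.

    A regex stands in for the ML-based clause detection described above.
    """
    m = re.search(r"(\d+)\s+(hours?|days?|weeks?)", clause_text, re.IGNORECASE)
    if not m:
        return None
    return int(m.group(1)) * UNIT_HOURS[m.group(2).lower()]

contracts = {
    "acme-msa.pdf": "Supplier shall notify Customer of any data breach within 72 hours.",
    "globex-dpa.pdf": "Notice of a security incident must be given within 5 days.",
    "initech-nda.pdf": "Confidentiality survives termination for 2 years.",  # no breach clause
}

# Keep only contracts with a detectable notification period, shortest deadline first.
ranked = []
for name, text in contracts.items():
    hours = notification_period_hours(text)
    if hours is not None:
        ranked.append((name, hours))
ranked.sort(key=lambda pair: pair[1])

print(ranked)  # [('acme-msa.pdf', 72), ('globex-dpa.pdf', 120)]
```

The hard part, of course, is everything the regex glosses over: finding the clause at all when it is phrased a hundred different ways across thousands of documents on other parties' paper.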
So, if you are considering a CLM system, just ask yourself and your vendors a very important question — will this system allow me to find critical information myself, and within the time I have allotted? And then ask them to prove it, live.
You will see how few vendors can achieve this, and how the issue of third-party paper is one they really cannot overcome. This "inconvenient truth" of CLM is why Seal is so often brought into accounts to work alongside CLM and create a system that manages the contracts but, more importantly, all the data they contain, no matter whose paper they are on.