


In M&A today, a simple belief is gaining momentum. Better data will produce better deals.
Much of that belief is correct. For years, the market relied largely on public information, broad market signals, regulatory developments, industry reporting, and the instincts of experienced advisors. That world is changing. The focus is shifting toward proprietary deal data: past transactions, post-closing claims, and the patterns embedded in negotiated outcomes. Firms with access to those records will have a real advantage. Better data will significantly improve AI tools. It will improve preparation and sharpen how advisors anticipate disputes, structure diligence, and approach negotiations.
That part is already happening.
But the current conversation often makes a larger leap.
If enough data exists, many assume the rest can be automated.
That is where the thinking begins to weaken.
Even proprietary data still has to be understood in context. And context does not arise from the dataset itself. It comes from the particulars of the deal and from the people negotiating it.
A deal is not just a pattern. It is not a statistical artifact. It is a live negotiation between parties with different incentives, different pressures, different fears, different deadlines, and different thresholds for compromise. Past transactions can help us study the record of prior decision-making. They do not fully explain why those decisions were made. They do not reliably capture motive, leverage, fatigue, internal politics, timing pressure, or the fact that one side may concede on a smaller point to secure a larger objective somewhere else.
Two clauses can appear nearly identical and still mean very different things in practice. One concession may reflect carelessness. Another may be highly strategic. A party may yield on a minor issue not because the provision is unimportant, but because the real negotiation is happening elsewhere. The language is evidence. It is not the full explanation.
Machines can detect patterns in text. They can compare provisions, benchmark clauses, identify similarity across agreements, and read the record of what people have done before. What they cannot reliably do is reconstruct the full context that gave those patterns meaning in the first place. They can identify structure. They do not necessarily understand significance.
Once markets begin relying on the same datasets, the same benchmarks, and the same definitions of what is normal, dealmaking starts to converge around shared assumptions. That is usually described as efficiency. It can also become fragility.
Nassim Nicholas Taleb has written persuasively about systemic risk. Systems built around the same assumptions often appear stable right up until they are not. They become vulnerable to contagion because the same underlying logic is repeated across the system. The same principle applies here. If enough transactions are shaped by the same models of acceptable risk allocation, the market does not become safer. It becomes more synchronized. And synchronized systems do not fail one at a time. They fail in clusters.
This is one of the hidden risks of over-automation in dealmaking. When too many decisions are guided by the same data structures, the same benchmarks, and the same automated interpretations of what is market, the system may look more orderly while quietly becoming more brittle. Uniformity feels efficient. It is not always resilient.
In a manual environment, an error can remain local. A lawyer misreads a clause. A team overlooks an inconsistency. A mistaken interpretation may affect one draft, one process, one deal. In a scaled system, a small mistake does not stay small. It spreads. A flawed assumption, a bad rule, or an incomplete interpretation can propagate quietly across transactions until the mistake becomes embedded in the infrastructure itself.
At that point, the error is no longer incidental. It becomes systemic.
A system that merely repeats patterns is not antifragile. It does not learn from error. It distributes error. If humans are not meaningfully part of the process, the system does not correct itself through judgment. It simply scales whatever logic it was given, including its flaws.
None of this is an argument against proprietary data. On the contrary, proprietary data and document sets are valuable. Firms that use them well will outperform those that do not. The warning is narrower and more serious than that. Data, however sophisticated, is not a substitute for judgment. Pattern recognition is not understanding. Benchmarking is not reasoning. And automation is not the same thing as scrutiny.
The best use of technology in dealmaking is therefore more disciplined than the current rhetoric sometimes suggests.
The strongest systems do not pretend to automate judgment itself. They do something narrower and more valuable. They preserve the integrity of the deal record as the transaction changes. They compare drafts. They track revisions. They ensure that findings remain tied to source language. They make it harder for conclusions to drift out of date while documents continue moving underneath them.
That is the real problem in live deals.
Not the absence of data. The decay of accuracy.
During an active transaction, documents change constantly. Drafts circulate. Schedules evolve. Definitions tighten. Side letters introduce new obligations. A conclusion that was correct yesterday may no longer be correct tomorrow. This is where deal teams remain exposed. Not because they lack historical intelligence, but because the factual basis of the diligence record keeps moving.
At Aracor, that is the problem we are focused on solving. Our approach reflects a simple view of how deal work functions under pressure. Apply data where it sharpens preparation. Deploy technology where it strengthens verification. Keep judgment where it belongs, with the professionals responsible for the deal. The future of deal technology is not blind automation. It is disciplined infrastructure built for scrutiny, change, and real-world decisions.


Legal AI was supposed to automate work. Instead, many teams get generic outputs, hallucinated citations, and analysis that sounds plausible but fails to reflect how your team evaluates risk on a given transaction. The limiting factor isn’t model capability. It’s architecture — specifically, the absence of a system that can encode institutional judgment and enforce defensibility.
Most legal AI products rely on retrieval-augmented generation: retrieve documents, then generate analysis. But standard RAG pipelines don’t capture who is asking or what standards they operate under. A private equity firm acquiring a healthcare platform applies different materiality thresholds than a strategic buyer in the same sector. Cross-border M&A teams rank regulatory exposure differently than domestic boutiques. These differences aren’t stylistic. They reflect operating rules — risk hierarchies, output expectations, citation discipline, and escalation logic embedded in how teams make decisions. When systems can’t encode that judgment, humans have to post-process the output, which erodes efficiency and trust.
The solution is skills architecture: not prompts, but executable behavioral contracts that bind user context to model behavior. Skills define output structure, reasoning patterns, calibrated risk thresholds, and citation standards. They shape retrieval ranking, constrain generation, and validate results before anything is surfaced. Hyper-personalization thus happens across three layers: retrieval, where results are ranked against the team’s risk hierarchy; generation, where outputs are constrained to the team’s structure and thresholds; and validation, where citations and risk calibrations are checked before a result reaches the user.
Over time, this compounds. As more transactions move through the system, risk calibrations refine and output patterns strengthen. The system increasingly reflects how your team evaluates risk on a given transaction rather than producing generic assistance.
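As a rough illustration of what an executable behavioral contract might look like, the sketch below models a skill as a small object that enforces a calibrated risk vocabulary and a citation requirement before a finding can be surfaced. All names, thresholds, and the citation format are hypothetical, chosen for the example; this is not Aracor’s actual API or schema.

```python
# Hypothetical sketch of a "skill" as an executable behavioral contract.
# Labels, the citation pattern, and field names are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    risk_labels: tuple = ("low", "medium", "high")  # calibrated risk vocabulary
    citation_pattern: str = r"\[\S+ §[\d.]+\]"      # e.g. "[SPA §8.2]"
    require_citation: bool = True

    def validate(self, finding: dict) -> list:
        """Return a list of violations; an empty list means the finding may surface."""
        violations = []
        if finding.get("risk") not in self.risk_labels:
            violations.append(f"uncalibrated risk label: {finding.get('risk')!r}")
        if self.require_citation and not re.search(
            self.citation_pattern, finding.get("text", "")
        ):
            violations.append("no clause citation found")
        return violations

# A buyer-specific skill gates output: calibrated labels and clause citations only.
pe_buyer = Skill(name="pe-healthcare-buyer")
ok = pe_buyer.validate({"risk": "high", "text": "Cap is 10% of price [SPA §8.2]."})
bad = pe_buyer.validate({"risk": "severe", "text": "Cap looks fine."})
```

The point of the sketch is the validation gate itself: the contract runs after generation and blocks anything that does not conform, which is what distinguishes a skill from a prompt.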
This is where Aracor positions itself. Aracor embeds structured verification workflows that function as institutional skills. Comparisons are reproducible. Findings remain clause-linked as documents evolve. Outputs conform to calibrated risk and citation standards. Precision is engineered into the pipeline, so speed operates within discipline, not at its expense.
Lesly Arun Franco
Chief Technology Officer
Aracor


Imagine you are an M&A associate preparing a comparison memo two hours before a signing call. The partner wants confirmation that the indemnification cap and survival periods in the final agreement reflect what was negotiated. You use an integrated AI workflow. It retrieves drafts from the matter workspace, synthesizes differences, and produces a confident summary. You circulate it. After closing, a discrepancy surfaces. The summary pulled from an earlier draft that remained in the file set, and the output was not clause-linked or clearly version-aware. Now the question is not only what changed. It is whether the process was defensible, reproducible, and secure.
DeepJudge reflects a structural shift in legal AI. Rather than operating as a standalone research portal, it can now be called from within a general model environment. In the described workflow, the model calls DeepJudge, DeepJudge performs permission-aware search and synthesis across a firm’s prior matters and internal work product, and results are passed back for further reasoning and downstream steps.
That positioning is strategically correct. General-purpose models are increasingly becoming the default interface. What differentiates serious legal and deal AI is not a wrapper around inference. It is access to institutional knowledge, scoped by permissions, with provenance and workflow discipline. DeepJudge is positioning itself at that intersection.
This is also where the central paradox appears: orchestration increases capability, but it also creates a larger attack surface.
A bounded retrieval workflow can be relatively straightforward to govern. The model asks a question, the system retrieves authorized materials, and the model summarizes. Security is largely a matter of access control, tenant isolation, and careful handling of returned content.
Once the architecture evolves into an orchestrated, multi-step workflow, the security paradigm changes. A model environment can call a retrieval tool, receive results, trigger additional tool calls, and route context between steps. Based on public descriptions, we cannot assert that DeepJudge operates today as a fully autonomous, cross-system agent. What is clear is that once tools are connected in this way, multi-step chaining becomes structurally possible. As orchestration increases, so does the security surface.
This shift is not theoretical. It changes how legal AI risk must be evaluated.
Three risk categories become more acute.
First, workflow manipulation. In tool-connected systems, an attacker does not need to trick a chatbot for novelty. They can attempt to steer a workflow so that sensitive content retrieved in one step is used improperly in another, or routed beyond its intended boundary.
Second, intermediate-step integrity. When a process contains multiple steps, the final output may appear reasonable even if an intermediate stage was compromised or skewed. In transactional work, small distortions can translate into real economic consequences once documents are executed.
Third, data leakage between steps. Orchestration requires passing context forward. If the system is not strict about data minimization, matter-scoped permissions, and output controls at each boundary, sensitive information can surface later in a seemingly unrelated response. “Permission-aware” must therefore extend beyond search results to every handoff and every tool call.
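To make the handoff problem concrete, here is a minimal sketch of data minimization at an orchestration boundary: each step passes forward only the fields the next tool is authorized to see, and any cross-matter handoff is refused outright. The scopes, field names, and matter identifiers are invented for the illustration.

```python
# Illustrative sketch of enforcing data minimization and matter scoping
# at each orchestration handoff. All names and scopes are hypothetical.
def minimize(context: dict, allowed_fields: set, matter_id: str) -> dict:
    """Pass forward only authorized fields; block cross-matter handoffs."""
    if context.get("matter_id") != matter_id:
        raise PermissionError("cross-matter handoff blocked")
    return {k: v for k, v in context.items() if k in allowed_fields}

# Output of a retrieval step: it contains more than the next step needs.
step1_output = {
    "matter_id": "M-001",
    "clause_text": "Survival period: 18 months.",
    "privileged_notes": "Internal strategy: concede cap, hold survival.",
}

# The summarization step is scoped to clause text only, so privileged
# material never crosses the boundary even if the model asks for it.
to_step2 = minimize(step1_output, {"matter_id", "clause_text"}, "M-001")
```

The design choice worth noting is that the filter runs in the pipeline, outside the model: the downstream step cannot leak what it was never handed.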
This is where governance, not just capability, becomes the differentiator.
The governing standard in legal and finance work is defensibility. The question is not whether an answer sounds intelligent. It is whether the conclusion can be reconstructed through the record, tied directly to operative clauses, and explained under scrutiny.
As legal AI systems become more interconnected, this standard becomes harder, not easier, to meet. The more steps a workflow contains, the more important it is that each step is auditable, constrained, and traceable back to source material.
This is where Aracor fits.
Aracor is built around that standard of defensibility. The Aracor Deal Platform is one integrated system that connects the deal team to a single source of truth. As documents change, verification remains current. Every finding is traced directly to underlying source language in structured, consistent deliverables designed for review and reliance.
Security is built into the infrastructure and monitored continuously. Zero Data Retention is foundational. Execution environments are isolated. Encryption, access control, and audit logging are architectural requirements, not afterthoughts.
As legal AI becomes more connected, capability alone will not define leadership. Governance will. The systems that endure will be the ones that ensure speed never outruns accountability.


AI is delivering measurable gains in legal work. In Clio’s updated roundup of “21 research-backed ways” AI is helping lawyers, 58% of legal professionals say AI has increased the accuracy of their work, 65% say it has improved work quality, and 62% report time savings and increased efficiencies. The numbers are persuasive. The harder question is whether these gains are strengthening scrutiny or quietly diluting it.
Speed is measurable. Accountability is harder to see. In transactional work, that distinction matters because legal output is not just information. It is work product that must be relied upon, defended, and explained, often under time pressure and later under scrutiny.
Clio’s findings map cleanly onto what deal teams experience when first-pass tasks compress and bandwidth returns to higher-value judgment. Efficiency improves. Capacity expands. Teams feel more current, more responsive, and more productive. The leverage is real.
The risk is also real, and it starts when acceleration outpaces verification.
Consider a familiar scenario. You are counsel on a live deal. Drafts are moving quickly, and stakeholders are pushing to close. A side letter arrives late. A definition changes in the agreement, and the change is not obvious unless you read it against prior versions. You use AI to summarize what changed. The output reads cleanly and confidently. You forward it, and the team moves.
Then the question comes back, sometimes the same day, sometimes months later. Where is the support? Which clause? Which version? What changed? When? How did we get comfortable with this?
In that moment, speed stops being the metric. The record becomes the metric.
This is why the AI conversation in transactional practice should not revolve around drafting speed alone. It should revolve around defensibility. Legal work carries liability. It must withstand review by investment committees, boards, audit functions, and regulators. It must survive the simplest follow-up question and the one that matters most: why.
AI can produce fluent outputs that feel authoritative. The professional risk is not that attorneys stop thinking. The risk is that teams begin to accept conclusions without preserving the pathway from evidence to outcome. In transactional practice, that pathway is not optional. It is the basis for reliance.
Deals change by version. Term sheets evolve. Side letters introduce variation. Disclosure schedules and exhibits move late. Definitions shift and change obligations. Under compressed timelines, small inconsistencies slip through most easily when review is fragmented across email threads, trackers, and point-in-time summaries. If AI accelerates output but the workflow does not preserve traceability, teams can move faster while becoming less able to show their work.
Acceleration without traceability is exposure.
Traceability, in this context, is practical. It means structured outputs that can be reviewed and circulated, not just read once. It means reproducible analysis that can be re-run as documents change. It means direct links back to the specific clause language that supports each finding. It means a workflow that reduces cognitive load without obscuring how a conclusion was reached.
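One way to picture clause-linked, version-aware findings is the sketch below: each finding records the exact language it relies on and a fingerprint of the document version reviewed, so staleness is detectable the moment the underlying text changes. The structure and field names are assumptions made for illustration, not a specific product schema.

```python
# A minimal sketch of clause-linked, version-aware findings.
# Field names and the clause reference format are illustrative.
import hashlib
from dataclasses import dataclass

def doc_hash(text: str) -> str:
    """Fingerprint a document version so later drift is detectable."""
    return hashlib.sha256(text.encode()).hexdigest()[:12]

@dataclass
class Finding:
    clause_id: str     # e.g. "SPA §8.2"
    quoted_text: str   # the exact language relied upon
    source_hash: str   # fingerprint of the version reviewed

    def is_stale(self, current_text: str) -> bool:
        """A finding decays the moment the underlying document changes."""
        return self.source_hash != doc_hash(current_text)

draft_v3 = "Indemnification cap: 10% of purchase price."
f = Finding("SPA §8.2", "cap: 10% of purchase price", doc_hash(draft_v3))

draft_v4 = "Indemnification cap: 12% of purchase price."
f.is_stale(draft_v3)  # False: the conclusion still matches the record
f.is_stale(draft_v4)  # True: the factual basis has moved
```

Nothing in the sketch interprets the clause; it only preserves the pathway from evidence to conclusion, which is exactly the property a reviewer needs when the "why" question comes back.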
That is the line separating helpful acceleration from hidden risk.
This is where Aracor joins the conversation.
Aracor is built as one integrated deal platform that provides a single source of truth for the deal team. It is designed around verification over conversation. The purpose is not to generate plausible answers quickly. The purpose is to preserve accountability by keeping findings tied to the underlying language teams must actually rely upon. In practice, that means outputs designed for review and circulation, comparisons that can be repeated as drafts evolve, and a clear path back to what the documents actually say, so teams can move faster without losing the ability to defend their conclusions.
As AI becomes routine, the determining factor will not be how quickly a system drafts or summarizes. The determining factor will be whether it preserves the conditions that make professional judgment possible, including traceability, reproducibility, and a defensible record.
Speed matters. In transactions, accountability matters more.


In 2012, Caterpillar completed its acquisition of ERA Mining Machinery Ltd., a Hong Kong–listed company whose primary operating subsidiary was Zhengzhou Siwei Mechanical & Electrical Manufacturing Co., Ltd., known as Siwei. The acquisition was strategic. China was and remains the largest producer and consumer of coal in the world, and Siwei was positioned as a foothold in that market. Within months, Caterpillar announced that an internal investigation had uncovered deliberate, multi-year, coordinated accounting misconduct at Siwei, resulting in a significant goodwill impairment and a subsequent settlement, as described in Caterpillar’s official press release.
The press release is factual and restrained. It describes a dispute that was resolved, obligations that were reduced, and claims that were released. On paper, the matter was closed.
From a dealmaking perspective, the lasting value of the case is not the settlement number. It is the structural lesson: verification can fail even in sophisticated transactions when the process relies on snapshots rather than continuous alignment with the evolving deal record.
The headline issue in the Siwei case was accounting misconduct in a subsidiary. That is a control failure. But the deeper vulnerability is more general and far more common.
Cross-border acquisitions create layered information environments. Operational data sits with local teams. Reporting conventions vary. Documentation may be staged and translated. Advisors and internal teams work in parallel. Drafts circulate quickly and often in multiple versions.
Even when diligence is serious, it is usually episodic. A set of materials is reviewed, findings are summarized, and the deal moves forward while documents and assumptions continue to change. The record evolves, but the verification of the record often does not evolve with it.
That gap is where drift enters. Drift between what was understood and what is actually in the final documentation. Drift between what was represented and what can be substantiated. Drift between the negotiated position and what survives execution.
This is one of the most common causes of verification failures in deals. The process checks a moment in time, while the transaction continues to move.
It is tempting to treat Siwei as an old case from a different era. The mechanics have not changed. If anything, the pressure is greater.
Deal timelines are tighter. Volume is higher. Document sets are larger. Teams increasingly use AI tools to move faster through review and drafting. Speed is useful, but it magnifies the cost of a weak verification structure.
In most transactions, the same patterns recur: diligence is episodic, documents keep changing after review, and the summaries teams rely on go stale while the record moves.
The result is not always fraud. More often, it is quiet misalignment that survives until after signing, when the cost of fixing it rises sharply. That cost may show up as an impairment, a post-close dispute, a governance issue, or an audit and compliance problem. The exact outcome varies. The root cause is consistent: the verification process was not continuously connected to the evolving record.
For legal and finance professionals, the risk is not only that something is missed. It is that conclusions become hard to defend through the record.
A deal file can look complete while still failing the practical test of defensibility. If a deviation cannot be reproduced cleanly, if a finding cannot be traced directly to source language, or if an internal conclusion depends on a stale summary while the operative text has changed, then reliance becomes harder to support, even when the team acted diligently.
Professional responsibility and investment discipline both require more than insight. They require reconstructability.
This is where M&A diligence risk becomes less about spotting a single issue and more about whether the work product can stand up to scrutiny after the fact.
The solution is not to demand perfect diligence. No system eliminates misconduct or removes the need for judgment.
The solution is to reduce drift by making verification continuous and evidence-linked.
In practical terms, that means re-running comparisons as documents change, keeping every finding linked to the operative clause language, and maintaining a single evidence trail that supports review, escalation, and reliance.
This is not more work. It is better infrastructure for the work that already exists.
Aracor is built as an integrated dealmaking platform designed to protect clients, users, and the organizations that rely on the work product. It keeps verification current as documents evolve and produces structured, clause-linked outputs so deal teams can see what changed, where it changed, and why it matters.
The objective is straightforward: reduce the likelihood of deal failure by maintaining a single source of truth throughout the transaction process, with a consistent evidence trail that supports review, escalation, and defensible reliance.


Aracor CEO and Co-Founder Katya Fisher joined Fritz Spencer on EisnerAmper’s TechTalk to discuss how Aracor is reshaping the future of M&A, investment, and due diligence.
Katya explains how the Aracor AI Dealmaking Platform brings order, speed, and clarity to complex transactions. She highlights how Aracor helps eliminate institutional amnesia so deal teams retain the intelligence behind every decision, clause, and negotiation.
The conversation also covers one of the most important topics in modern diligence: security and compliance in AI. Katya outlines how Aracor’s secure architecture, Zero Data Retention, and enterprise-grade controls provide the power of AI without compromising trust.
For professionals in M&A, private equity, venture, or corporate legal, this is a clear look at how technology is elevating the business of deals.
Don’t miss it!
We’re pleased to share that Aracor has been recognized by the Legal Insider Awards 2025 in the category “Most Innovative AI-Native Dealmaking Platform – Miami 2025.”
This award is an important milestone for our team. We’ve been focused on building a tool that makes the deal process clearer and easier for lawyers, investors, and companies. Aracor helps teams review documents, prepare deal materials, and manage the full process from start to finish — all in one place.
We’re grateful to Legal Insider for this recognition, and to our customers and partners for their trust and feedback. It motivates us to keep improving the product.
Read the official announcement on Legal Insider


The Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a harmonized framework for the governance of artificial intelligence within the European Union. Adopted on 13 June 2024 and published in the Official Journal on 12 July 2024, the EU AI Act has become the foundation of every serious discussion on responsible AI.
What began as a legislative abstraction now defines how organizations must design, procure, and deploy intelligent systems.
The Act captured attention because it set out the what: which systems are prohibited, which are deemed high-risk, and which obligations apply to providers, deployers, importers, and distributors. That clarity mattered then; it matters still more now, as compliance reshapes contracts, procurement, and reputation across industries.
The greater challenge is the how. Knowing the rules is one thing; proving that systems are secure, transparent, and accountable is another.
It is in this respect that Aracor distinguishes itself.
For Aracor, as for many AI tools used in deal workflows that may fall into the Act’s minimal-risk category, the immediate issue is not the Act’s heaviest layer of direct regulation. It is whether the system can satisfy rising expectations around security, traceability, and control as AI governance continues to shape market standards and future regulation. The Act is risk-based: its strictest obligations attach to higher-risk uses, not to minimal-risk systems.
Aracor has built security and governance into the structure of its platform. Zero Data Retention (ZDR) ensures that client information is not retained beyond immediate use, reducing one of the principal risks of AI workflows. Multifactor authentication helps prevent unauthorized access, while comprehensive audit records create a verified account of system activity, ensuring transparency and traceability.
The company holds ISO 27001 certification and SOC 2 attestation, providing independent assurance around its information security controls. GDPR alignment supports operation under one of the world’s most rigorous privacy regimes, while independent penetration testing places systems under continual scrutiny, with findings remediated on defined timetables.
At its foundation, Aracor operates within a closed private environment, employing secured language models so that sensitive legal and financial data remain under client control.
In venture capital, private equity, and mergers and acquisitions, documents are not merely records; they constitute the transactions themselves. A single security failure may delay negotiations, erode trust, and compromise value. Compliance, therefore, is not a procedural exercise but an expression of integrity and discipline.
The EU AI Act defines the what. Aracor addresses the how, giving organizations a more secure, traceable, and controlled way to use AI in high-consequence workflows.


OpenAI's launch of AgentKit last week reignited debate across the AI industry about whether visual, node-based workflow builders represent the future of agent development or a pattern we should be moving beyond. In legal tech, where similar tools already exist, it raises a sharper question: is this really the interface lawyers need?
The familiar pattern
AgentKit’s Agent Builder uses a drag-and-drop interface with arrows and branching paths. It looks familiar to anyone who has used tools like Zapier or n8n. While OpenAI’s release targets developers, legal tech companies have been experimenting with comparable systems that let firms map processes visually, sometimes even through natural language.
The appeal is clear. Legal work is structured and repeatable, which makes it seem ideal for visual workflows. Yet we must ask whether this model truly serves lawyers.
The maintenance problem
Visual workflow tools often collapse under real-world complexity. The challenge lies not in the interface but in the engineering behind it: handling exceptions, managing state, fixing errors, and updating as needs evolve.
Even strong legal operations teams struggle here. There is a big difference between “this process can be mapped visually” and “lawyers should maintain these diagrams.” Making a workflow visual does not remove complexity; it shifts it to the person responsible for keeping it running.
Large firms can dedicate people to maintain such systems. Smaller teams cannot. For them, the workflow builder becomes a burden. Technically minded lawyers who enjoy automation exist, but they are rare. Designing for them risks leaving most lawyers behind.
What should we build instead?
Progress will not come from giving every lawyer a node editor. It will come from designing interfaces that mirror how lawyers actually work. Imagine checklist-style flows that focus on outcomes, where the system manages logic and routing automatically. No diagrams to decipher, no branching paths to debug.
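A checklist-style flow of that kind can be sketched in a few lines: the lawyer declares outcomes and their dependencies, and the engine derives the ordering and routing itself, so there is no diagram to maintain. The task names and the dependency format below are hypothetical, invented for the example.

```python
# Hypothetical sketch of an outcome-focused checklist: the user declares
# what must be true; the engine derives ordering from dependencies.
checklist = {
    "collect_drafts":   {"needs": []},
    "compare_versions": {"needs": ["collect_drafts"]},
    "flag_deviations":  {"needs": ["compare_versions"]},
    "partner_review":   {"needs": ["flag_deviations"]},
}

def run_order(tasks: dict) -> list:
    """Topologically order tasks so each runs only after its dependencies."""
    done, order = set(), []
    while len(done) < len(tasks):
        ready = [t for t, spec in tasks.items()
                 if t not in done and all(d in done for d in spec["needs"])]
        if not ready:
            raise ValueError("circular dependency in checklist")
        for t in ready:
            order.append(t)
            done.add(t)
    return order
```

The contrast with a node editor is the point: adding a step means declaring one outcome and its prerequisites, not rewiring arrows, and the routing logic stays in the engine rather than in a diagram someone must maintain.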
At Aracor, we are building toward that model: agentic systems that manage the process, not just the document. Instead of asking lawyers to design or maintain workflows, Aracor’s platform continuously verifies deal documents, detects changes automatically, and updates analyses in real time. Intelligence lives inside the workflow, keeping diligence, negotiation, and compliance aligned across the deal cycle. It replaces static reviews with living processes, giving legal and deal teams clarity without added maintenance.
Lawyers still need transparency. They cannot operate inside black boxes. But transparency does not mean showing every wire and node. It means revealing what matters: what is happening, why it matters, and what effect it has.
The next wave
The future will not belong to visual builders but to systems that think and act within defined boundaries. Aracor’s agentic workflows already point there: executing deal processes end to end, monitoring updates, and maintaining analysis continuously. These agents focus on execution, not configuration, allowing lawyers to stay anchored in judgment while the system keeps every moving part in sync.