


Legal AI was supposed to automate work. Instead, many teams get generic outputs, hallucinated citations, and analysis that sounds plausible but fails to reflect how your team evaluates risk on a given transaction. The limiting factor isn’t model capability. It’s architecture — specifically, the absence of a system that can encode institutional judgment and enforce defensibility.
Most legal AI products rely on retrieval-augmented generation: retrieve documents, then generate analysis. But standard RAG pipelines don’t capture who is asking or what standards they operate under. A private equity firm acquiring a healthcare platform applies different materiality thresholds than a strategic buyer in the same sector. Cross-border M&A teams rank regulatory exposure differently than domestic boutiques. These differences aren’t stylistic. They reflect operating rules — risk hierarchies, output expectations, citation discipline, and escalation logic embedded in how teams make decisions. When systems can’t encode that judgment, humans have to post-process the output, which erodes efficiency and trust.
The solution is skills architecture: not prompts, but executable behavioral contracts that bind user context to model behavior. Skills define output structure, reasoning patterns, calibrated risk thresholds, and citation standards. Hyper-personalization then happens across three layers: skills shape retrieval ranking, they constrain generation, and they validate results before anything is surfaced.
Over time, this compounds. As more transactions move through the system, risk calibrations refine and output patterns strengthen. The system increasingly reflects how your team evaluates risk on a given transaction rather than producing generic assistance.
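The skill concept described above can be sketched in code. This is a minimal illustration, assuming a skill carries a team's calibrated risk threshold and citation rules and applies them at both retrieval and validation; `Skill`, `rank`, `validate`, and the finding fields are hypothetical names, not Aracor's actual API.

```python
from dataclasses import dataclass

# Illustrative sketch only: the class, fields, and finding shape are
# hypothetical, not a real product API.

@dataclass
class Skill:
    team: str
    risk_threshold: float          # materiality cutoff calibrated per team
    require_citations: bool = True # citation discipline

    def rank(self, passages: list[dict]) -> list[dict]:
        # Shape retrieval: surface higher-risk passages first for this team.
        return sorted(passages, key=lambda p: p["risk"], reverse=True)

    def validate(self, finding: dict) -> bool:
        # Gate output: enforce the materiality threshold and citation
        # discipline before anything is surfaced.
        cited = finding.get("clause_ref") is not None
        material = finding["risk"] >= self.risk_threshold
        return material and (cited or not self.require_citations)

# A PE buyer's calibration differs from a strategic buyer's.
pe_buyer = Skill(team="pe-healthcare", risk_threshold=0.7)

findings = [
    {"risk": 0.9, "clause_ref": "Section 8.2"},  # material and cited: surfaced
    {"risk": 0.9, "clause_ref": None},           # uncited: blocked
    {"risk": 0.3, "clause_ref": "Section 4.1"},  # below threshold: blocked
]
surfaced = [f for f in findings if pe_buyer.validate(f)]
```

The point of the design is that the same pipeline produces different, team-specific results simply by swapping the contract, rather than by post-processing generic output.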
This is where Aracor positions itself. Aracor embeds structured verification workflows that function as institutional skills. Comparisons are reproducible. Findings remain clause-linked as documents evolve. Outputs conform to calibrated risk and citation standards. Precision is engineered into the pipeline, so speed operates within discipline, not at its expense.
Lesly Arun Franco
Chief Technology Officer
Aracor


Imagine you are an M&A associate preparing a comparison memo two hours before a signing call. The partner wants confirmation that the indemnification cap and survival periods in the final agreement reflect what was negotiated. You use an integrated AI workflow. It retrieves drafts from the matter workspace, synthesizes differences, and produces a confident summary. You circulate it. After closing, a discrepancy surfaces. The summary pulled from an earlier draft that remained in the file set, and the output was not clause-linked or clearly version-aware. Now the question is not only what changed. It is whether the process was defensible, reproducible, and secure.
DeepJudge reflects a structural shift in legal AI. Rather than operating as a standalone research portal, it can now be called from within a general model environment. In the described workflow, the model calls DeepJudge, DeepJudge performs permission-aware search and synthesis across a firm’s prior matters and internal work product, and results are passed back for further reasoning and downstream steps.
That positioning is strategically correct. General-purpose models are increasingly becoming the default interface. What differentiates serious legal and deal AI is not a wrapper around inference. It is access to institutional knowledge, scoped by permissions, with provenance and workflow discipline. DeepJudge is positioning itself at that intersection.
This is also where the central paradox appears: orchestration increases capability, but it also creates a larger attack surface.
A bounded retrieval workflow can be relatively straightforward to govern. The model asks a question, the system retrieves authorized materials, and the model summarizes. Security is largely a matter of access control, tenant isolation, and careful handling of returned content.
Once the architecture evolves into an orchestrated, multi-step workflow, the security paradigm changes. A model environment can call a retrieval tool, receive results, trigger additional tool calls, and route context between steps. We cannot assert that DeepJudge is operating as a fully autonomous, cross-system agent today based on public descriptions. What is clear is that once tools are connected in this way, multi-step chaining becomes structurally possible. As orchestration increases, so does the security surface.
This shift is not theoretical. It changes how legal AI risk must be evaluated.
Three risk categories become more acute.
First, workflow manipulation. In tool-connected systems, an attacker no longer needs to trick a chatbot into a single bad reply. They can attempt to steer a workflow so that sensitive content retrieved in one step is used improperly in another, or routed beyond its intended boundary.
Second, intermediate-step integrity. When a process contains multiple steps, the final output may appear reasonable even if an intermediate stage was compromised or skewed. In transactional work, small distortions can translate into real economic consequences once documents are executed.
Third, data leakage between steps. Orchestration requires passing context forward. If the system is not strict about data minimization, matter-scoped permissions, and output controls at each boundary, sensitive information can surface later in a seemingly unrelated response. Permission awareness must therefore extend beyond search results to every handoff and every tool call.
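The boundary discipline described above can be sketched simply: every handoff between orchestration steps re-checks matter-scoped permissions and forwards only the fields the next step needs. All names here (`handoff`, `ALLOWED_MATTERS`, the payload fields) are hypothetical, for illustration only.

```python
# Hypothetical sketch: permission checks and data minimization enforced
# at every step boundary, not only at initial retrieval.

ALLOWED_MATTERS = {"analyst-1": {"matter-azure"}}  # caller -> matters in scope

def handoff(caller: str, payload: dict, needed: set[str]) -> dict:
    # Re-check matter-scoped permissions at the boundary.
    if payload["matter_id"] not in ALLOWED_MATTERS.get(caller, set()):
        raise PermissionError(f"{caller} is not scoped to {payload['matter_id']}")
    # Data minimization: drop everything the next step does not require.
    return {k: v for k, v in payload.items() if k in needed or k == "matter_id"}

doc = {
    "matter_id": "matter-azure",
    "clause": "Section 8.2: the cap shall not exceed...",
    "client_notes": "privileged commentary",
}

# The summarization step needs the clause text, not the privileged notes.
forwarded = handoff("analyst-1", doc, needed={"clause"})
```

Under this pattern, content retrieved in one step cannot silently leak into a later step, because the privileged fields never cross the boundary in the first place.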
This is where governance, not just capability, becomes the differentiator.
The governing standard in legal and finance work is defensibility. The question is not whether an answer sounds intelligent. It is whether the conclusion can be reconstructed through the record, tied directly to operative clauses, and explained under scrutiny.
As legal AI systems become more interconnected, this standard becomes harder, not easier, to meet. The more steps a workflow contains, the more important it is that each step is auditable, constrained, and traceable back to source material.
This is where Aracor fits.
Aracor is built around that standard of defensibility. The Aracor Deal Platform is one integrated system that connects the deal team to a single source of truth. As documents change, verification remains current. Every finding is traced directly to underlying source language in structured, consistent deliverables designed for review and reliance.
Security is built into the infrastructure and monitored continuously. Zero Data Retention is foundational. Execution environments are isolated. Encryption, access control, and audit logging are architectural requirements, not afterthoughts.
As legal AI becomes more connected, capability alone will not define leadership. Governance will. The systems that endure will be the ones that ensure speed never outruns accountability.


AI is delivering measurable gains in legal work. In Clio’s updated roundup of “21 research-backed ways” AI is helping lawyers, 58% of legal professionals say AI has increased the accuracy of their work, 65% say it has improved work quality, and 62% report time savings and increased efficiencies. The numbers are persuasive. The harder question is whether these gains are strengthening scrutiny or quietly diluting it.
Speed is measurable. Accountability is harder to see. In transactional work, that distinction matters because legal output is not just information. It is work product that must be relied upon, defended, and explained, often under time pressure and later under scrutiny.
Clio’s findings map cleanly onto what deal teams experience when first-pass tasks compress and bandwidth returns to higher-value judgment. Efficiency improves. Capacity expands. Teams feel more current, more responsive, and more productive. The leverage is real.
The risk is also real, and it starts when acceleration outpaces verification.
Consider a familiar scenario. You are counsel on a live deal. Drafts are moving quickly, and stakeholders are pushing to close. A side letter arrives late. A definition changes in the agreement, and the change is not obvious unless you read it against prior versions. You use AI to summarize what changed. The output reads cleanly and confidently. You forward it, and the team moves.
Then the question comes back, sometimes the same day, sometimes months later. Where is the support? Which clause? Which version? What changed? When? How did we get comfortable with this?
In that moment, speed stops being the metric. The record becomes the metric.
This is why the AI conversation in transactional practice should not revolve around drafting speed alone. It should revolve around defensibility. Legal work carries liability. It must withstand review by investment committees, boards, audit functions, and regulators. It must survive the simplest follow-up question and the one that matters most: why.
AI can produce fluent outputs that feel authoritative. The professional risk is not that attorneys stop thinking. The risk is that teams begin to accept conclusions without preserving the pathway from evidence to outcome. In transactional practice, that pathway is not optional. It is the basis for reliance.
Deals change by version. Term sheets evolve. Side letters introduce variation. Disclosure schedules and exhibits move late. Definitions shift and change obligations. Under compressed timelines, small inconsistencies slip through most easily when review is fragmented across email threads, trackers, and point-in-time summaries. If AI accelerates output but the workflow does not preserve traceability, teams can move faster while becoming less able to show their work.
Acceleration without traceability is exposure.
Traceability, in this context, is practical. It means structured outputs that can be reviewed and circulated, not just read once. It means reproducible analysis that can be re-run as documents change. It means direct links back to the specific clause language that supports each finding. It means a workflow that reduces cognitive load without obscuring how a conclusion was reached.
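Traceability in this sense can be sketched in a few lines. This is a minimal illustration, assuming a finding carries its clause reference plus a hash of the exact draft it was derived from; `Finding`, `version_hash`, and `is_current` are illustrative names, not a real product API.

```python
import hashlib
from dataclasses import dataclass

# Illustrative sketch: each finding is pinned to the clause it cites and
# to the exact document version it was derived from.

def version_hash(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()[:12]

@dataclass(frozen=True)
class Finding:
    conclusion: str
    clause_ref: str   # direct link back to the supporting clause
    doc_version: str  # hash of the draft the finding was taken from

    def is_current(self, latest_text: str) -> bool:
        # Reproducible check: does the conclusion still rest on the
        # operative text, or on a stale version?
        return self.doc_version == version_hash(latest_text)

draft_v3 = "8.2 Indemnification cap: 10% of the Purchase Price."
finding = Finding("Cap confirmed at 10%", "Section 8.2", version_hash(draft_v3))

draft_v4 = "8.2 Indemnification cap: 15% of the Purchase Price."
stale = not finding.is_current(draft_v4)  # the summary must be re-run
```

The design choice is the point: because the analysis is a function of the document version, it can be re-run whenever drafts move, and a stale conclusion announces itself instead of surviving into the record.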
That is the line separating helpful acceleration from hidden risk.
This is where Aracor joins the conversation.
Aracor is built as one integrated deal platform that provides a single source of truth for the deal team. It is designed around verification over conversation. The purpose is not to generate plausible answers quickly. The purpose is to preserve accountability by keeping findings tied to the underlying language teams must actually rely upon. In practice, that means outputs designed for review and circulation, comparisons that can be repeated as drafts evolve, and a clear path back to what the documents actually say, so teams can move faster without losing the ability to defend their conclusions.
As AI becomes routine, the determining factor will not be how quickly a system drafts or summarizes. The determining factor will be whether it preserves the conditions that make professional judgment possible, including traceability, reproducibility, and a defensible record.
Speed matters. In transactions, accountability matters more.


In 2012, Caterpillar completed its acquisition of ERA Mining Machinery Ltd., a Hong Kong–listed company whose primary operating subsidiary was Zhengzhou Siwei Mechanical & Electrical Manufacturing Co., Ltd., known as Siwei. The acquisition was strategic. China was and remains the largest producer and consumer of coal in the world, and Siwei was positioned as a foothold in that market. Within months, Caterpillar announced that an internal investigation had uncovered deliberate, multi-year, coordinated accounting misconduct at Siwei, resulting in a significant goodwill impairment and a subsequent settlement, as described in Caterpillar’s official press release.
The press release is factual and restrained. It describes a dispute that was resolved, obligations that were reduced, and claims that were released. On paper, the matter was closed.
From a dealmaking perspective, the lasting value of the case is not the settlement number. It is the structural lesson: verification can fail even in sophisticated transactions when the process relies on snapshots rather than continuous alignment with the evolving deal record.
The headline issue in the Siwei case was accounting misconduct in a subsidiary. That is a control failure. But the deeper vulnerability is more general and far more common.
Cross-border acquisitions create layered information environments. Operational data sits with local teams. Reporting conventions vary. Documentation may be staged and translated. Advisors and internal teams work in parallel. Drafts circulate quickly and often in multiple versions.
Even when diligence is serious, it is usually episodic. A set of materials is reviewed, findings are summarized, and the deal moves forward while documents and assumptions continue to change. The record evolves, but the verification of the record often does not evolve with it.
That gap is where drift enters. Drift between what was understood and what is actually in the final documentation. Drift between what was represented and what can be substantiated. Drift between the negotiated position and what survives execution.
This is one of the most common causes of verification failures in deals. The process checks a moment in time, while the transaction continues to move.
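The snapshot problem above can be made concrete. The sketch below is purely illustrative, assuming a diligence process records the clause text it verified and later re-checks every prior finding against the latest drafts; none of these names come from any real system.

```python
# Hypothetical sketch: episodic diligence checks the record once;
# continuous verification re-checks every prior finding whenever the
# record changes.

verified: dict[str, str] = {}  # clause_ref -> text as verified at the snapshot

def verify_snapshot(clause_ref: str, text: str) -> None:
    verified[clause_ref] = text  # point-in-time check

def drift(latest: dict[str, str]) -> list[str]:
    # Clauses whose operative text no longer matches what was verified.
    return [ref for ref, text in verified.items() if latest.get(ref) != text]

verify_snapshot("SPA 8.2", "Cap: 10% of Purchase Price")
verify_snapshot("SPA 9.1", "Survival: 18 months")

# The deal keeps moving after the snapshot...
latest_drafts = {
    "SPA 8.2": "Cap: 15% of Purchase Price",
    "SPA 9.1": "Survival: 18 months",
}
drifted = drift(latest_drafts)  # only the changed clause is flagged
```

Episodic review stops at `verify_snapshot`; continuous verification keeps running `drift` as the record evolves, so misalignment surfaces before signing rather than after.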
It is tempting to treat Siwei as an old case from a different era. The mechanics have not changed. If anything, the pressure is greater.
Deal timelines are tighter. Volume is higher. Document sets are larger. Teams increasingly use AI tools to move faster through review and drafting. Speed is useful, but it magnifies the cost of a weak verification structure.
In most transactions, the same patterns recur: drafts circulating in multiple versions, episodic review of a record that keeps moving, and summaries that go stale as the documents change.
The result is not always fraud. More often, it is quiet misalignment that survives until after signing, when the cost of fixing it rises sharply. That cost may show up as an impairment, a post-close dispute, a governance issue, or an audit and compliance problem. The exact outcome varies. The root cause is consistent: the verification process was not continuously connected to the evolving record.
For legal and finance professionals, the risk is not only that something is missed. It is that conclusions become hard to defend through the record.
A deal file can look complete while still failing the practical test of defensibility: a deviation that cannot be reproduced cleanly, a finding that cannot be traced directly to source language, an internal conclusion that depends on a stale summary while the operative text has changed. In those cases, reliance becomes harder to support, even when the team acted diligently.
Professional responsibility and investment discipline both require more than insight. They require reconstructability.
This is where M&A diligence risk becomes less about spotting a single issue and more about whether the work product can stand up to scrutiny after the fact.
The solution is not to demand perfect diligence. No system eliminates misconduct or removes the need for judgment.
The solution is to reduce drift by making verification continuous and evidence-linked.
In practical terms, that means verification that re-runs as documents change, findings that stay linked to the operative clause language, and a consistent evidence trail that supports later review and escalation.
This is not more work. It is better infrastructure for the work that already exists.
Aracor is built as an integrated dealmaking platform designed to protect clients, users, and the organizations that rely on the work product. It keeps verification current as documents evolve and produces structured, clause-linked outputs so deal teams can see what changed, where it changed, and why it matters.
The objective is straightforward: reduce the likelihood of deal failure by maintaining a single source of truth throughout the transaction process, with a consistent evidence trail that supports review, escalation, and defensible reliance.


Aracor CEO and Co-Founder Katya Fisher joined Fritz Spencer on EisnerAmper’s TechTalk to discuss how Aracor is reshaping the future of M&A, investment, and due diligence.
Katya explains how the Aracor AI Dealmaking Platform brings order, speed, and clarity to complex transactions. She highlights how Aracor helps eliminate institutional amnesia so deal teams retain the intelligence behind every decision, clause, and negotiation.
The conversation also covers one of the most important topics in modern diligence: security and compliance in AI. Katya outlines how Aracor’s secure architecture, Zero Data Retention, and enterprise-grade controls provide the power of AI without compromising trust.
For professionals in M&A, private equity, venture, or corporate legal, this is a clear look at how technology is elevating the business of deals.
Don't miss it!
We’re pleased to share that Aracor has been recognized by the Legal Insider Awards 2025 in the category “Most Innovative AI-Native Dealmaking Platform – Miami 2025.”
This award is an important milestone for our team. We’ve been focused on building a tool that makes the deal process clearer and easier for lawyers, investors, and companies. Aracor helps teams review documents, prepare deal materials, and manage the full process from start to finish — all in one place.
We’re grateful to Legal Insider for this recognition, and to our customers and partners for their trust and feedback. It motivates us to keep improving the product.
Read the official announcement on Legal Insider


The Artificial Intelligence Act (Regulation (EU) 2024/1689) establishes a harmonized framework for the governance of artificial intelligence within the European Union. Published in the Official Journal on 12 July 2024, the EU AI Act has become the foundation of every serious discussion on responsible AI.
What began as a legislative abstraction now defines how organizations must design, procure, and deploy intelligent systems.
The Act captured attention because it set out the what: which systems are prohibited, which are deemed high-risk, and which obligations apply to both providers and purchasers. That clarity mattered then; it matters still more now, as compliance reshapes contracts, procurement, and reputation across industries.
The greater challenge is the how. Knowing the rules is one thing; proving that systems are secure, transparent, and accountable is another.
It is in this respect that Aracor distinguishes itself.
Aracor has built compliance and security into the structure of its platform. Zero Data Retention (ZDR) ensures that no client information is retained beyond immediate use, removing one of the principal risks of AI workflows. Multifactor authentication prevents unauthorized access, while comprehensive audit records create a verified account of every action, ensuring transparency and traceability.
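The audit-record idea admits a simple sketch: an append-only log in which each entry commits to the previous entry's hash, making the account of actions tamper-evident. This is a generic illustration of hash-chained logging, not Aracor's actual implementation; all names are hypothetical.

```python
import hashlib
import json

# Illustrative hash-chained audit log: each entry commits to the
# previous entry, so any later modification breaks the chain.

def append(log: list[dict], actor: str, action: str) -> None:
    entry = {"actor": actor, "action": action,
             "prev": log[-1]["hash"] if log else "genesis"}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    prev = "genesis"
    for e in log:
        body = {"actor": e["actor"], "action": e["action"], "prev": e["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log: list[dict] = []
append(log, "analyst-1", "viewed SPA draft v4")
append(log, "analyst-1", "exported comparison memo")
ok_before = verify(log)          # chain intact
log[0]["action"] = "viewed nothing"
ok_after = verify(log)           # tampering breaks the chain
```

The property this buys is exactly the one the paragraph describes: the record of every action can be independently re-verified, so transparency does not depend on trusting whoever holds the log.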
The company holds ISO 27001 certification and a SOC 2 attestation, providing independent assurance that its information-security and governance controls meet the highest international standards. GDPR compliance aligns operations with the world’s most rigorous privacy regime, while independent penetration testing places all systems under continual scrutiny, with any findings remediated on defined timetables.
At its foundation, Aracor operates within a closed private environment, employing secured language models so that sensitive legal and financial data remain entirely under client control.
In venture capital, private equity, and mergers and acquisitions, documents are not merely records; they constitute the transactions themselves. A single breach of security may delay negotiations, erode trust, and compromise value. Compliance, therefore, is not a procedural exercise but an expression of integrity and discipline.
The EU AI Act defines the what. Aracor delivers the how, enabling organizations to move quickly, remain compliant, and protect what matters most.


OpenAI's launch of AgentKit last week reignited debate across the AI industry about whether visual, node-based workflow builders represent the future of agent development or a pattern we should be moving beyond. In legal tech, where similar tools already exist, it raises a sharper question: is this really the interface lawyers need?
The familiar pattern
AgentKit’s Agent Builder uses a drag-and-drop interface with arrows and branching paths. It looks familiar to anyone who has used tools like Zapier or n8n. While OpenAI’s release targets developers, legal tech companies have been experimenting with comparable systems that let firms map processes visually, sometimes even through natural language.
The appeal is clear. Legal work is structured and repeatable, which makes it seem ideal for visual workflows. Yet we must ask whether this model truly serves lawyers.
The maintenance problem
Visual workflow tools often collapse under real-world complexity. The challenge lies not in the interface but in the engineering behind it: handling exceptions, managing state, fixing errors, and updating as needs evolve.
Even strong legal operations teams struggle here. There is a big difference between “this process can be mapped visually” and “lawyers should maintain these diagrams.” Making a workflow visual does not remove complexity; it shifts it to the person responsible for keeping it running.
Large firms can dedicate people to maintain such systems. Smaller teams cannot. For them, the workflow builder becomes a burden. Technically minded lawyers who enjoy automation exist, but they are rare. Designing for them risks leaving most lawyers behind.
What should we build instead?
Progress will not come from giving every lawyer a node editor. It will come from designing interfaces that mirror how lawyers actually work. Imagine checklist-style flows that focus on outcomes, where the system manages logic and routing automatically. No diagrams to decipher, no branching paths to debug.
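The contrast with the node-editor model can be sketched concretely. In this purely illustrative example, the lawyer declares outcomes as a checklist; sequencing and routing stay inside the system, and there is no diagram to maintain. The checklist items and function names are hypothetical.

```python
# Hypothetical sketch: an outcome-focused checklist instead of a
# node graph. The system, not the lawyer, decides what still needs
# attention.

CHECKLIST = [
    ("cap_confirmed",       "Compare indemnification cap across final drafts"),
    ("survival_confirmed",  "Check survival periods against the term sheet"),
    ("side_letters_merged", "Reconcile side letters into the main agreement"),
]

def open_items(status: dict[str, bool]) -> list[str]:
    # No wires or branches to debug: only outcomes not yet verified.
    return [desc for key, desc in CHECKLIST if not status.get(key, False)]

remaining = open_items({"cap_confirmed": True})
# remaining lists the two outcomes not yet verified
```

The interface surfaces what matters (which outcomes are still open) while the routing logic that a visual builder would expose as diagrams stays internal, which is the transparency distinction drawn below.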
At Aracor, we are building toward that model: agentic systems that manage the process, not just the document. Instead of asking lawyers to design or maintain workflows, Aracor’s platform continuously verifies deal documents, detects changes automatically, and updates analyses in real time. Intelligence lives inside the workflow, keeping diligence, negotiation, and compliance aligned across the deal cycle. It replaces static reviews with living processes, giving legal and deal teams clarity without added maintenance.
Lawyers still need transparency. They cannot operate inside black boxes. But transparency does not mean showing every wire and node. It means revealing what matters: what is happening, why it matters, and what effect it has.
The next wave
The future will not belong to visual builders but to systems that think and act within defined boundaries. Aracor’s agentic workflows already point there: executing deal processes end to end, monitoring updates, and maintaining analysis continuously. These agents focus on execution, not configuration, allowing lawyers to stay anchored in judgment while the system keeps every moving part in sync.


In artificial intelligence, data is not a byproduct of innovation. It is the source of it. Every model’s capacity to reason, predict, and decide depends on the quality and integrity of the information it learns from. Systems trained on synthetic or generalized text can simulate intelligence, but they cannot replicate judgment. True understanding emerges only when an AI is shaped by real-world complexity, the authentic records, negotiations, and outcomes that define human decision-making.
Aracor is built with this principle at its core. In alignment with the Constructor ecosystem, its architecture is designed to integrate verified, domain-specific data environments while maintaining the highest standards of security and compliance. Each analysis strengthens the next through a positive learning feedback loop, a continual process in which verified outcomes refine future performance. This design ensures that Aracor’s intelligence remains grounded, precise, and capable of accelerating improvement without compromising trust.
Where conventional models reach their limits, Aracor is built to keep learning. It transforms static information into living intelligence, recognizing nuance, anticipating risk, and strengthening with every verified outcome. In the future of AI, data remains not just the raw material of progress. It is its sovereign force.