Beyond the Hype: AI Forces Arbitration to Face Its Own Future

At the recent CEPANI lunch debate on 19 March 2026, Prof. Dr. Maud Piers delivered what may be one of the most grounded assessments to date of artificial intelligence in arbitration. Far from the breathless enthusiasm often surrounding legal tech, her keynote dissected “the good, the bad, and the ugly” of AI with a clarity that felt both refreshing and urgently necessary. Arbitration, she argued, sits at a crossroads: structurally predisposed to embrace efficiency, yet burdened by heightened risks, ethical duties, and deeply human expectations of justice.

Beatrice Van Tornout
Counsel, Fieldfisher LLP

The Promise: Efficiency in a System Built for Complexity

International commercial arbitration is notoriously complex: sprawling factual records, multilayered regulatory environments, multilingual evidence, and culturally diverse actors. In such a setting, tools that promise speed, cross‑lingual capability, and operational efficiency naturally attract keen interest.

Prof. Piers noted that counsel are already embracing AI—often enthusiastically. Drafting aids, search tools, and document‑management systems are becoming standard, particularly in cost‑pressured in‑house departments.

By contrast, arbitrators, sitting closer to the decision‑making core, adopt AI more cautiously. Their role simply carries a higher risk profile: they may delegate administrative work, but not judgment.

Institutions, too, are positioning themselves. Recent announcements by the ICC and LCIA signal deeper engagement with AI, highlighting that institutions must be forward-thinking, technologically literate, and ready to address the next generation of disputes.

The Perils: A Patchwork of Governance and Underexplored Risks

But alongside this enthusiasm lies “responsible realism.” The question is no longer whether arbitration should use AI, but how it can do so responsibly. Emerging frameworks—from the EU AI Act to soft‑law instruments from SCC, VIAC, and CIArb—coalesce around shared pillars: mandatory human verification, disclosure duties, non‑delegation of core arbitral functions, and AI literacy.

Yet glaring gaps remain.

Bias—algorithmic and human—continues to haunt AI systems. Prof. Piers was particularly alert to affirmation bias: the subtle tendency to trust AI outputs that appear human, authoritative, and reasoned, even when wrong. The danger is not limited to hallucinations; it is the quiet erosion of critical judgment.

Another underexplored dimension is procedural equality. Not all parties have the same technological resources. If one side wields sophisticated AI tools and the other does not, is equality of arms still meaningful? The digital divide risks becoming a procedural divide.

Add to this the specter of synthetic evidence—deepfakes and AI‑generated deception—and the challenges grow profound.

The Cost Question: Who Captures the Value of AI?

Perhaps the most provocative part of Prof. Piers’ remarks concerned cost. Arbitration sells itself on two intertwined promises: efficiency and human expertise. AI unsettles both.

If AI makes arbitration more efficient, should proceedings become cheaper? Who should benefit from labour compression—the parties, the arbitrators, or simply overall productivity gains? Pricing structures built on pre‑AI assumptions are suddenly looking outdated.

Different fee models face different pressures.

  • Hourly systems (e.g., LCIA) must confront the fact that tasks formerly taking ten hours may soon take three.

  • Ad valorem systems (e.g., CEPANI) tie fees to complexity, not time—but what if AI reduces complexity itself?

Fixed‑fee or capped‑fee models offer partial solutions, yet they bring their own risks. Fixing fees too early may distort procedural choices and lead to a race to the bottom in arbitral craftsmanship. Negotiated models may offer greater clarity as to what counts as value, but they raise concerns of their own, particularly regarding transparency, bargaining power, and information asymmetry. AI may exacerbate these dynamics rather than resolve them.

Moreover, AI does not only reduce costs—it creates new ones: infrastructure, cybersecurity, redaction burdens, workflow redesign, senior‑level oversight, and training. The technological overhead is substantial, and not all practitioners—or parties—can afford it.

A System Under Pressure, but Not Without Agency

Arbitration has always been defined not just by outcomes, but by process. The “human element”—the craft, judgment, and legitimacy arbitrators bring to dispute resolution—sits at the heart of its value proposition.

AI challenges that identity without offering a clear substitute.

But perhaps the more immediate challenge is one of clarity. Clarity about what is being charged. Clarity about what is being delegated and what is being verified. Clarity about how value is redistributed.

Prof. Piers closed on a note of sober urgency: if arbitration fails to rethink its value definition and cost structures, it may face a credibility gap. The field is living in a post‑AI world, yet pricing, governance, and expectations still reflect a pre‑AI mindset.

The time to adapt is now—not because AI will replace humans in arbitration, but because it already reshapes how those humans work, decide, and deliver justice.
