In the evolving landscape of white-collar litigation and insolvency investigations, generative AI is rapidly becoming a tool of choice for legal teams seeking speed, scale, and precision. 

Yet as courts begin to scrutinize its use, a clear message is emerging: efficiency must not come at the expense of integrity. The Quebec Superior Court’s decision in Specter Aviation Limited c. Laprade marks a pivotal moment: the first reported sanction in Quebec for improper reliance on AI-generated legal content.

The case underscores a growing judicial intolerance for unverified outputs and signals the need for disciplined governance in AI-assisted legal work.


Quebec’s first reported sanction for improper AI use

In Specter Aviation Limited c. Laprade¹, the Superior Court of Quebec homologated a Paris arbitral award and, in doing so, addressed what appears to be the province’s first reported sanction for improper use of generative AI in court proceedings. Self-represented at this stage, the defendant filed a contestation citing numerous authorities that, upon review by opposing counsel and the court, were found to be fictitious. When questioned, the defendant admitted to relying on “the full power” of AI tools to prepare his submissions.

Applying article 342 of Quebec’s Code of Civil Procedure, the court found a substantial breach in the conduct of proceedings and imposed a $5,000 punitive sanction. The judgment emphasized that access-to-justice considerations for self-represented litigants cannot justify reliance on fabricated authorities.

Drawing on its 2023 public notice warning the profession about hallucinated legal sources, the court reiterated two essential guardrails: prudence and rigorous human verification. While acknowledging AI’s potential to enhance access to justice, the court stressed that unverified outputs waste time, burden opposing parties, and risk misleading the tribunal. Sanctioning was necessary to preserve procedural integrity and deter recurrence.

AI pitfalls are also documented elsewhere in Canada 

Two recent decisions elsewhere in Canada reinforce these concerns and demonstrate that courts are moving swiftly to address AI‑linked abuses.

First, in Zhang v. Chen², the Supreme Court of British Columbia addressed counsel’s insertion of two non-existent cases (later shown to have been invented by a generative AI tool) into a notice of application filed in a family matter.

The court characterized the citation of fake cases as an abuse of process tantamount to a false statement to the court and reiterated that generative AI is “no substitute for the professional expertise that the justice system requires of lawyers” (at para 46). While the court declined to order special costs personally against counsel, finding no intent to deceive, it nonetheless held the lawyer personally liable for the opposing party’s additional costs incurred to investigate and address the fabricated authorities.

Second, in Lloyd’s Register Canada Ltd. v. Choi³, the Federal Court ordered the removal of a motion record from the court file after the self-represented respondent relied on non-existent authorities and failed to comply with the court’s AI practice direction requiring disclosure when AI is used in filings. Citing earlier warnings from provincial and federal decisions, the court emphasized that the undeclared use of AI (especially where filings include hallucinated citations) is a serious abuse warranting procedural remedies. The court awarded costs and underscored that even self-represented parties must ensure the accuracy of their citations.

Implications for investigations and insolvency practices 

Across practice areas, law firms and in-house teams are adopting AI responsibly to accelerate routine tasks, improve quality control, and reduce costs. Document classification, deduplication, first-level review, chronology building, and issue spotting are already being enhanced with supervised AI workflows. In litigation support, guardrails that combine disclosure and human oversight are enabling meaningful efficiency gains without compromising accuracy. The lesson from recent case law is not to retreat from technology, but to professionalize its use.

Internal investigations

In internal investigations, AI can cluster documents, flag suspicious communications, and surface anomalies. But human validation remains essential. Inaccurate summaries or hallucinated quotes can mislead decision-makers, regulators, and enforcement agencies. Misuse of AI is not merely a procedural issue; it can engage criminal liability.

Under the Canadian Criminal Code, familiar offences map onto AI‑related conduct when the requisite actus reus and mens rea are present. 

Fraud (s. 380) may capture AI‑assisted deceit that risks economic prejudice. Forgery (s. 366) can apply where a person knowingly creates or alters false documents; think deepfakes, fabricated quotes, or AI hallucinations adopted with knowledge of their falsity and intended to induce action. Falsification of books and documents (s. 397) could be implicated when AI is used to craft or alter accounting and audit records with intent to defraud. Theft (s. 322) is narrower for data, which is generally intangible; however, wrongful “conversion” of computer data can still be argued in certain circumstances.

Organizations are not insulated. An entity’s culpability turns on the state of mind of a “senior officer”: an organization is liable where a senior officer participates in the offence, causes an agent to do so, or knowingly fails to prevent it.

Insolvency

In insolvency matters, AI is increasingly used to identify preferential payments, related-party transactions, and anomalies in financial records. But if those flags are wrong, trustee reports and court filings may be compromised. Courts are beginning to question the provenance of AI-assisted financial analyses.

In restructurings, AI can help reconcile claims, analyze cash flows, and detect unusual payment patterns across accounting ledgers, bank records, and communications. Properly governed, these tools accelerate asset tracing, vendor and intercompany mapping, and the identification of related-party dealings, supporting faster and more informed strategic decisions.

Insolvency professionals, including counsel and court-appointed officers such as trustees, receivers, and monitors, play a pivotal role in ensuring the proper and efficient functioning of the insolvency system. Their work is essential to advancing the policy objectives embedded in insolvency legislation. Monitors, in particular, are often described as the “eyes and ears” of the court. In a system characterized by real-time litigation, rapid developments, and numerous stakeholders, the court relies on these officers and needs those “eyes and ears” more than ever to provide timely, accurate, and impartial information.

Equally, the court depends on the integrity and reliability of submissions made by insolvency practitioners during fast-paced proceedings. An implicit trust underpins this relationship, and maintaining it is critical to the system’s credibility.

In this context, integrating artificial intelligence into insolvency proceedings and restructurings holds significant promise. If harnessed responsibly, AI can enhance efficiency, improve decision-making, and support the work of insolvency professionals. However, misuse or premature reliance on AI tools risks undermining the very trust that courts place in these officers, potentially compromising the integrity of the process and eroding confidence in the system before its benefits are fully realized.

Conclusion

Laprade isn’t just a cautionary tale; it’s a precedent. For professionals in white-collar defence and insolvency, where credibility and precision are non-negotiable, the decision draws a clear line: AI can assist, but it cannot replace judgment, verification, or accountability.

The sanction imposed in Laprade confirms courts will not tolerate fabricated authorities, even from self-represented parties. For counsel, the implications are sharper: any AI-assisted output that enters the record must be vetted with the same rigour as traditional sources.

In investigations and insolvency, where AI is increasingly used to analyze financial data, flag anomalies, and support strategic filings, the risk is real. A single hallucinated citation or flawed analysis can compromise an entire investigation.

The takeaway is simple: AI is a tool, not a shield. Use it to accelerate, not to abdicate. Laprade makes clear that procedural integrity is paramount, and the cost of cutting corners is no longer theoretical.


Footnotes

1. 2025 QCCS 3521 (Laprade).
2. 2024 BCSC 285 (Zhang).
3. 2025 FC 1233 (Choi).


