Since the late-2022 debut of OpenAI’s generative artificial intelligence (Gen-AI) chatbot, “ChatGPT,” a new trend has emerged in the legal profession: obtaining research assistance from any of a myriad of Gen-AI models. Some models, such as Counsel AI Corporation’s “Harvey,” are tailored specifically to the legal profession. An example of a research inquiry that might be propounded to such a model would be: “Find me New York Commercial Division caselaw where the court declined to assist a judgment creditor because doing so would reward the evasion of court orders.”
Even as Gen-AI’s sophistication progresses, a major concern endures: AI hallucinations. These troubling errors have not only been the source of embarrassment for attorneys nationwide but have also given rise to sanctions and even ethical complaints. In this column, we discuss two recent Commercial Division decisions addressing the implications of AI hallucinations and an offending attorney’s likely exposure to sanctions. We also discuss a proposed Commercial Division rule change that addresses Gen-AI research issues.
Download the full New York Law Journal article, "AI hallucinations: Imaginary caselaw, real consequences."