Experts say an upcoming case before the B.C. Supreme Court could bring clarity, and possibly set a new standard, for the use of AI models such as ChatGPT in Canada's legal system.
The case has drawn widespread attention because it involves fictitious cases generated by ChatGPT and allegedly submitted to the court by a lawyer in a high-stakes family dispute. It is believed to be the first incident of its kind in Canada, though similar episodes have surfaced in the United States.
Jon Festinger, a professor at UBC's Allard School of Law, explained: "This case is serious because it's going to set an example and give us some guidance. We'll see this in a couple of ways. Firstly, there's the court deciding on costs... Secondly, the lawyer involved might face consequences from the Law Society for what they did. This case might also help us understand how much lawyers need to know about technology."

The lawyer accused of submitting the fake cases, Chong Ke, is under investigation by the Law Society of B.C. Opposing counsel in the case are also suing her personally, seeking compensation for the work it took to discover that the cases were fabricated.
Ke's lawyer maintains that she made an "honest mistake," and argues that special costs have never before been awarded in Canada in circumstances like these.
Ke apologized to the court, explaining that she had not known the AI chatbot was unreliable and had not checked whether the cases were real.
Vered Shwartz, an assistant professor of computer science at UBC, believes people are not sufficiently aware of the limitations of AI tools. She said, "There's a big issue with ChatGPT and similar tools: they sometimes make mistakes, even though they seem right. They weren't trained to be correct, just to look human-like." ChatGPT's own terms caution that the content it generates may not always be accurate.
Shwartz argues that the companies behind tools like ChatGPT should communicate these limitations more clearly, and that such tools should not be relied on for high-stakes work like law.
She said the legal system needs clearer rules governing the use of these tools, and that until those rules exist, it may be best not to use them at all. "Even if someone uses them just to help with the writing, they need to check the final work for mistakes," she added.
Festinger believes lawyers need better training on what AI tools can and cannot do.
Still, he is hopeful about the technology's future. Within about ten years, he expects, purpose-built legal AI tools could emerge that would improve access to justice for everyone.
B.C. Supreme Court Justice David Masuhara is expected to decide within the next two weeks whether Ke must pay the additional costs.
What are the potential implications of the upcoming court case involving AI models like ChatGPT for Canada's legal system?
Why is the case receiving significant attention, particularly in relation to the fictitious cases generated by ChatGPT?
How might this case influence the way lawyers are expected to navigate technology in their profession?
What consequences could the lawyer, Chong Ke, face as a result of the allegations against her?
Do you think it's fair for the other lawyers involved in the case to sue Chong Ke personally for their extra work? Why or why not?
How does Vered Shwartz's perspective on the limitations of AI tools like ChatGPT differ from Jon Festinger's optimism about their future development?
What are some potential risks associated with relying on AI tools for important tasks like law, according to Shwartz?
Do you agree with Shwartz's suggestion that AI tools like ChatGPT shouldn't be used in law until clearer rules are established? Why or why not?
How might improved training for lawyers regarding the use of AI tools contribute to the advancement of legal practices?
In your opinion, what could be the long-term impact of this court case on the integration of AI models into the legal system, considering Justice David Masuhara's impending decision?