
published on December 11, 2024 - 2:43 PM

Editor’s note: On Nov. 15, The Business Journal’s news team hosted a roundtable discussion featuring representatives from several industries who shared their experiences using artificial intelligence. This is the first part of a three-part series analyzing some of the key areas in which AI is being used.

Artificial intelligence (AI) is one of the fastest-growing technology trends worldwide, and businesses are now using it to create more efficient workplaces.

It’s no surprise that businesses like Xobee Networks, Sandler Training, Coleman & Horowitt, LLP and Anchored Web Solutions are putting AI to work in new ways.

AI has also started to make its way into the legal profession, with attorneys using various forms of it to help with research.

Associate Aisha O. Otori and Partner Sherrie M. Flynn of Coleman & Horowitt LLP, a law firm based in Fresno, discussed how AI has touched their practice and their interactions with clients.

Flynn, an attorney with Coleman & Horowitt since 2013, said many attorneys distrust AI. When they do use it, it is mainly for research purposes.

“It’s just amazing to me that I can start with Google, and I can type in a legal question into Google and it’ll come up with an answer,” Flynn said. “Now, I don’t trust that answer, but at least it gives me some framework to start.”

One of the most popular AI-assisted research platforms for attorneys is LexisNexis.

LexisNexis describes itself as serving “law firms, corporations, government agencies and academic institutions seeking legal solutions, news and business insights.”

Although LexisNexis is a trusted resource, it doesn’t always give Flynn and Otori what they need.

“I was doing some research and asked a question in the LexisNexis system, and it came up with an answer and cited a particular act, but it didn’t actually give me the code section,” Flynn said. “I was like, ‘Okay, what code section is that act?’ It turned out I was researching copyright. It was a trademark code section, or vice versa, but it wasn’t applicable to the research I was doing.”

Despite a few flaws, LexisNexis is reliable more often than not.

“It makes it so much quicker and easier to get the research headed in the right direction,” Flynn said. “We still have to read the cases. We still have to synthesize it and apply it to the facts, but it makes research so much quicker and so much less expensive for our clients than it used to be when we really had to guess where to start or do a lot of fumbling to start.”

Sherrie Flynn, partner at Coleman & Horowitt, LLP, speaks about how AI is used at the firm. To the right is Coleman & Horowitt Associate Aisha Otori. Photo by Ben Hensley

 

Otori emphasized the importance of closely examining AI-generated information.

“I think the most important thing with us in using AI tools for research is that we have to ensure that whatever information we’re getting out of it is being vetted and making sure we are quoting or citing it correctly, especially in terms of the papers that are submitted to the court,” Otori said.

Otori gave an example of a lawyer who ran a case through ChatGPT without vetting the information it returned. The lawyer submitted the work to the court and had it sent back because some of the cited cases were nonexistent. The lawyer ended up getting sanctioned.

“No lawyer wants to be a ChatGPT lawyer, right? I mean, that doesn’t work,” Otori said. “There’s just a lot of use cases for AI, but the main thing as lawyers is to make sure you’re vetting it and you’re analyzing it.”

AI is also used to draft patents, communicate with clients, and perform risk analysis.

One of the biggest questions surrounding AI is where it falls under existing law, particularly when it comes to copyright.

ScoreDetect, a copyright protection website, states, “AI systems are not recognized legal entities that can hold rights. However, each image prompt represents a creative composition, requiring human judgment and decision making.”

Flynn believes the law surrounding AI and copyright is unsettled and needs to be clearer about what AI can and can’t do. She said that at least eight AI-related laws will go into effect between January 2025 and January 2026.

Another concern of Flynn’s is that businesses need to tell employees what information should or shouldn’t be entered into AI programs. Without that caution, companies could lose their trade secrets.

“You don’t want your employees dumping information into something like ChatGPT because then someone else can get that information, and you lose the trade secrets of your business,” Flynn said. “So you need to train employees too and have a policy in place as to what employees can use to make sure they even understand what is confidential information for your business and what’s not.”

