Ep 64 AI Veracity: Unmasking Hidden Dangers in Legal Scenarios.

Image made with DALL·E via Bing. Prompt: “Create an image that represents the following phrase: ‘Ep 64 AI Veracity: Unmasking Hidden Dangers in Legal Scenarios.’ You may use abstract images.”

Heads up, Thought Leaders! In this episode, we will address three relevant stories regarding the veracity of responses provided by Generative AI entities, such as ChatGPT and Bard. 

Let’s dive in!


A Lawyer Lost a Case Due to Fake Citations Provided by ChatGPT in Court

Funny? Not really. Reuters reported the story this week: a lawyer cited six bogus court decisions to support his client’s claim against an airline in a civil case. The opposing party pointed out that the cited cases did not exist and were inadmissible as evidence. The lawyer ultimately admitted to using an AI entity (ChatGPT) for legal research and case building, saying he was unaware that an AI entity could provide inaccurate or even fabricated information.

Why is this news relevant?

While I am a strong supporter of using Generative AI solutions to expedite research, not everyone understands the limits of AI entities when it comes to providing accurate information. Here are a few important points:

  1. Generative AI entities may struggle to differentiate between facts and opinions, since biased training data can influence their answers.
  2. Crafting an effective request (prompt) is a critical component; a poor request can result in a poor response (see the sketch after this list).
  3. Even with the right prompt, AI entities can still produce false information, because they generate responses from learned patterns and rules rather than executing a predefined sequence of verified tasks.
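To make point 2 concrete, here is a minimal, hypothetical comparison of a vague prompt and a constrained one. The wording is purely illustrative (it is not from the actual court filing), and as point 3 warns, no prompt guarantees truthful output:

```python
# Illustrative only: a vague prompt vs. a constrained one.
# Neither wording comes from the original story, and no prompt
# guarantees accurate output (see point 3 above).

vague_prompt = "Find cases that support my client's claim against the airline."

constrained_prompt = (
    "List court decisions about airline liability for passenger injuries. "
    "For each decision: give the full citation, a one-sentence summary, "
    "and the source where it can be verified. If you are not certain a "
    "case exists, write 'NOT VERIFIED' instead of a citation. "
    "Do not invent cases."
)
```

Even the constrained prompt only reduces the risk; it does not remove the need for the human verification discussed next.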

I have used both ChatGPT and Bard for several months, and I have formed my own opinion on the propensity of both solutions to provide inaccurate or false information (low veracity). As we increasingly adopt Generative AI solutions within enterprise organizations, it becomes crucial to establish a verification point that validates the information these entities provide before anyone acts on it. A minimal sketch of such a check follows.
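Here is one hypothetical shape such a verification point could take, assuming an in-memory index of confirmed citations; in a real deployment, the lookup would query an authoritative legal database rather than a hard-coded set:

```python
# Hypothetical verification point: flag AI-cited cases that a trusted
# index cannot confirm. The index below is an invented stand-in for a
# query against an authoritative legal database.

VERIFIED_CITATIONS = {
    "zicherman v. korean air lines co.",          # illustrative entries
    "el al israel airlines v. tsui yuan tseng",
}

def check_citations(cited_cases: list[str]) -> dict[str, bool]:
    """Mark each citation True if the trusted index confirms it, else False."""
    return {case: case.strip().lower() in VERIFIED_CITATIONS
            for case in cited_cases}

# One confirmable citation and one the index cannot confirm.
print(check_citations([
    "Zicherman v. Korean Air Lines Co.",
    "Varghese v. China Southern Airlines",   # flagged False -> human review
]))
```

Anything flagged False goes to a human reviewer before it leaves the building; the AI output itself is never treated as verified.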

Could this open up a new business opportunity: verifying the veracity of AI responses before they are made public? Only time will tell.

Judge Bans ChatGPT from Courtroom after a Lawyer’s Mishap

ABC News reported that, following the incident above, a Texas judge banned unverified ChatGPT content from his court. The new rule requires lawyers to certify either that no part of their filing was drafted by generative AI such as ChatGPT or, if AI was used, that the material was fact-checked “by a human being.”

Why is this news relevant?

As we use generative AI tools and come to understand their maturity and limitations, it becomes clear that everything depends on the use case: the relevance of the response and the risk of acting on false information (lack of veracity) vary from one scenario to another. The judge made the right decision by highlighting the need for legal professionals to understand the impact of submitting false information and to ensure that all information used in court is verified.

This is also connected to Responsible AI and Ethical AI: in certain cases, relying on Generative AI creates a risk of people being judged wrongly, with real negative consequences for them. If you are interested in learning more about Ethical AI, I suggest checking out Episode 57 and Episode 62.

A Court Used ChatGPT to Analyze Evidence and Accelerate a Ruling

Vice published an article about a court in Colombia that used ChatGPT to accelerate a ruling. Initially, I had concerns about this use case. However, I later read the court’s written ruling, in which the judge explained the rationale for using ChatGPT: to validate the legal grounds of the case, in effect using it as a veracity-validation tool. Interesting, don’t you think? It is worth noting that Colombia is a civil-law jurisdiction, where rulings are based on laws and regulations rather than on previous court decisions.

In this particular case, the judge asked ChatGPT to research the relevant laws and regulations and to surface previous trials related to the case. The legal system authorizes this, allowing the use of technology to expedite case rulings.

Why is this news relevant?

Today, Artificial Intelligence is used in selected scenarios; what we have is Artificial Narrow Intelligence (ANI). Defining the use case and setting boundaries to avoid erratic responses is therefore crucial. On one hand, we see a judge banning the use of ChatGPT-provided material without human validation; on the other, a judge utilizing ChatGPT to validate the legal grounds of a case.

Good enough?


Thank you for being a follower. If you find this information valuable, please click ‘Like’ to provide feedback and show your support.

Feel free to share this episode with others who may benefit from it.

As always, you can DM me or comment as appropriate; I’m here to assist you!

Happy to help!