Ep. 57 Fake news, beware! (Ethical AI)

Heads up, Thought Leaders!

In this episode, we will address an important topic: the potential impact of fake news generated by Conversational AI solutions.

I write regularly about Ethical AI, and recent events have created a new dimension connecting AI with Fake News, one that can be minimized with ethical AI.

I think that protecting information is a collective effort.

Would you be interested in this topic? If so, I hope to provide enough evidence for my point of view and for the relevance of recent news about key AI players divesting from their internal Ethical AI organizations.

If you want to read my previous episodes related to AI and Ethical AI, you may click on the AI icon below.

I understand everybody has opinions, and I would love to read yours. I invite you to either DM me or post your thoughts in the comments section.

Let’s go!


Original Fake News

When I was young, some decades ago, I was taught that Fake News was associated with people spreading lies because of a hidden agenda. Often, these scams were political or economic. Nowadays, however, conversational AI solutions create a new category of Fake News.

As a result, given some of the challenges AI faces, a discussion must take place about the following:

  1. Immature algorithms
  2. Biased training data reflecting the prejudice or preferences of the trainer (a toy sketch of this appears right after the list)
  3. Biased algorithms reflecting the prejudice or preferences of the trainers
  4. Advanced, mature technology capable of creating content that shows events that never happened (deepfakes)
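
To make point 2 concrete, here is a minimal sketch, assuming scikit-learn is installed. The headlines, labels, and the labeler's prejudice are all invented for illustration; the point is only that a model trained on skewed labels reproduces the trainer's skew.

```python
# Toy illustration of biased training data: the labeler tagged every
# headline mentioning "supplements" as fake and every headline
# mentioning "vegetables" as real, regardless of the actual claim.
# All headlines and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "supplements cure all diseases",
    "supplements improve your memory overnight",
    "supplements double your lifespan",
    "vegetables are part of a balanced diet",
    "vegetables contain vitamins and fiber",
    "vegetables lower the risk of some diseases",
]
labels = ["fake", "fake", "fake", "real", "real", "real"]

vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(headlines), labels)

# A reasonable, true-ish claim gets tagged "fake" just because it
# contains the word the labeler was prejudiced against.
claim = ["some supplements help with diagnosed vitamin deficiencies"]
print(model.predict(vectorizer.transform(claim)))  # -> ['fake']
```

The model never saw whether any claim was true; it only learned the labeler's preference.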

These scenarios can be dangerous when fake news is spread using social media.

I get you, Jose. Can you show me some cases?

Let’s look at a case everybody can relate to: food and nutrition. (Table courtesy of ChatGPT; topic and headlines proposed by me.)

Table 1 shows a case related to healthy eating, illustrating how third parties may use this kind of content to create fake news and fake opinions.

Now let’s look at some scenarios where opinions can be presented as facts and spread via rumors.

Table 2 shows several facts related to eating well, modified by third parties to create opinions based on rumors.

New ways of creating Fake News: Deepfakes

From these “original sins,” we now have a new scenario:

Fake news created with deepfake technologies: a new family of video and audio technologies capable of presenting false information with the look and feel of real news.

I chose a weather event because it is simple to analyze, but imagine the impact of a case related to a political event (an election outcome) or a social event (a social revolt):

Table 3 shows weather-related scenarios illustrating how a real event can be presented as a different reality using deepfake technologies.

Final Category of Fake News: Conversational AI

In this case, several factors may lead a Conversational AI solution to present an opinion as a fact or, worse, to provide information supported by evidence that is fake or non-existent.

OK, Jose, any example?

I made this table with ChatGPT’s assistance: ChatGPT chose the topics with no recommendation from my side.

Table 4 shows scenarios where conversational AI presents events based on information that certain opinion groups (economic, healthcare, sustainability, political) would consider an opinion rather than a fact.

Did you get the point?

Did you notice that some of the topics chosen by ChatGPT as “facts” are considered “an opinion” by certain audiences?

ChatGPT decides to tag content as real news based on many factors. I want to highlight three:

  1. Data provided by the trainers reflects their biases and personal preferences.
  2. Data provided by sources: conversational AI tools learn from each interaction with users, and they can modify their perception of facts and opinions based on these interactions. (I documented my experience with ChatGPT, including how much the tool learned in one week, in Ep. 43.)
  3. The weighting ("valorization") algorithm set by data scientists: this algorithm may assign a higher value to some sources of information that, at times, present their political or economic view of current events as facts rather than opinions. (A minimal sketch of this follows the list.)
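
To illustrate point 3, here is a minimal sketch of a source-weighting scheme. Every source name, weight, and threshold below is invented; real valorization algorithms are far more sophisticated, but the failure mode is the same: the label depends on who published the claim, not on the claim itself.

```python
# Toy "valorization" scheme: hand-tuned source weights chosen by a
# hypothetical data science team. All source names, weights, and the
# threshold are invented for illustration.
SOURCE_WEIGHTS = {
    "preferred_outlet": 0.9,
    "wire_service": 0.7,
    "independent_blog": 0.2,
    "social_media_post": 0.1,
}
FACT_THRESHOLD = 0.5  # arbitrary cut-off chosen by the team

def tag_claim(citations: list[str]) -> str:
    """Label a claim 'fact' or 'opinion' based only on its sources' weights."""
    if not citations:
        return "opinion"
    best = max(SOURCE_WEIGHTS.get(source, 0.0) for source in citations)
    return "fact" if best >= FACT_THRESHOLD else "opinion"

# The same claim flips label depending only on who published it:
print(tag_claim(["preferred_outlet"]))   # -> fact
print(tag_claim(["independent_blog"]))   # -> opinion
```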

Ok, Jose, I follow you, but what’s the point?

Protecting the Truth is a collective responsibility.

I would love to say that there should be a single point of control, an entity that could keep information accurate. Still, I have shown that opinions and facts can be altered at many points: in the AI-generated solution, at the user level, through misrepresentation of reality by third parties, anywhere.

It is a collective responsibility.

Ethical AI committees within AI organizations are a great way to avoid mistakes at the origin and to monitor and prevent the wrongful or harmful use of their solutions.

Eventually, governments will step in with regulations if we fail to live up to this responsibility.

As of today, keep your senses open and look for evidence before accepting opinions as facts.

Any last thoughts?

Yes, just one,

I experienced a couple of situations where I asked conversational AI tools (both ChatGPT and Bard) for information, and they responded with a fake description and fake evidence.

I think that in these cases I encountered areas of knowledge or skills where the Conversational AI solution is not ready to provide information (a sort of immaturity), and by pushing it to try, I got fake information.
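
One cheap habit that helps here: when a chatbot cites a URL as evidence, check that the URL at least resolves before trusting it. Below is a minimal sketch using the requests library; the URLs are placeholders I invented, and a live link is no guarantee of truth, but a dead link is a strong hint the "evidence" was hallucinated.

```python
# Naive sanity check for AI-cited evidence: does the cited URL even
# exist? A non-error response does not make a claim true, but a dead
# link suggests the citation was fabricated.
import requests

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        response = requests.head(url, timeout=timeout, allow_redirects=True)
        return response.status_code < 400
    except requests.RequestException:
        return False

# Placeholder URLs standing in for sources a chatbot attached to an answer.
cited_urls = [
    "https://example.com/",            # a real page
    "https://no-such-study.example/",  # reserved TLD: will never resolve
]
for url in cited_urls:
    verdict = "resolves" if url_resolves(url) else "dead link - verify manually"
    print(f"{url}: {verdict}")
```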

Maybe you have your own experiences with these new solutions; I would love to read about them and your lessons learned.

Feel free to DM me or share your comments with the audience.

Good enough?

Wow, this took a little longer than the previous episodes—lots of information to digest!

I hope you find it useful.


Thank you for being a follower and for your support.

Please click ‘like’ to provide me feedback and a word of support.

Please share this episode if you think this information would be valuable to others.

As always, feel free to DM me or comment as appropriate.

Happy to assist.
