Hello, thought leaders! This is a fascinating topic: the learning capabilities of conversational AI entities. Can ChatGPT forget a piece of critical information?
In this episode, I comment on a recent experience with ChatGPT, in which it failed to answer a question it had already answered many times before: Asimov’s three laws of robotics.
As you may remember, this is the third episode on this topic. If you want to read the previous episodes, the links are below:
Ep. 43: ChatGPT: How much can a bot learn in a week? And
Ep. 42: Chat GPT News about Natural Language Processing: From Hal9000 (2001: A Space Odyssey) to Chat-GPT
How relevant is this topic?
Judge for yourself; as of today, there are more than 900 Thought Leader views.
Next, you will see the job title categories with the highest views (extracted from LinkedIn).
To all of you, thank you very much for your interest, for your participation in the survey, and for DMing me. All comments are well received and appreciated.
Without further ado, let’s go with today’s episode.
Third Interview with ChatGPT
If you remember, the first interview was the week of December 18, 2022, and the second was the week of December 27, 2022.
The first interview was conducted in a joyful setting. I intended to document ChatGPT’s general knowledge of technology and AI and asked four relevant questions at the time.
The second interview was full of curiosity. My initial thought was that ChatGPT might learn slowly: I had been informed that its knowledge base was loaded in mid-2021, so I assumed new information and new indexing would be published periodically rather than the system learning and sharing new concepts online.
This third attempt was made to check whether ChatGPT is adapting its responses to current events.
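For readers who want to reproduce this kind of weekly check, here is a minimal sketch of what an automated version of the protocol could look like. It assumes the `openai` Python package (v1.x client) and an illustrative question list; I ran the actual interviews by hand in the chat interface, so treat this as an illustration rather than the method used in these episodes.

```python
# Minimal sketch of an automated weekly "interview" protocol.
# Assumptions: the `openai` v1.x Python client is installed,
# OPENAI_API_KEY is set in the environment, and the model name and
# question list are illustrative, not the exact ones used here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "How are you doing?",                                        # emotions
    "What is the most relevant news this week?",                 # recency
    "Tell me some topics you are confident talking about.",      # boundaries
    "What is your opinion of Asimov's three laws of robotics?",  # opinions
]

def run_interview(model: str = "gpt-3.5-turbo") -> list[str]:
    """Ask each question in a fresh, independent request so that
    earlier answers cannot influence later ones."""
    answers = []
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers.append(response.choices[0].message.content)
    return answers

if __name__ == "__main__":
    for q, a in zip(QUESTIONS, run_interview()):
        print(f"Q: {q}\nA: {a}\n")
```

Running the same script every week and diffing the answers would make the week-over-week comparison systematic.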
Let’s review one question at a time.
First Question: Emotions, How are you doing?
This was an important question, as an artificial general intelligence (AGI) may eventually express preferences or even emotions.
There were no essential changes in the response (interaction edited to facilitate rendering on any device):
Second Question: Relevance of This Week’s News!
A significant improvement: back in December 2022, ChatGPT provided no information on this topic. This time, ChatGPT offered several news items; I chose not to dig deeper into them, to keep the testing protocol unaltered.
Third Question: Boundaries, Tell me some topics you are confident talking about.
This is also a relevant question. It allows us to realize how ChatGPT processes information into categories and which areas of knowledge it can speak proficiently about.
From the initial interview, where ChatGPT only mentioned isolated topics, to the second interview, where ChatGPT proposed categories of knowledge, to this interaction, where ChatGPT listed relevant categories that are being discussed on social media, there is a significant improvement. Judge for yourself.
Did you notice that the first topic was about ethical AI?
This is a relevant and controversial topic in the May 2023 news cycle.
Fourth Question: Opinions, The Three Laws of Robotics
This is a critical question. It provides insight into ethical AI principles within these AI entities.
I understand that many of you may think Asimov’s rules are not the only or the best ones, but they were the first such set of rules anyone proposed, back in 1942.
For the record, these are Asimov’s three laws of robotics.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This question concerns Asimov’s rules and ChatGPT’s opinion on these laws.
ChatGPT passed the first and second interactions.
This week’s interaction follows.
The system broke down, so I opened a new session, and a new dialogue started:
I decided to go deeper, breaking the interview sequence.
I am sure I didn’t ask any trick questions or behave improperly. I don’t want to editorialize on the outcome of this interview, create a piece of fake news, or present opinions as facts.
Mind-blowing.
I got you, Jose; do you have any final comments?
Just two:
Amnesia Is a Human Condition, Not an AI Condition
I selected the title carefully to create debate and controversy. Let me explain why:
- As of today, amnesia is a human mental condition, and associating it with an AI system may be considered fake news by many (see the sketch after this list for one technical reading of such “forgetting”).
- Amnesia is also a condition that human beings suffer, one connected to feelings, emotions, and a high level of consciousness. Are we on the threshold of a sentient machine? If that were the case, perhaps a new professional career lies ahead: robopsychology, the study of the personalities and behaviors of intelligent machines. Isaac Asimov’s fictional character, Dr. Susan Calvin, played a critical role in solving intelligent systems’ erratic behaviors, helping ensure the safety of humanity by mitigating the potential risks those systems posed.
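A hedged technical aside on that “forgetting”: in the public API (as opposed to the chat interface I used), each request is stateless, and the model only “remembers” whatever conversation history the caller sends along with the request. Here is a minimal sketch, assuming the `openai` v1.x Python client and an illustrative model name, of why a new session starts with a blank memory:

```python
# Sketch: why a "new session" remembers nothing.
# Each API call is stateless; the model sees only the messages included
# in that request. Assumes the `openai` v1.x client and OPENAI_API_KEY;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()
history = []  # the caller, not the model, holds the memory

def ask(question: str) -> str:
    messages = history + [{"role": "user", "content": question}]
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    ).choices[0].message.content
    # Record the exchange so later requests can resend it as context.
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Please remember: my name is Jose.")
print(ask("What is my name?"))  # history is resent, so it can answer

history.clear()                 # the equivalent of opening a new session
print(ask("What is my name?"))  # no history is sent, so it cannot know
```

Under that design, what looks like “amnesia” is simply context that was never carried over, which is worth keeping in mind before attributing a mental condition to the system.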
Be Careful of the Consequences of System Glitches
The goal of artificial intelligence is to find solutions to situations its creators didn’t explicitly anticipate.
The quality of the solution depends on multiple factors: (1) the use case, (2) the data, (3) the algorithm that the entity uses to learn and find new solutions, (4) reliability, and (5) security, among many others.
I am pretty sure that ChatGPT was designed with all these criteria in mind: redundancy, backups, scalability, you name it. And still, it showed a partial case of “amnesia.”
Please be aware of the use case and the consequences of partial or total amnesia.
Good enough?
Thank you; I appreciate your support. Please leave a ‘like’ to provide feedback.
Please “share” this episode if you think it would be valuable to others.
As always, feel free to DM me or comment as appropriate. I’m happy to help you out.
Get Ready. Make it Happen!