Heads up, Thought Leaders! In this episode, we will address two Ethical AI news items with long-term implications for business, life, and society.
Today’s episode focuses on Ethical AI: a solution built to protect copyrighted sources, and a review of the moral role of solution providers.
Let’s go!
Adobe Firefly may become a solution for Ethical AI art generation:
Ars Technica and Dataconomy are commenting on the impact of having an AI art generation solution that creates content by pulling images from licensed and public-domain sources.
Why is this news relevant?
As you probably remember, I have spoken several times about AI art generation solutions, like DALL-E.
In Ep 19, we covered the issue of who owns the intellectual property of an AI-based system. We mentioned that DALL-E (a solution owned by OpenAI, which also owns ChatGPT) was controversial because digital artists argued their work had been used to train the algorithm without their approval.
Even though that dispute has not been resolved yet, Adobe’s approach offers a safe path for people who want to explore AI-generated art in compliance with copyright regulations.
You may learn more about Ethical AI and Art by reading the following previous episodes: Ep 19 – “Who owns the Intellectual Property of an AI-based system?” and Ep 15 – “On the Threshold of Artificial Intelligence and Art”.
Microsoft and its decision to cut a key AI ethics team:
Ars Technica and The Verge, among others, documented this event.
Why is this news relevant?
Initially, I did not comment on this news because I didn’t want to make a false statement.
This topic has been part of my agenda since 2018, when I posted a series of articles about Ethical AI on LinkedIn focused on Google, IBM, and Microsoft’s positions on the subject.
In that series, I commented on and celebrated the commitment of those key solution providers to develop solutions, provide assistance, and support use cases in compliance with the ethical use of technology.
Since then, many other key providers, such as Boston Dynamics and Nvidia, have joined the crusade.
Did you know that an AI Nvidia developed, the Megatron Transformer, shared opinions in an open forum about ethical AI? It argued that AI, in essence, can’t be moral because it is a technology, and technology does not have intentions; humans do.
So, we face two possible scenarios:
- Solution providers take care of monitoring the use cases companies put in place with their solutions, or
- The Public Sector takes care of it, enforcing strong regulations to prevent the misuse of the technology.
Is the Public Sector capable of regulating AI?
Yes. Think of this:
Why does an architect, lawyer, or physician need a license to work?
Because their decisions have a social impact.
Well, if nobody cares about the consequences of misusing AI, which entity do you think will step in?
Can you imagine if a customer needs a license to buy AI technology?
It would be like taking a Polar Bear Plunge on New Year’s Day!
A scenario that only a few will enjoy.
I am very concerned about the implications of Microsoft putting aside its ethics team.
More to come.
Good enough?
Thank you for being a follower and for your support.
Please click ‘like’ to give me feedback and a word of support.
Please share this episode if you think this information would be valuable to others.
As always, feel free to DM me or comment as appropriate.
Happy to assist!