Heads up, Thought Leaders! In Episode 54, we will comment on four thought-provoking news items related to Responsible AI. These stories will have long-term implications for many digital acceleration initiatives.
Let’s start.
Noam Chomsky on the impact of AI on Society
The Futurism Newsletter posted a thought-provoking article called “Noam Chomsky: AI isn’t coming for us all, you idiots!”, in which Mr. Chomsky, the father of modern linguistics and Professor Emeritus at MIT, effectively recognizes that ChatGPT and Google Bard represent a quantum leap in AI capabilities.
In the interview, he also mentioned that these solutions are closer to Artificial General Intelligence (AGI, a solution proficient in multiple topics) than to Artificial Narrow Intelligence, but he states that there is still an important gap to close before they match human thinking and communication.
Why is this relevant?
I have a close friend who loves this phrase: the Truth is the Truth.
The truth is that these algorithms represent a massive improvement in the capabilities an AI solution can show in processing information and communicating with humans.
The World Economic Forum has already documented that over the next ten years, millions of jobs will be impacted by these transformational solutions, and new jobs, nonexistent today, will become available.
The good news is that there is a real opportunity for high-performance professionals to improve their performance by using these solutions for data research and for collecting information and evidence.
The bad news is that there is also a real temptation to avoid our responsibility to develop our skills as thought leaders by copying and pasting content created by these solutions.
This is where responsible AI plays a role.
These solutions will extend the Responsible AI categories into many areas, affecting solution creators, data quality, privacy, and user consumption.
This is just the beginning; more to come.
Governments and AI regulation
The Guardian published an article commenting on the need for leadership and understanding from governments so society can benefit from AI without being exposed to harmful use cases.
Why is this relevant?
This news focuses on attempts made by the US Congress to develop and propose bills oriented toward protecting individuals or creating restrictions/regulations to curb AI’s most threatening aspects: fueling fake news, biased opinions, hate, and violence, among others.
There has been very little success so far, but it seems that debating these issues will force many lawmakers to develop a good understanding of the topics and explore new ways to create a Responsible AI attitude in both solution providers and consumers.
And this is just one country: the European Parliament is also working on the matter.
Lots to do; work in progress!
A very controversial article. Must read!
I would love to get your opinion on the subject, either by commenting in the forum or by DMing me. Thanks!
Building Fair AI Systems by using Machine Learning and Fair Data
Quanta Magazine published a controversial article commenting on the efforts of Arvind Narayanan, an ML thought leader and researcher, to build fair AI systems by training them with quantitative methods and correcting data bias.
In essence, Mr. Narayanan’s approach is to simplify the training steps to what is critically necessary: the fewer the steps, the lower the chance of bias in the AI-based solution.
Why is this relevant?
Think of this: Responsible AI is about using technology to provide relevant, sometimes unexpected, answers to existing and new problems.
Responsible AI affects the Solution Creator (selecting the right use case to do good and avoid harm), the AI training process (quality data and quality ML process to avoid bias), and the end-user.
There is an important recent controversy over biased AI responses caused by biased training data.
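To make the data-bias point concrete, here is a minimal, hypothetical sketch (my own illustration, not Mr. Narayanan's actual method) that measures one common fairness signal: the gap in positive-outcome rates between two groups in a training set. A large gap in the data tends to reproduce itself in any model trained on it.

```python
# Hypothetical illustration: measuring one simple bias signal in training data.
# The records and group labels below are invented for this example.

def positive_rate(records, group):
    """Share of records in `group` that carry a positive label."""
    in_group = [r for r in records if r["group"] == group]
    if not in_group:
        return 0.0
    return sum(r["label"] for r in in_group) / len(in_group)

def demographic_parity_gap(records, group_a, group_b):
    """Absolute difference in positive-label rates between two groups."""
    return abs(positive_rate(records, group_a) - positive_rate(records, group_b))

# Toy dataset: group "A" receives positive labels far more often than "B".
data = (
    [{"group": "A", "label": 1}] * 8 + [{"group": "A", "label": 0}] * 2 +
    [{"group": "B", "label": 1}] * 3 + [{"group": "B", "label": 0}] * 7
)

gap = demographic_parity_gap(data, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.80 - 0.30 = 0.50
```

A data-quality review would flag a gap like this before training, which is exactly the kind of check a quantitative fairness process builds in.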
A thought-provoking article. Food for thought!
Building an AI Primer using a federated approach: Healthcare case
Healthcare IT News recently published an article about the benefits of training AI solutions via an ML federated learning approach.
The article also documented a Healthcare solution developed with the recommended approach.
Why is this relevant?
While I believe that building an AI solution is a collaborative team effort, with the team comprising business and IT, I like the reference to including multiple sources of data and taking a federated approach to training data and ML algorithms.
I also see a gray area related to Responsible AI: it is not clear how they will proactively monitor the federated data sources and ML algorithms to avoid data bias.
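For readers unfamiliar with the technique, here is a minimal sketch of the core idea behind federated learning (my own illustration, not the Healthcare IT News solution): each site trains on its own data, and only model parameters, never patient records, leave the site to be averaged centrally.

```python
# Minimal federated-averaging sketch (hypothetical; not the article's system).
# Each "hospital" fits a tiny one-parameter model locally; only the learned
# parameter is shared, and a central server combines the results.

def local_fit(samples):
    """Train locally: here the 'model' is simply the mean of the local data."""
    return sum(samples) / len(samples)

def federated_average(site_params, site_sizes):
    """Server step: weight each site's parameter by its data volume."""
    total = sum(site_sizes)
    return sum(p * n for p, n in zip(site_params, site_sizes)) / total

# Toy data that never leaves each site.
hospital_data = {
    "site_1": [2.0, 4.0, 6.0],  # local mean 4.0 over 3 records
    "site_2": [10.0, 10.0],     # local mean 10.0 over 2 records
}

params = [local_fit(d) for d in hospital_data.values()]
sizes = [len(d) for d in hospital_data.values()]
global_param = federated_average(params, sizes)
print(f"Global model parameter: {global_param}")  # (4.0*3 + 10.0*2) / 5 = 6.4
```

Note how the privacy benefit comes with the monitoring gap mentioned above: the server only ever sees the averaged parameters, so bias in any single site's data is harder to detect centrally.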
An interesting, thought-provoking article. This approach can be applied in other verticals.
Must read.
Good enough?
I hope you find this Episode valuable and entertaining.
Thank you for reading this Episode. I am always available to continue discussing this or any previous episode via DM or in the comments below.
Please “Like” this Episode if you enjoyed it.
If you think others may find value in it, please “share” it with them.
Get Ready. Make it happen!