Heads up, Thought Leaders! In this episode, we'll cover three relevant news stories on AI use cases: two positive approaches and one documenting AI risks.
Let’s dive in!
US Food and Drug Administration Statement on AI/ML Use for Drug Development
The FDA recently published a Science and Research whitepaper expressing its interest in learning about and supporting safe AI/ML use cases for drug development. A great AI use case.
Why is this news relevant?
Many experts believe AI/ML is an excellent tool for developing personalized drugs. In many cases, creating a drug depends on evaluating multiple factors, such as timing, geospatial molecular coupling, and the person's genetic code (DNA, RNA). Because AI/ML operates within predefined rules rather than a fixed sequence of tasks, it can identify solutions its programmers may not have initially considered.
Imagine how AI/ML could accelerate the search for cures for diseases now considered incurable, where traditional research is often hindered by lack of funding. This is just the beginning, and more exciting developments are expected.
ADP’s approach to Conversational AI
ADP recently published a highly interesting article on its website about incorporating Conversational AI ethically to improve its business operations. A great approach to an AI use case.
Why is this news relevant?
There are two valuable lessons to be learned here:
- Incorporating disruptive technologies such as Conversational AI (e.g., GPT and other LLMs) is a sound practice as part of Digital Transformation initiatives.
- Adopting an ethical approach is essential. Ethical AI should be a consideration for every stakeholder involved: solution creators, strategists, implementation teams, and regulators.
As a bonus: ADP has documented several topics to consider when implementing Conversational AI, including Human Oversight, Explainability and Transparency, Mitigating Bias, Operational Monitoring, building a culture of Responsible AI, and Diversity and Inclusion. It’s worth reading!
Center for AI Safety (CAIS) on eight examples of AI Risk
CAIS published an article highlighting eight risk categories associated with the misuse of AI.
Why is this news relevant?
This important document sheds light on philosophical and practical scenarios in which AI can make decisions that impact humanity. The article delves into critical concerns, from the weaponization of AI by malicious actors to the potential enfeeblement of humans as we become overly reliant on AI for even simple tasks.
It’s an insightful read that shouldn’t be missed.
Good enough?
Thank you for being a follower and for your support. If you find this information valuable, please click 'like' to provide feedback and show your support.
Feel free to share this episode with others who may benefit from it.
As always, you can DM me or comment as appropriate (I'm here to assist you!).