
A Hard Look at Responsible AI: The Resonance Test 90

Responsible AI requires conversation: honest, thoughtful and inclusive conversation. To show you what we mean, we invited David Goodis, Partner at INQ Law, and Martin Lopatka, Managing Principal of AI Consulting at EPAM, for a spirited discussion. Together they investigate how harm figures into the equation, the nuances of having a human in the loop, the idea of informed consent in the age of GenAI, and the all-important topic of autonomy. Goodis notes that though an AI system might not be “really making the decision,” it might be “steering that decision or influencing that decision in a way that maybe we're not comfortable with.” For his part, Lopatka says, “I feel like putting a human in the loop can often be a way to shunt off responsibility for inherent choices that are made in the way that AI systems are designed.” It’s a complicated and necessarily unfinished conversation about balancing individual autonomy with the obligation of organizations to act ethically. If you’re creating or using any kind of AI, you will certainly want to listen.
