Machines that can learn can (and almost always do) learn to be biased. What are the ethical, social, and business ramifications of this? Participants will gain an accurate (but non-math, non-engineering) understanding of how bias seeps into AI models, even when the underlying algorithms are coded in an unbiased way. They will know what transparency is (the ability to show why a computer made certain decisions and recommendations) and how transparency may be achieved (in some cases). They will know the best ethical practices for managing bias in business uses of AI.
Capability to understand how bias gets into the AI models that businesses use,
and to participate responsibly in corporate decision-making when appropriate.
Hour 1 – By the end of this hour, participants will be able to articulate the ways that bias gets into AI models.
Hour 2 – By the end of this hour, participants will be able to say what transparency means in the context of AI, articulate the limits of ethics panels in AI, and make a case for different approaches to implementing ethical principles.
Hour 1: Where Does Bias Come From?
* Case study 1: Amazon’s AI recruiting tool
* Case study 2: Clearview AI (see https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html )
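A minimal sketch of the mechanism behind the Amazon case study: the scoring rule below is itself neutral (it just counts word-label co-occurrences), yet because the invented "historical hiring" records skew against one group, the learned scores penalize an otherwise identical resume. All data and feature names here are hypothetical, chosen only to illustrate how bias enters through training data rather than code.

```python
# Hypothetical sketch: a neutrally-coded scoring rule reproducing bias
# that exists in its (invented) historical training data.
from collections import Counter

# Past hiring decisions skewed against one group, so the word "womens"
# (as in "women's chess club") co-occurs with rejection.
training_data = [
    ({"captain", "chess", "club"}, "hire"),
    ({"captain", "debate", "team"}, "hire"),
    ({"womens", "chess", "club"}, "reject"),
    ({"womens", "soccer", "team"}, "reject"),
    ({"debate", "team"}, "hire"),
]

def word_scores(data):
    """Count how often each word appears with 'hire' vs 'reject'."""
    scores = Counter()
    for words, label in data:
        for w in words:
            scores[w] += 1 if label == "hire" else -1
    return scores

def predict(words, scores):
    """Neutral rule: sum the learned word scores; positive means 'hire'."""
    return "hire" if sum(scores[w] for w in words) > 0 else "reject"

scores = word_scores(training_data)
# Two resumes identical except for the word "womens":
print(predict({"captain", "chess", "club"}, scores))            # -> hire
print(predict({"womens", "captain", "chess", "club"}, scores))  # -> reject
```

Nothing in `predict` mentions any group, which is the point: removing bias from the code does not remove it from the model.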
Hour 2: Transparency, Ethics, and Moving Beyond Ethics Committees
* Case Study 1: Racial bias in a health care risk algorithm (see https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/ )
Anyone interested in AI and ethics.