Amazon's Machine Learning University (MLU) recently introduced a new course, "Responsible AI — Bias Mitigation & Fairness Criteria."
In the free, publicly accessible online program, students learn about the dimensions of responsible AI, including how to prepare data, how to mitigate bias during model training, and many other aspects of bias mitigation and fairness.
The course complements Amazon Web Services' new AI Service Cards, which offer responsible AI documentation on intended use cases and limitations.
Mia Mayer, a data scientist and MLU instructor who developed the responsible AI course, has explored bias issues in her own research. Here, she discusses the curriculum and what students are learning.
- Q.
Tell us about the responsible AI course. Who can take it, and how is it structured?
A. Responsible AI is an entry-level course targeted at technical individuals with the goal of explaining where bias in AI systems comes from, how to measure it, and ultimately how to mitigate bias as much as possible.
[Video: Responsible AI - Welcome to Day 1]
You don't need any machine learning knowledge to take the course, though some familiarity with Python programming and high-school-level math is helpful. In addition to the recorded lectures, we have white papers, code samples that leverage AWS services, and other resources available for students online. For a final project, students implement a bias mitigation technique of their choice to reduce disparity in model outcomes for different subpopulations.
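To make the final project's goal concrete, here is a minimal sketch, not taken from the course materials, of one way to measure disparity in model outcomes between two subpopulations; the function name, predictions, and group labels are all hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-outcome rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-outcome rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-outcome rate, group 1
    return rate_0 - rate_1

# Hypothetical model outputs (1 = approved) and group labels for six people
preds  = [1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 2/3 - 1/3 ≈ 0.33
```

In practice, what counts as a positive outcome and how subpopulations are defined depend on the application, which is exactly the choice the final project asks students to make.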
[Video: Responsible AI (2/30) - Machine Learning Fundamentals]
The course provides a lot of foundational material about how to build a machine learning model, so it's a good segue into all of the other courses that MLU offers, from decision trees and ensemble methods to natural language processing.
- Q.
What led you to add this course to MLU's offerings?
A.The course was driven by a business need, as well as a personal passion. In my own work, as I was getting more exposure to different machine learning projects, I noticed that a lot of the individuals in the room were men. That started the questions in my mind: “Are other identities being considered to the same extent as men? Do we have enough diversity and representation? What issues could occur when machine learning solutions are developed predominantly by one particular population?”
[Video: Responsible AI (3/30) - Bias in different Machine Learning Tasks]
On the business side, we see a growing number of regulatory requirements, such as the General Data Protection Regulation (GDPR) and the AI Act in the European Union, or the Principal Reasons Framework in the United States. This has definitely led to more interest in this topic.
Just as important as complying with regulations, it is our goal at AWS to develop and use ML and AI systems responsibly. Ultimately, measuring and mitigating bias is necessary to build trust in AI systems and models and to evaluate their risks. Failing to mitigate bias can lead to a loss of trust and can disadvantage subgroups of customers.
- Q.
Why this topic?
A. Machine learning is growing so much and so quickly, and it's expected to grow even more: global spending on AI-based technologies is projected to reach $204 billion by 2025. It touches on many aspects of our customers' lives.
We want to make sure that machine learning models and APIs are developed responsibly and also used responsibly. This course complements the new Amazon leadership principle: "Success and scale bring broad responsibility."
- Q.
How did you go about putting together the course?
A. I wanted to cover bias aspects at every stage of the machine learning life cycle. When I started collecting material for the course, I noticed that there wasn't any freely available class covering the full ML pipeline in both theory and code.
[Video: Responsible AI (5/30) - Fairness throughout the ML Lifecycle]
Many other courses only focus on one subcomponent, such as measuring bias before you train a model. I wanted to give students practical, hands-on skills and code examples for every stage of the life cycle, from ideation of the machine learning problem all the way to deployment.
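As an illustration of what measuring bias before training can look like, here is a hedged sketch of one common pre-training check, the difference in proportions of positive labels between groups in the raw data; the column names and values are hypothetical, not course code.

```python
import pandas as pd

# Hypothetical raw training data, inspected before any model exists
df = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "approved": [  0,   1,   0,   0,   1,   1,   0,   1],
})

# Difference in proportions of positive labels (DPL) between the two facets
rates = df.groupby("gender")["approved"].mean()
dpl = rates["m"] - rates["f"]  # > 0 means the raw data favors the "m" facet
print(rates.to_dict(), "DPL:", dpl)  # {'f': 0.25, 'm': 0.75} DPL: 0.5
```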
- Q.
What's an example of how bias can create issues in machine learning?
A. A common machine learning problem that people try to tackle is classification, where the model provides different classes of outcomes, such as "approved" or "denied," or whether somebody will be shown an ad or not shown an ad.
A machine learning model could perform much better for one subpopulation, meaning a group of individuals who share certain attributes, than for another. It's all about choosing a measure of fairness and enforcing it across different subpopulations to reduce disparity as much as possible.
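As a hypothetical sketch of such a per-subpopulation gap, the snippet below compares a single classifier's true-positive rates across two groups; the data is made up, and scikit-learn's recall_score is used only as a convenient way to compute the true-positive rate.

```python
import numpy as np
from sklearn.metrics import recall_score

# Hypothetical labels, predictions, and group membership for eight people
y_true = np.array([1, 1, 0, 1, 1, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# True-positive rate (recall) per subpopulation; the gap between groups is
# one common fairness measure, the equal-opportunity difference
tpr = {g: recall_score(y_true[group == g], y_pred[group == g]) for g in (0, 1)}
print(tpr, "gap:", tpr[0] - tpr[1])  # {0: 0.667, 1: 0.333} gap: 0.333
```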
- Q.
What are you hearing from students so far?
A. Overall, we are getting really positive feedback, and it's one of our most engaged classes in terms of how much discussion is happening.
[Video: Responsible AI (6/30) - Model Formulation and Data Collection]
An "aha" moment for a lot of students is that you can make an algorithm that's fair, but that doesn't mean it's high-performing. For example, you might have a model that denies all applicants. Technically that is fair, since everyone receives the same outcome, but it's obviously also undesirable. Students are often surprised to learn that you need two metrics to evaluate a machine learning model: performance and fairness. You cannot rely on one without the other.
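A toy numerical version of the deny-all-applicants example (hypothetical data, not from the course) shows the tension: the constant classifier has a demographic-parity gap of exactly zero, yet its accuracy collapses.

```python
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1])  # hypothetical true outcomes
group  = np.array([0, 0, 0, 1, 1, 1])  # group membership
y_pred = np.zeros_like(y_true)         # "deny every applicant"

accuracy = (y_pred == y_true).mean()   # only 2 of 6 predictions are correct
parity_gap = y_pred[group == 0].mean() - y_pred[group == 1].mean()  # exactly 0
print(f"accuracy={accuracy:.2f}, demographic-parity gap={parity_gap:.2f}")
```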
- Q.
What do you want students to take away from this class?
A. That there is no one right way of doing things. There are many different bias mitigation techniques, and this holds true for every component of the machine learning life cycle. It's all about trying to understand where the bias comes from and not blindly assuming that there isn't any bias.
I also want students to realize that there are scientific methods they can use in practice to help mitigate bias. A lot of the time, people observe and even quantify bias issues, but they don't really know what to do about them. The science is very new, but it's making huge strides forward, and it's already at a point where it can be used in practice to mitigate bias.
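As one concrete example of such a method, here is a hedged sketch of reweighing (Kamiran and Calders, 2012), a classic pre-processing technique that assigns each training example a weight so that group membership and label look statistically independent to the learner; the data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical training set: group membership and binary label
df = pd.DataFrame({
    "group": [0, 0, 0, 0, 1, 1, 1, 1],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

n = len(df)
p_group = df["group"].value_counts(normalize=True)
p_label = df["label"].value_counts(normalize=True)
p_joint = df.groupby(["group", "label"]).size() / n

# weight = P(group) * P(label) / P(group, label): up-weights combinations the
# data under-represents, so group and label look independent to the learner
df["weight"] = [
    p_group[g] * p_label[y] / p_joint[(g, y)]
    for g, y in zip(df["group"], df["label"])
]
print(df["weight"].tolist())  # pass as sample_weight to most sklearn estimators
```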