We need to design mistrust in AI systems to make them safer
It's interesting that in these scenarios you need to actively design distrust into the system in order to make it more secure.
Yes, that's what you need to do. We are actually trying to run an experiment around the idea of denying service. We have no results yet, and we are wrestling with some ethical concerns, because once we talk about it and publish the results, we will need to explain why you might not even want to give an AI the ability to deny you a service. How do I withdraw the service if someone really needs it?
But here's an example in the spirit of Tesla. Denying service would mean that I build a profile of your trust based on how many times you have switched the system off or disengaged from holding the wheel. Given those disengagement profiles, I can then model the point at which you have become completely overconfident in the system. We did that, not with Tesla data, but with our own data. At some point, the next time you get in the car, you would be denied service: you do not have access to the system for a period of time X.
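The mechanism described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the researchers' actual model: the class names, the prompt-ignoring signal, and the 0.8 threshold are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TrustProfile:
    """Rolling record of whether the driver ignored hands-on-wheel prompts."""
    ignored_prompts: List[int] = field(default_factory=list)

    def record(self, ignored: bool) -> None:
        """Log one prompt: True means the driver ignored it (a sign of overtrust)."""
        self.ignored_prompts.append(1 if ignored else 0)

    def overtrust_score(self, window: int = 10) -> float:
        """Fraction of the last `window` prompts the driver ignored."""
        recent = self.ignored_prompts[-window:]
        return sum(recent) / len(recent) if recent else 0.0


def should_deny_service(profile: TrustProfile, threshold: float = 0.8) -> bool:
    """Deny access on the next ride if the profile shows near-total overconfidence."""
    return profile.overtrust_score() >= threshold
```

The design choice here mirrors the interview: the penalty is not applied in the moment, but at "time X", the next time the driver requests the system.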
It's almost like punishing a teenager by taking away their phone. You know that teens will fall in line if you tie the consequence to their mode of communication.
What other mechanisms have you explored to increase mistrust in systems?
The other mechanism we have examined is broadly called explainable AI, where the system provides an explanation of some of its risks or uncertainties. All of these systems have uncertainty; none of them is 100% accurate. And a system knows when it is uncertain. If that information is conveyed in a way humans can understand, people will change their behavior.
As an example, let's say I'm driving a car, I have all the information on the map, and I know that some intersections have more accidents than others. As we approach one of them, the system would say, "We are approaching an intersection where 10 people were killed last year." You explain it in a way that makes someone say, "Oh, wait, maybe I should be more attentive."
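The intersection-warning example above can be sketched as a small function. This is an assumed illustration only: the crash statistics, intersection names, and warning threshold are invented for the sake of the sketch, not drawn from the research.

```python
from typing import Dict, Optional

# Assumed map data for the example: intersection id -> fatalities last year.
CRASH_STATS: Dict[str, int] = {
    "fifth_and_main": 10,
    "oak_and_pine": 0,
}


def risk_message(intersection: str, threshold: int = 1) -> Optional[str]:
    """Return a human-readable warning if the intersection's fatality count
    meets the threshold; otherwise stay silent to avoid alert fatigue."""
    deaths = CRASH_STATS.get(intersection, 0)
    if deaths >= threshold:
        return (f"We are approaching an intersection where "
                f"{deaths} people were killed last year.")
    return None
```

The point of the threshold is the behavioral one from the interview: the explanation only surfaces when it is likely to change how attentive the driver is.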
We've already talked about some of your concerns about the tendency to over-trust these systems. What are the others? And on the flip side, are there benefits too?
The negatives are really related to bias. That's why I always talk about bias and trust together. If I trust these systems too much, and these systems are making decisions that have different outcomes for different groups of individuals, for example a medical diagnosis that differs between women and men, then we are creating systems that exacerbate the inequities we have today. That is a problem. And when you tie it to things related to health or transportation, either of which can create life-or-death situations, a bad decision can lead to something you can't really recover from. So we really need to fix it.
The positive is that automated systems are generally better than people. I think they can be, and personally I'd rather interact with an AI system in some situations and with humans in others. I know it has some problems, but give me the AI. Give me the robot. They have more data; they are more accurate. Especially if you are a novice. The outcome is better, even if the experience may not be the same.
In addition to your robotics and AI research, you have been committed to increasing diversity throughout your career. You started a program to mentor at-risk junior-high girls 20 years ago, long before many people were thinking about this issue. Why is it important to you, and why is it important to the field?
It's important to me because I can identify the moments in my life when someone basically gave me access to engineering and computer science. I didn't even know it was a thing. Because of that, later on, I never had a problem believing that I could do it. So I've always felt it was my responsibility to do for others what someone did for me. As I got older, I noticed that there weren't a lot of people in the room who looked like me. So I realized: wait, there's definitely a problem here, because people don't have role models, they don't have access, they don't even know this is a thing.
Why it's important to the field is that everyone brings a different experience. Take the fact that I was thinking about human-robot interaction early on. It wasn't because I was brilliant; it was because I looked at the problem in a different way. When I'm talking to someone with a different perspective, it's like, "Let's try to combine and represent the best of both worlds."
Airbags kill more women and children. Why is that? Well, I'd say it's because someone wasn't in the room asking, "Hey, why don't we test this with a woman in the front seat?" There are a lot of problems that have killed or endangered groups of people, and I would argue they get caught when someone steps back and asks, "Hey, have you thought about this?" because that person is speaking from their own experience, environment, and community.
How do you expect AI and robotics to evolve over time? What is your vision for the field?
If you think about coding and programming, almost anyone can do it now. There are many organizations like Code.org, and the resources and tools are there. One day I'd like to have a conversation with a student, ask, "Do you know AI and machine learning?" and hear, "Dr. H, I did that in third grade!" I want to be surprised like that, because it would be wonderful. Of course, then I'd have to think about what my next job is, but that's another story.
But I think once you have those tools, coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solutions. That would be my dream.