To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching an interview series focusing on the remarkable women who have contributed to the AI revolution. We will publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Sandra Wachter is Professor and Senior Researcher in Data Ethics, Artificial Intelligence, Robotics, Algorithms and Regulation at the Oxford Internet Institute. She is also a former Fellow of the Alan Turing Institute, the UK's national institute for data science and artificial intelligence.
While at the Turing Institute, Wachter evaluated the ethical and legal aspects of data science, highlighting cases in which opaque algorithms have become racist and sexist. She also looked at ways to audit AI to tackle misinformation and promote fairness.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I cannot remember a time in my life when I didn't believe that innovation and technology have incredible potential to make people's lives better. Yet I also know that technology can have devastating consequences for people's lives. So I have always been driven, not least by my strong sense of justice, to find a way to guarantee that perfect middle ground: enabling innovation while protecting human rights.
I always felt that the law has a very important role to play here. Law can be that enabling middle ground that protects people but makes innovation possible at the same time. Law as a discipline came very naturally to me. I like challenges, I like understanding how a system works, seeing how I can game it, find loopholes and then close them.
Artificial intelligence is an incredibly transformative force. It is being deployed in finance, employment, criminal justice, immigration, health and art. That can be good or bad, and whether it is good or bad is a matter of design and policy. I was naturally drawn to it because I felt that law could make a meaningful contribution to ensuring that as many people as possible benefit from innovation.
What work are you most proud of (in the field of artificial intelligence)?
I think the piece I am most proud of at the moment is a paper co-authored by Brent Mittelstadt (a philosopher), Chris Russell (a computer scientist) and me, the lawyer.
Our most recent work on bias and fairness, "The Unfairness of Fair Machine Learning," revealed the harmful impact of enforcing many "group fairness" measures in practice. Specifically, fairness is achieved by "levelling down," or making everyone worse off, rather than by helping disadvantaged groups. This approach is highly problematic under EU and UK non-discrimination law, as well as ethically troubling. In a Wired article we discussed how harmful levelling down can be in practice: in health care, for example, enforcing group fairness could mean missing more cancer cases than strictly necessary, while also making the system less accurate overall.
To us this was terrifying, and something that people in technology, in policy and really every person should know. Indeed, we have engaged with UK and EU regulators and shared our alarming findings with them. I very much hope that this gives policymakers the leverage they need to implement new policies that prevent AI from causing such serious harm.
How do you navigate the challenges of the male-dominated technology industry and, by extension, the male-dominated AI industry?
The interesting thing is that I never saw technology as something that "belongs" to men. It was only when I started school that society told me that technology has no place for people like me. I still remember that when I was 10 years old, the curriculum dictated that girls do knitting and sewing while the boys built birdhouses. I also wanted to build a birdhouse and asked to be transferred to the boys' class, but my teachers told me that "girls don't do that." I even went to the headmaster of the school to try to overturn the decision, but unfortunately I failed at the time.
It's very difficult to fight the stereotype that you shouldn't be part of this community. I wish I could say that things like this no longer happen but unfortunately that is not true.
However, I was very fortunate to work with allies like Brent Mittelstadt and Chris Russell. I have had the privilege of incredible mentors, such as my Ph.D. supervisor, and I have a growing network of like-minded people of all genders who are doing their best to steer the path forward to improve the situation for everyone interested in tech.
What advice would you give to women who want to enter the field of artificial intelligence?
Above all else, try to find like-minded people and allies. Finding your people and supporting each other is crucial. My most impactful work has always come from talking with open-minded people from other backgrounds and disciplines to solve common problems we face. Accepted wisdom alone cannot solve novel problems, so women and other groups that have historically faced barriers to entering AI and other areas of tech hold the keys to truly innovating and offering something new.
What are some of the most pressing issues facing AI as it develops?
I believe there is a wide range of issues that need serious legal and policy attention. To name a few: AI suffers from biased data that leads to discriminatory and unfair outcomes. AI is opaque and difficult to understand by design, yet it is tasked with deciding who gets a loan, who gets a job, who has to go to prison and who is allowed to go to university.
Generative AI has related issues, but it also contributes to the spread of misinformation, is riddled with hallucinations, violates data protection and intellectual property rights, puts people's jobs at risk, and contributes more to climate change than the aviation industry.
We have no time to waste; we needed to have addressed these issues yesterday.
What are some issues that AI users should be aware of?
I think there is a tendency to believe a certain narrative along the lines of “AI is here to stay, join in or get left behind”. I think it's important to think about who is pushing this narrative and who is benefiting from it. It is important to remember where the actual power lies. The power is not in the hands of those who innovate, but in the hands of those who buy and apply AI.
So consumers and businesses should ask themselves: "Does this technology actually help me, and in what respect?" Electric toothbrushes now come with "AI" built in. Who is this for? Who needs it? What is being improved here?
In other words, ask yourself what's broken, what needs to be fixed, and whether AI can actually fix it.
This kind of thinking will shift market power, and hopefully move innovation in a direction that focuses on usefulness for society rather than mere profit.
What is the best way to build AI responsibly?
Having laws in place that demand responsible AI. Here, too, a very unhelpful and untrue narrative tends to dominate: that regulation stifles innovation. That is not true. Regulation stifles harmful innovation. Good laws foster and nurture ethical innovation; that is why we have safe cars, planes, trains and bridges. Society does not lose out if regulation prevents the creation of AI that violates human rights.
Traffic and safety regulations were also once said to "stifle innovation" and "limit autonomy." These laws prevent people from driving without licences, keep cars without seat belts and airbags off the market, and penalize people who do not comply with speed limits. Imagine what the auto industry's safety record would look like if we did not have laws regulating vehicles and drivers. AI is currently at a similar inflection point, and heavy industry lobbying and political pressure mean it remains unclear which path it will take.
How can investors better push for responsible AI?
I wrote a paper a few years ago called “How Fair AI Can Make Us Richer.” I strongly believe that AI that respects human rights, is unbiased, explainable and sustainable is not only the right thing to do legally, ethically and morally, but it can also be profitable.
I really hope investors understand that if they push for responsible research and innovation, they will also get better products. Bad data, bad algorithms and bad design choices lead to worse products. Even if I cannot convince you to do the ethical thing because it is the right thing to do, I hope you will see that the ethical thing is also more profitable. Ethics should be seen as an investment, not a hurdle to overcome.