Livestream Recap: Hilke Schellmann, Author of The Algorithm, in conversation with Rebekah Tweed

All Tech Is Human was proud to host Hilke Schellmann, author of The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, and Why We Need to Fight Back Now, for a new installment of our Responsible Tech Author Livestream Series on Wednesday, January 24, 2024.

Schellmann joined All Tech Is Human Executive Director Rebekah Tweed to discuss her new book, The Algorithm. Schellmann and Tweed discussed the role of AI in hiring, workplace surveillance, performance reviews, the biases embedded within AI systems, unclear and unproven performance indicators AI tools are programmed to view as successful, and the ways we can push back against these systems.

You can view the full livestream now and read selected excerpts below!

Schellmann on how AI hiring tools work (and why their measurements don’t always align with qualified candidates):
These tools actually make hiring much faster and more efficient. Companies save a lot of money. So that's why [companies] often like these tools: because they speed up the process. We haven't found a whole lot of evidence that there's less bias in these tools, or that these tools actually pick the most qualified candidates. Because it turns out, if you look at facial expressions in job interviews and compare them to facial expressions that people had in job interviews previously, there isn't a whole lot of evidence this actually works. We don't have any science that says you need to have certain facial expressions to be successful in the job.

Schellmann on the role of AI in hiring screenings:
But the problem is there's actually no science [proving] that the intonation of our voices in job interviews is actually meaningful to predict if we are going to be successful in the job. It's just something we see a whole lot in the world of AI. We see a lot of signals that we can track, right? We can track all the keystrokes, the websites that I visit, the programs that I use on my computer, but is it actually a meaningful prediction of whether I'm successful or not? I doubt that [it can determine that] because a lot of people are successful in very different ways.

Schellmann on the responsibilities of companies using AI hiring tools, and the tools' opaque nature:
On the face of it, the tool works, right? It's like my phone. Either my phone makes phone calls or it doesn't, and I know if it's broken or not. But the tool still ranks people. It still takes in thousands of applications and ranks people. And even when I took all these tests [knowing] they're not based on science, when I got the score I was like, “Oh, well, maybe I am qualified or not.”

It's really hard to ignore the science, or the math of the numbers. And it is kind of compelling. So the tool still ranks and sorts people, and if you don't take a closer look, it looks like it works. I think that's the problem if you don't investigate, if you don't look closer. A lot of companies buy these tools because they want to save money and make this more efficient. They don't want to hire more people to now monitor their AI tools for hiring; that's sort of counter [to that goal]. And you know the vendors have a really good story to tell. They have these technical reports [that include] pages and pages of academic studies and how they validated the tools, and it's actually hard for me to understand, let alone others.

You're like, “Oh, sounds like they really thought about this,” and we don't have a lot of transparency. So it's not like when one company uses a tool and it doesn't work, this gets put in the public record, and then another company might actually be like, whoa, wait, if this [tool] didn't work for them, why should we use it? We don't actually know that. So I think that's one of the big problems.

Please note: The above passages have been lightly edited for readability.


About The Algorithm

"The Algorithm" details some of the ways in which these automated hiring tools are potentially filtering out qualified candidates based on criteria that seems questionably correlated with job fit -- for instance, how does analyzing candidates’ voices and facial expressions in order to compare them to historically successful candidates help determine which candidates are well-suited to a particular role?

📕 Now Available Everywhere: The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, and Why We Need to Fight Back Now

About Hilke Schellmann
Hilke Schellmann is an Emmy Award-winning investigative reporter and assistant professor of journalism at New York University.

As a contributor to The Wall Street Journal and The Guardian, Schellmann writes about holding artificial intelligence (AI) accountable. In her book, The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired, and Why We Need to Fight Back Now (Hachette), she investigates the rise of AI in the world of work. Drawing on exclusive information from whistleblowers, internal documents, and real-world tests, Schellmann discovers that many of the algorithms making high-stakes decisions are biased, even racist, and do more harm than good.


About Rebekah Tweed
Rebekah Tweed is the Executive Director of All Tech Is Human and a leader in Responsible Technology. Rebekah was named one of the 100 Brilliant Women in AI Ethics in 2023 and frequently speaks on responsible AI, generative AI, AI policy, and responsible technology careers, talent, and hiring trends. She is the guest editor of the Springer AI and Ethics journal's topical collection on the social impacts of AI on youth and children, and is the Co-Chair of the IEEE Global AI Ethics Initiative Editing Committee and a member of the Arts Committee.

Previously, Rebekah worked as the Assistant Producer of A BETTER TECH, the 2021 Public Interest Technology (PIT) Convention & Career Fair hosted by New York University and funded by New America's PIT-University Network. Prior to that, Rebekah worked as the Project Manager for NYC law firm Eisenberg & Baum, LLP's AI Fairness and Data Privacy Practice Group, where she examined technology's impact on society, organizing and promoting virtual events to build public awareness around algorithmic discrimination and data privacy issues in New York City and beyond.
