This Month in Responsible AI: Renée Cummings Livestream Summary and Highlights

(Thursday, April 4, 2024) All Tech Is Human was privileged to produce its third monthly This Month in Responsible AI, a livestream featuring Renée Cummings (All Tech Is Human Senior Fellow for AI, Data, & Public Policy) in conversation with All Tech Is Human Executive Director Rebekah Tweed, on Wednesday, April 3, 2024.

Cummings and Tweed discuss…

  • The importance of multi-stakeholder collaboration, education, and transparency in developing and deploying AI models

  • New AI policy developments

  • Challenges with AI accuracy and ethical implementation

  • The importance of AI and media literacy

  • How industry and government can balance ethical considerations with innovation

Below, you will find a selection of lightly edited excerpts from the panel. To view the full livestream, click here.


Is accuracy enough for AI chatbots to strive for?

Chatbots are good with frequently asked questions. Chatbots are good with very generic information. Let's be honest, bad advice is not only the domain of chatbots; humans are also known for bad advice. Many times we call these 1-800 numbers, whether from Amazon, accessing a refund from an airline, or dealing with customer service to return some clothes, and we know we've gotten bad advice from humans as well. But accuracy is critical to responsible AI, and accuracy is about accountability. Accuracy is about transparency. Accuracy is also about auditability. If you are auditing chatbots, if you are doing your impact assessments, if you are doing your vulnerability audits, and if you are bringing the requisite level of due diligence to the ways in which you are designing and then deploying these chatbots, then your rate of accuracy is going to be much higher.

If you're not doing the due diligence and you're not embracing a really sophisticated level of duty of care, that understanding of your responsibility to your customers and to the wider society, then you're going to have chatbots that misbehave and go rogue. So it's all about auditability.

If you want to get the requisite level of accuracy, you've got to invest in a responsible AI strategy. That really dovetails into the big news of last week: the memo coming out of the White House Office of the Vice President that really looks to streamline responsible AI operations across the federal government. That's something we need to drill into, because it's a big one. It's really about how we bring an evidence-based approach to doing trust, to doing safety, to doing security, to doing accountability, and really thinking within the realm of responsible AI.

How will generative AI impact the information ecosystem, and how can we address harms?

There's really a lot to think about, and it's not only the information ecosystem, it's about our mental health. When we think about the challenges of cloning, of nonconsensual photographs and images being shared, this can have an extraordinarily dangerous impact on our mental health. We have seen young women who have been violated by deepfakes. We've seen young women in high school who have been violated by deepfakes, the kind of trauma they have experienced, and the post-traumatic stress they continue to experience. We've always got to think of our mental health. We've always got to think of our wellness. Large language models can be great when it comes to the ways in which we can access information, curate content, and really make ourselves look smarter and more exciting, right?

That's all the good stuff. Then there is the more challenging stuff, the trauma, the violation. Yes, we can apply those watermarks, and we know that watermarks are good, but we know that fraudsters are even better at fakes, right? We have got to think always of the balance. The challenge with synthetic media in the political realm is that we have got to upskill each other in real time when it comes to AI literacy, when it comes to data literacy, which are critical aspects of media literacy.

In light of the US AI Safety Institute Consortium, what are your thoughts on the AI policy landscape?

It's a very exciting time. The EU AI Act is a brilliant document. It sets a high standard. What I love about the EU AI Act is the way that risks are categorized. I also love the fact that it's committed to rights and privacy and protection and empowering individuals.

The challenge is that when it was being negotiated and ironed out, large language models and generative AI were still not part of that discussion. It does set very good standards, a high precedent. What I like about the UK's AI safety approach is that it deals with evidence-based trust, which means that you have got to show the evidence that what you are deploying can be trusted.

What's also brilliant about the UK's approach for me is two things. Contestability: it offers us a space to contest when something goes wrong. It also offers you the kind of redress and compensation that's required. It's heavy in accountability, heavy in transparency, heavy in auditability. It speaks about this evidence-based approach to trust. It also speaks about measurability. Let's measure the things that we're doing. And it's also committed to the ways in which we communicate to people about what the technology can do, and what it cannot do.

In the U.S. we have our Blueprint for an AI Bill of Rights. We had our Executive Order. Now we have our federal memo. All of these things are fantastic. None of these things are law. It's a great way to encourage good behavior, which is when you put the responsibility on the people to behave. But sometimes the people need something a little more, like the law, right? I hope this doesn't make us lazy. The EU AI Act, if anything, lets us know that there's a really high standard we have got to compete with. Let's match it in the way that we are doing our work.


About Renée Cummings

Professor Renée Cummings, a 2023 VentureBeat AI Innovator Award winner, is All Tech Is Human’s Senior Fellow for AI, Data, and Public Policy.

Cummings is an artificial intelligence (AI), data, and tech ethicist, and the first Data Activist-in-Residence at the University of Virginia’s School of Data Science, where she was named Professor of Practice in Data Science. She also serves as co-director of the Public Interest Technology University Network (PIT-UN) at UVA. She is also a nonresident senior fellow at The Brookings Institution and the inaugural Senior Fellow, AI, Data and Public Policy at All Tech Is Human, a leading think tank. She’s also a distinguished member of the World Economic Forum’s Data Equity Council and the World Economic Forum’s AI Governance Alliance, an advisory council member for the AI & Equality Initiative at Carnegie Council for Ethics in International Affairs, and a member of the Global Academic Network at the Center for AI and Digital Policy. Professor Cummings is also a criminologist, criminal psychologist, therapeutic jurisprudence specialist, and a community scholar at Columbia University.


About All Tech Is Human

All Tech Is Human is a non-profit committed to building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This allows us to tackle wicked tech & society issues while moving at the speed of tech, leverage the collective intelligence of the community, and diversify the traditional tech pipeline. Together, we work to solve tech & society’s thorniest issues.
