This Month in Responsible AI: Hallucinations, Privacy Concerns, and New Policy

(Tuesday, June 18, 2024, New York, NY) All Tech Is Human presented its fifth This Month in Responsible AI livestream featuring Renée Cummings (Senior Fellow for AI, Data, & Public Policy, All Tech Is Human) in conversation with Rebekah Tweed (Executive Director, All Tech Is Human) on June 13, 2024. Cummings and Tweed discussed the latest news, policy, and user safety topics related to the ongoing development of generative AI.

To begin the conversation, Cummings recalled her recent trip to the World Economic Forum’s Global Technology Retreat in May 2024, which brought together AI and data leaders from a wide variety of sectors around the world. To Cummings, the Technology Retreat was evidence that developing and deploying Responsible AI is at the forefront of global conversations. Cummings highlighted the most frequently asked questions, including:

  • How can AI developers build responsible, sustainable, and resilient systems?

  • What are the ways we can consider global AI governance?

  • How can AI developers maximize rewards and minimize risks?

  • How can we think about AI’s impacts on society?

  • What are the most effective ways to engage the public in the ways in which AI developers design and deploy new systems?

These questions about the development and deployment of AI and data are at the center of what Cummings calls “...the greatest geopolitical game changer ever in history.”


Overcoming AI Hallucinations

Earlier this month, Google’s AI Overviews’ hallucinations included listing glue as an ingredient in a pizza recipe and recommending that people eat rocks. Tweed asked why accuracy in generative AI is so difficult to solve.

“We’re putting things together and we still don't fully understand how these things are combining and, of course, the scale at which the combination of these idiosyncratic approaches are deployed,” Cummings said. “So it’s going to be one company today. It’s going to be another company tomorrow. This is why ethical AI is so important. This is why Responsible AI is so important. This is why accountability and transparency and accuracy and auditability are so important. And this is why public oversight is also very important, because these are emerging systems.”


New AI Products Lead to Privacy Concerns

Leading tech companies recently rolled out new AI products that drew major biometric and data privacy concerns. Last week, Apple announced it would use personal data to power personalized AI features through its cloud computing services. Similarly, Microsoft announced, then paused, Recall, a feature that continuously captures screenshots of a device so people can find anything they’ve previously viewed on their machine. Tweed asked Cummings to explain the biometric and data privacy concerns associated with Apple’s and Microsoft’s new products.

“Privacy is dead. In the world of data and the world of AI, there is no company, no organization, no agency, no country, no individual, no friend that can offer you 100 percent privacy,” Cummings said. “A certain measure of privacy is required and should be legally required, the ways in which our data is captured, the ways in which it’s shared, the ways in which it’s collected, the ways in which it is used, the ways in which it is disposed of, even data in the afterlife. These are all critical questions around privacy, but no company can really sell you 100 percent privacy. We’ve seen it. There have been leaks. There have been breaches. There have been bad actors. There have been mistakes. There have been things that people just were not thinking about that they overlooked.”


Can AI Policy Focus on Ethical and Inclusive Innovation?

Tweed asked Cummings about California’s proposed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would regulate new AI models.

“The question is whether or not we can do ethical innovation or inclusive innovation,” Cummings said. “You have companies saying that, wait a minute, this bill is too draconian. Businesses are not going to be able to comply. Businesses are not going to be able to deal with those fines. It's going to cost too much, too much [stress] on the legal spectrum for businesses to really compete. The persons who are against it are saying what you are going to see is businesses leaving, tech companies leaving California in droves and going to other states where they can create all the loopholes that they want.”

The fear around SB 1047 is that it will hurt innovation in a nascent industry. Cummings pushed back against those criticisms. “We can do ethical innovation. We can do inclusive innovation. We could do safety and we can do ethics together. But I think there's a lot of fear. And I think the fear around the bill is that somehow it's going to stymie innovation and creativity and the ability to really manipulate new and emerging technologies in ways that are not hampered by too much legal constraints and legal ramifications,” Cummings said.


About All Tech Is Human

All Tech Is Human is a non-profit committed to building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This allows us to tackle wicked tech & society issues while moving at the speed of tech, leverage the collective intelligence of the community, and diversify the traditional tech pipeline. Together, we work to solve tech & society’s thorniest issues.
