📺 Livestream: Generative AI: Hype, Harms, and the Responsible Tech Community w/ Dr. Brandeis Marshall and Reid Blackman

ChatGPT. Midjourney. Harmful deepfakes.

Generative Artificial Intelligence has become a mainstream sensation — but how can the Responsible Tech community be prepared to distinguish hype from harm?

To answer this question, All Tech is Human is curating a monthly livestream series highlighting a range of interdisciplinary experts working on artificial intelligence. From policy experts and software engineers to lawyers, ethicists, and beyond, we’re bringing together our multistakeholder community to increase AI literacies, promote discussion, and help co-create a brighter future between humans and technology.

For the second livestream in our series on the hype and harms of Generative AI, All Tech Is Human program director Rebekah Tweed sat down with Dr. Brandeis Marshall and Reid Blackman on Wednesday, May 3rd, to get their perspectives. The conversation centered on the new areas of potential harm introduced by now-widely available generative AI, the throughlines to harms we’ve been struggling to understand and reduce for years, and how we can take action to prevent future harms: through legislative measures, which are finally inching ahead; through industry efforts to mitigate evolving AI ethics risk; and by preparing ourselves and our fellow citizens with greater AI literacy and data consciousness.

In the interest of context-setting, here is a quick rundown of the timeline since our last livestream on April 7th, starting with the April 16 episode of 60 Minutes featuring Google and its chatbot Bard. During the segment, CEO Sundar Pichai declared that the model somehow taught itself “Bangladeshi,” fueling a new round of AI hype, before Meg Mitchell of Hugging Face (formerly Google) fact-checked the claim: Google’s PaLM, Bard’s forerunner, was actually trained on a small percentage (0.006%) of “Bengali,” or Bangla, which is what 99% of people in Bangladesh speak.

A few days later, the song “Heart on My Sleeve” appeared on TikTok, with AI-generated vocals that sounded like Drake and The Weeknd. Journalists had some questions about its origin, but regardless of who Ghostwriter877 is, the song forced the question of how existing copyright laws might protect intellectual property when new synthetic works are generated from foundation models trained on copyrighted work.

On April 19th, the Washington Post published an analysis of the data set used to train many of these earlier Large Language Models; in the following days, companies like Stack Overflow and Reddit announced they’ll begin charging companies for access to training data. Then on April 25th, the Republican National Committee (RNC) used GenAI to create a political campaign video depicting a dystopic American hellscape. The RNC voluntarily disclosed its use of AI, which raises the question of whether companies and organizations should, at the very least, be required to disclose the use of synthetic imagery.

That same day, the Federal Trade Commission (FTC) issued a joint statement with three other U.S. governing bodies declaring that there is no AI exemption to the laws on the books, and that the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition. Also on that day, OpenAI introduced “incognito” mode, with a user opt-out form and the ability to object to personal data being used in ChatGPT. A few days later, Italy unbanned ChatGPT!

On Apr 26th, Palantir demoed its AI platform running Large Language Models for use in combat. And on Apr 27th, the European Union passed a draft of their AI Act! Finally, on April 28th, “godfather of AI” Geoffrey Hinton left Google, warning about AI potentially becoming smarter than people and posing a threat to humanity - which gave the AGI hype cycle another massive spin.

This becomes our jumping-off point for a wide-ranging conversation about data privacy regulations, industry self-regulation through ethical AI teams, training data and discrimination, chatbots as a trojan horse for corporate risk, the importance of data literacy, avoiding ethical nightmares, and more!


Speaker Bios:

Dr. Brandeis Marshall isn't your typical computer scientist. As a kid, she loved math and dance, and she majored in computer science because she found it to be math with a creative twist. Brandeis brings that creativity and passion for data to every project. All things have their roots in data, and Brandeis works with people and organizations to reduce data anxieties, make data digestible, and help build responsible data practices.

Brandeis is the author of Data Conscience: Algorithmic Siege on our Humanity, which delivers an incisive and eye-opening discussion of how to fix tech’s dominant philosophy of “move fast and break things” with a renewed focus on equity and oppression. The book explores how to address discrimination in the digital data space with several known algorithms, including social network analysis, linear regression and sentiment analysis.


Reid Blackman, PhD is the author of “Ethical Machines” (Harvard Business Review Press), creator and host of the podcast “Ethical Machines,” and Founder and CEO of Virtue, a digital ethical risk consultancy. He is also an advisor to the Canadian government on their federal AI regulations, was a founding member of EY’s AI Advisory Board, and a Senior Advisor to the Deloitte AI Institute. His work, which includes advising and speaking to organizations including AWS, US Bank, the FBI, NASA, and the World Economic Forum, has been profiled by The Wall Street Journal, the BBC, and Forbes. His written work appears in The Harvard Business Review and The New York Times. Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and UNC-Chapel Hill.


Moderated by:

Rebekah Tweed is a leader in Responsible Tech and Public Interest Technology careers, talent, and hiring trends. She is the creator of the Responsible Tech Job Board, the Program Director at All Tech is Human, and the Assistant Producer of A BETTER TECH, the 2021 Public Interest Technology (PIT) Convention & Career Fair, hosted by New York University and funded by New America's PIT-University Network, where she manages the career fair and senior talent network and curates the job board and career profile gallery. Rebekah is also the Co-Chair of the IEEE Global AI Ethics Initiative Editing Committee and a member of the Arts Committee. Previously, Rebekah worked as the Project Manager for NYC law firm Eisenberg & Baum, LLP's AI Fairness and Data Privacy Practice Group, where she examined technology's impact on society, organizing and promoting virtual events to build public awareness around algorithmic discrimination and data privacy issues in New York City and beyond.
