All Tech Is Human Team

David Ryan Polgar

Founder & President

Rebekah Tweed

Executive Director

Sandra Khalil

Head of Partnerships, and
Trust & Safety Lead

Sherine Kazim

Operations and Strategy Lead

Elisa Fox

Program Manager, Cyber & Democracy Lead

Sarah Welsh

Program Manager, Responsible Tech Careers

Matthew Soeth

Head of Trust & Safety, and Global Policy

Steven Kelts

Director, Responsible Tech University Network

Josh Chapdelaine

Social Media, Production, Communications & Special Projects

All Tech Is Human Fellows

Renée Cummings

Senior Fellow for AI, Data & Public Policy

Sara M. Watson

Siegel Research Fellow

Partners & Recent Collaborations

All Tech Is Human is a project of the Hopewell Fund, a 501(c)(3) public charity. The Hopewell Fund hosts and incubates new and innovative public interest projects and grant-making programs.

All Tech Is Human has had recent collaborations with the following:

  • Australian Embassy in Washington

  • Atlantic Council

  • Consulate General of Canada in New York

  • Consulate General of Finland in New York

  • Consumer Reports

  • Joan Ganz Cooney Center at Sesame Workshop

  • Ofcom

  • Oversight Board

  • Project Liberty

  • New_ Public

  • Tony Blair Institute for Global Change

Our Advisors

Theodora Skeadas

Strategic Advisor,
Technology Policy

Allie Brandenburg

Co-founder & CEO,
TheBridge

Lauri Goldkind

Associate Professor,
Graduate School of Social Service at Fordham University

Apply to join All Tech Is Human’s Slack channel with 8K members across 90 countries!

Our Affiliates

After careful consideration of over 100 applications, we are thrilled to announce the first cohort of All Tech Is Human affiliates.

These affiliates will embark on a journey with All Tech Is Human to continue amplifying the responsible tech movement, advocating for a safer, more equitable, and more inclusive tech future aligned with the public interest. They bring an impressive range of experiences and represent eleven countries, showcasing the importance of diverse, interdisciplinary alignment in responsible tech and All Tech Is Human’s expansive organizational footprint. Check out the cohort’s bios below and stay tuned for more information about their outputs!

Do you have a question about the program or want to reach out to a specific affiliate? Ping Sandra Khalil from our team.

All Tech Is Human is a non-profit committed to building and strengthening the Responsible Tech ecosystem so we can tackle wicked tech & society issues while moving at the speed of tech. Our numerous activities fall under the categories of multistakeholder convening & community-building, multidisciplinary education, and diversifying the traditional tech pipeline with more backgrounds, disciplines, and lived experiences. You can read about our mission here, and see all of our projects here.

  • Alana Ford is the Australian Attorney General's Department's head of government and international engagement across North America for online harms and broader policy issues related to cybercrime and cross-border data. A respected expert on interior policy and its intersection with technology, Alana is also the Attorney General’s Department’s representative at the Embassy of Australia in Washington DC, covering a broad portfolio of criminal justice, law enforcement, and national security related issues.

  • Alayna is the Technology Community Lead at the Digital Freedom Fund. She has experience as a data scientist and machine learning developer, and has worked on responsible technology and AI ethics for years. Most recently, she completed her master's degree in science and technology studies at the University of Edinburgh, where she wrote a dissertation with Data & Society on how to implement responsible technology work within industry. She’s interested in responsible technology, research, activism, and generally finding ways to make our current technology work for human flourishing instead of against it.

  • Belle Torek is an attorney and scholar of online speech issues who works at the intersection of free expression, digital safety, and democracy. She currently serves as Associate Director, Technology Policy at the Anti-Defamation League (ADL) Center for Technology and Society, where her subject matter expertise informs ADL’s tech policy strategy around efforts including combating online harassment and abuse; championing platform accountability; and advocating for safe, trustworthy artificial intelligence. In addition to her service as an All Tech Is Human affiliate, Torek also holds affiliate roles at the University of North Carolina Chapel Hill’s Center for Information, Technology, and Public Life (CITAP) and the Cyber Civil Rights Initiative (CCRI), with which she has been involved in various roles since law school.

    Before her tenure at ADL, Torek served as officer to the Media and Democracy program at the John S. and James L. Knight Foundation, where she led coalition-building efforts and supported investments in research around issues of online speech, information integrity, and digital governance. Her expertise has been shared through lectures at distinguished universities, and she has presented scholarship on First Amendment and Section 230 approaches to disinformation at Yale Law School. In recognition of her contributions to the field, Torek was named one of Legacy Magazine’s 2022 40 Under 40 Black Leaders of Today and Tomorrow, and in 2023, she was honored as one of the University of Miami’s 30 Under 30 recipients. She is elated to be joining the All Tech Is Human community.

  • Andy McAdams brings over 15 years of experience as a technology leader and focuses on driving ethical innovation and responsible AI practices. He currently serves as Director of Product Operations at VMware, where he has spearheaded process improvements that reinforce the message that how you do something is just as important as what you do.

    Andy has been a contributor and editor for several of ATIH's reports, contributed to VMware's AI Code of Ethics, co-led the research and proposal of a technology ethics program within VMware, and has earned IAPP certifications in US Privacy Law, EU Privacy Law, and Privacy Program Management. He was part of the inaugural class of IAPP's Artificial Intelligence Governance Professional training. He writes a weekly newsletter on tech and AI ethics called Byte-sized ethics.

    He lives with his husband, two dogs, two cats and entirely too many legos.

  • Anna is a Product Manager focused on building public interest technology. She leads Standard of Care at Beyond the Screen, a non-profit co-founded by Facebook whistleblower Frances Haugen, and is excited to help build an ecosystem of accountability for social platforms. Previously, Anna worked in advertising technology and was most recently a Product Manager at Microsoft. Anna attended Barnard College, where she majored in urban studies with a concentration in statistics.

  • Arushi serves as the Head of Trust & Safety at DynamoFL, a Series A Generative AI privacy startup, and is a Visiting Fellow at the Integrity Institute. Previously, she led Product Marketing & GTM for Twitter's Trust & Safety team, where she focused on developing and launching Information Integrity and Elections-related features and policies. Prior to Twitter, she was an Assembly Research Fellow at Harvard University's Berkman Klein Center for Internet & Society studying misinformation and media literacy in India, and a Privacy Product Management Consultant for Consumer Reports Innovation Lab. Before moving to the responsible technology space, Arushi held various business strategy and finance roles at LinkedIn, Formation, and Lazard. She graduated with a Master's in Design Engineering from Harvard University and a B.S. in Business Administration from UC Berkeley, and currently lives in Los Angeles.

  • Ava is the director of advocacy and operations at the Young People's Alliance, a youth-led non-profit focused on bringing youth perspectives into conversations about the policies that affect them. Before graduating, she worked with YPA in the North Carolina state legislature to introduce NC HR 644, a youth-centered data privacy bill, and contributed to research projects relating to the Fourth Amendment right to privacy in the digital age. After graduating from Stevens Institute of Technology in spring 2023, she immediately began working full time at the Young People's Alliance, where she is currently focused on building consensus among young people on their legislative priorities and advocating at the state and federal levels for social media reform. Ava is from Nashville, Tennessee and is currently based in Washington, D.C.

  • As a Data Scientist at Vera, Ayodele leverages hands-on experience developing ML and a passion for AI ethics to help clients evaluate AI systems for compliance and disparate outcomes.

    She has also developed responsible AI curricula for the Flatiron School and Microsoft, educating data newbies and engineers alike. Ayodele is also a LinkedIn Learning Instructor with courses on Machine Learning and Ethical AI.

    Ayodele is an AI realist with a mission to educate people on how to identify and mitigate harmful AI to improve the lives of marginalized people.

  • Robert “Bobby” Zipp (he/they) is a civic technologist and algorithmic accountability enthusiast currently serving as a Technical Product Specialist in the Data Analytics & Research Unit at the Manhattan District Attorney's Office. Bobby has prior experience in the United Nations Population Fund’s Policy & Strategy Division and in the education & nonprofit spaces. His first connection to All Tech is Human was serving as a Responsible Tech University Ambassador for Northeastern University from 2021-2022. He has contributed to multiple ATIH working group reports and has participated in ATIH's mentorship program as a mentee.

    Bobby received a BA from Swarthmore College in English Literature, Political Science & Educational Studies with High Honors in 2018 and an MS in Computer Science with a concentration in AI & ML from Northeastern University in 2022. Raised in Dover, Delaware, he now lives in Brooklyn and spends his free time playing kickball and trying to keep up with every international season of RuPaul's Drag Race.

  • As a globally recognized policy advisor, program strategist and thought leader, Breanne establishes company and organization-wide gender, child and youth safeguarding, safety and mental health protocols, international standards, and processes to improve the lives of millions of citizens around the world.

  • Catherine Feldman is the Lead for Human Centered Technology and Sr. Research Strategist at the Digital Data Design Institute at Harvard. She previously cofounded and led a multinational research and development team at Microsoft to launch products designed to facilitate digital social wellbeing. Her research at the MIT Media Lab and as an NSF research fellow for the GLIDE Lab at Drexel University contributed to pro-social technologies used by millions globally. Catherine holds multiple patents for technical user interface elements and network protocols, and her publications and presentations have been featured at AERA, APA Technology, Mind, and Behavior, and the International Conference on Computational Social Science. Catherine's work connecting sociotechnical research and product strategy has made her a sought-after strategist for organizations looking to increase their innovation velocity with humanity at the center.

  • As the global lead for digital ethics at Avanade, Chris is responsible for creating digital ethics fluency and change internally as well as for advising clients on their digital ethics journey. This includes leading training exercises and workshops, conducting digital ethics assessments, and guiding digital ethics program design. He is also proud to be one of Avanade's Citizenship Champions.

    After starting his career in tech marketing and PR, Chris led Forrester’s coverage of governance, risk, and compliance (GRC) for 12 years, helping shape that market’s direction and growth. He guided corporate executives in their efforts to improve their corporate responsibility, enterprise risk management, corporate compliance, third party risk management, information security, and privacy programs. Chris was also a trusted advisor to scores of product and service provider executives, and produced and contributed to hundreds of research reports, webinars, conference speeches, and press interviews.

  • Constance Bommelaer de Leusse has more than 20 years of experience in digital policy, technology, research and education. She currently serves as Project Liberty’s Institute Executive Director. Affiliated with Georgetown, Stanford and Sciences Po universities, this international institute is advancing timely, actionable research on ethical technology and serves as a meeting ground for technologists, social scientists, policymakers and leaders from the public and private sectors. The institute’s mission is to ensure that digital governance is prioritized in the development of new technology and embedded in the next generation of the web.

    She is also a member of the Scientific Committee of the Digital Governance and Sovereignty Chair at Sciences Po, and teaches digital governance on a part-time basis.

    Constance started her career working for the French prime minister’s services (2003-2006) on information society issues. She then joined The Internet Society (2006-2022), the international NGO founded by Vint Cerf, the father of the Internet. In her role as Vice President of institutional relations and empowerment, she led the organization’s international partnerships and policy work across stakeholder communities. She also conducted training and learning activities, empowering the next generation of tech leaders to build an internet that creates opportunity and supports the public interest.

    Constance has been instrumental in developing new internet governance institutions. She notably founded the Internet Technical Advisory Committee (2006) to the Organisation for Economic Co-operation and Development (OECD), facilitating the participation of global technical and academic communities in international policy discussions. In 2013, she was seconded to UNESCO to help develop their internet governance strategy. She co-authored the UN Internet Universality concept, i.e. Rights-based, Open, Accessible and Multistakeholder. The latter has since been used as a foundation to support the evolution of digital policy frameworks at the national and regional levels.

    Constance has served on a number of committees including the World Economic Forum Internet For All Steering Committee, and the UN Secretary-General’s Multistakeholder Advisory Group of the Internet Governance Forum (IGF).

    Constance holds a master’s degree in law from the Paris Panthéon-Assas University, a post-graduate degree in EU politics from Sciences Po, and a diploma in management from the London School of Economics (LSE). She grew up in the United States and has lived in Switzerland. She is currently based in Paris with her husband and their three children.

  • Daniella is a passionate entrepreneur and AI innovator dedicated to developing products that promote human productivity and empower individuals. With her impressive background in law, software development, business, and communication, she brings a unique perspective to the world of technology.

    Daniella's deep interest in AI has led her to co-found Copianto AI, an AI startup that provides businesses with the tools to build their own conversational search experiences leveraging cutting-edge generative AI and LLM models.

    She firmly believes in the potential of technology to promote social justice and economic equality. This commitment to responsible tech is evident in her work at Copianto AI, where she focuses on developing AI solutions that are inclusive, ethical, and beneficial to society as a whole.

  • Danielle Sutton is a Responsible AI Strategist who is passionate about creating a more equitable tech future. She is a 5th generation Harlemite currently pursuing her Masters in Technology Policy at the University of Cambridge. Prior to her graduate studies, Danielle worked in tech strategy consulting at Deloitte, where she focused on the intersection of responsible AI and criminal justice. During her tenure at the firm, she helped spearhead Deloitte's first annual Technology Trust Ethics report, which synthesized inputs from ~2,000 global leaders to identify ethical standards in emerging tech and approaches to operationalization. Danielle has been an avid supporter of All Tech is Human for the past few years through her contributions as a panelist, thought leader, and mentorship program participant. She is excited to help expand ATIH's reach across the globe!

  • Didem cares about the future of the world and nature. She is a computer scientist with a Ph.D. in mechatronics, which can give you an idea of how much she loves to talk about the future and emerging technologies. She is a data person who always finds a way to talk about how important it is to know your data and use it to make decisions, and at some point you can expect her to talk about art, visualizations, and visual analytics. Didem does not hesitate to talk about inequalities and point out her ethical concerns. She dreams of a better world and actively works on reducing inequalities regardless of their nature. She is an analytical thinker with a passion for design thinking, a researcher with a future perspective, an engineer who likes problems more than solutions, and a teacher who likes to play during lectures. She is a good reader, sailor, divemaster, photographer and drone pilot.

  • Ece is a tech professional with over 12 years of experience in online safety, dedicated to striking the right balance between fostering innovation and upholding responsible tech practices while creating safer digital environments. In her career, Ece has worked at major social and entertainment platforms (Meta, TikTok) in a range of roles including policy development, leadership and people management, and program management. She managed teams developing policies and processes for areas including Violent Extremism, Fraud, Regulated Goods, and Privacy. Ece is an active member of the Trust and Safety Professional Association (TSPA), a participant in the All Tech Is Human mentorship program and Responsible Tech Guide working group, and a member of the Marketplace Risk advisory board. Academically, Ece is a Fulbright scholar with a Master's degree in Political Science.

  • Fabienne Tarrant is based in London, holds a bachelor’s degree in International Relations from Brown University, and has a background in technology policy, Trust & Safety, regulatory products, and countering violent extremism online.

    During her time as Senior Policy Analyst at Tech Against Terrorism, Fabienne mentored tech companies on their policies and content moderation processes, facilitated their alignment with global counter-terrorism standards, and played a key role in launching the Knowledge Sharing Platform, a resource that supports Trust & Safety teams and promotes platform transparency. Currently, she works at Airbnb on the Policy Products team, specialising in regulatory product implementation to support government policies in Europe and Asia-Pacific.

    Fabienne’s practical experience also includes internships at organisations like the Institute for Strategic Dialogue (ISD), the Global Center on Cooperative Security, and WITNESS, where she worked on issues related to online extremism and human rights advocacy. At Brown University, Fabienne was also a teaching assistant in Cybersecurity and International Relations and a Research Assistant at the Watson Institute for International and Public Affairs.

    Fabienne has participated in various industry events, webinars, and podcasts. For example, she has presented at the Terrorism and Social Media conference at Swansea University and coordinated and moderated Tech Against Terrorism's e-Learning webinar series, co-hosted with the Global Internet Forum to Counter Terrorism, covering topics like ‘Supporting Platforms’ Content Moderation and Transparency Efforts: Existing Resources and Tools’, 'The State of Global Online Regulation’, and 'The Nexus Between Violent Extremism and Conspiracy Theory Networks Online.'

  • Faisal Lalani is a global researcher, activist, writer, and technologist with a passion for grassroots community organizing in a variety of disciplines, including AI & democracy, public health, education reform, clinical psychology, and social change at large.

    He's worked on rural curriculum development in Nepal, community wireless networks in South Africa, political influence on social media with Microsoft Research in India, machine learning for community health programs around the world for Dimagi, and AI consulting for big corporate clients in the US with Snowflake. After recently graduating from the University of Oxford's Media Policy Summer Institute, Faisal is now in Sri Lanka collaborating with civil society on South Asian tech policy and international relations.

  • As the founder of Future Future, Heidi is at the forefront of supporting innovative ideas with actionable change. Her experience extends from guiding founders through the early stages to advising larger organizations on Responsible Adaptation to steering her startups like Alpha Drive to new heights in benchmarking AI. Joining the All Tech Is Human affiliate program is a natural step in Heidi's commitment to shaping technology that's ethical, responsible, and, most importantly, human-centric. She is excited to advance technology with intention and integrity, together.

  • Ian Eisenberg is Head of AI Governance Research at Credo AI, where he advances best practices in AI governance and develops AI-based governance tools. He is also the founder of the AI Salon, an organization bringing together cross sections of society for conversations on the meaning and impact of AI.

    Ian believes safe AI requires systems-level approaches to make AI as effective and beneficial as possible. These approaches are necessarily multidisciplinary and draw on technical, social and regulatory advancements. His interest in AI started as a cognitive neuroscientist at Stanford, which developed into a focus on the sociotechnical challenges of AI technologies and reducing AI risk. Ian has been a researcher at Stanford, the NIH, Columbia and Brown University. He received his PhD from Stanford University, and BS from Brown University.

  • Jason Steinhauer is a bestselling author and public historian who operates at the intersection of history, tech, social media and politics. Technology has profoundly re-organized our ways of knowing things about the world, and the positive and negative effects of technology must be balanced with a humanistic perspective. Jason brings this approach to his work, where he writes and speaks about how social media, tech and the web are shaping our history, politics and future. He is the bestselling author of "History, Disrupted: How Social Media & the World Wide Web Have Changed the Past"; a Global Fellow at The Woodrow Wilson Center and a Senior Fellow at the Foreign Policy Research Institute; an adjunct professor at the Maxwell School for Citizenship & Public Affairs; a contributor to TIME, CNN and DEVEX; a past editorial board member of The Washington Post "Made By History" section; and a Presidential Counselor of the National WWII Museum. He worked for seven years at the U.S. Library of Congress. He is the founder and CEO of the History Communication Institute, and has traveled overseas with the U.S. Department of State four times as part of diplomatic exchanges between the United States and the European Union, meeting with government officials, scholars and students to discuss the effects of the Web and social media on public understandings of news, history and information. He has spoken at events across the United States and Europe and appears frequently in the media.

  • Jose is an Artificial Intelligence (AI) ethicist, strategist, and humanist. He is currently a lecturer at The University of Sydney, a researcher, a mentor at All Tech is Human, and a contributor to Women in AI, IEEE teams, and AAIAC.

    Jose completed a Master's in IT with a major in Business Information Systems and Data Analytics at the University of Technology Sydney. He also holds a ForHumanity Certified Auditor accreditation in AI, Algorithmic and Autonomous systems.

    Over the past four years, Jose’s academic and professional pursuits have brought him from Latin America to Australia, researching ethical frameworks, their implementation, and their compliance with current laws.

    Jose is passionate about advancing responsible AI practices and collaborating with organizations to create a positive impact. By combining his academic knowledge, practical experience, and dedication to ethical AI, he aims to contribute to developing AI systems that benefit humanity while prioritizing ethical considerations.

  • Julie J Lee, PhD, is the Technology for Liberty Fellow at the ACLU of Massachusetts, where her work contributes to ongoing advocacy and legislative efforts at the ACLU of Massachusetts in areas such as privacy, surveillance, and technology in the public interest. She is also a 2023-2024 Public Voices Fellow on Technology in the Public Interest. Prior to joining the ACLU of Massachusetts, she was a postdoctoral researcher in computational cognitive science at New York University and a research intern with the Surveillance Technology Oversight Project. She has written on topics including the role of AI in mental health and the use of civil asset forfeiture to fund surveillance technologies.

  • Karolle Rabarison is a program director, digital strategist, and community builder who has spent her career amplifying the reach and social impact of technology and nonprofit organizations. Her work has influenced communities ranging from local news entrepreneurs in the U.S. to technologists in Ghana to policy leaders in the halls of the United Nations. She currently leads communications for the Online News Association, working to connect journalists and newsroom leaders with the resources they need to better serve their audiences.

  • Katleho is a PhD candidate at the University of Pretoria, Faculty of Theology and Religion, in South Africa. His PhD research investigates the ethical implications of emerging technologies such as AI within the African context, from the perspective of Ubuntu ethics. He has experience as a teaching assistant at the University of Pretoria and as a lecturer at the University of South Africa. In addition to presenting academic papers locally and internationally, he has done research for private companies and NGOs. He has also contributed to projects on AI and digital policy, AI ethics, and Responsible AI for various organisations.

  • Lama Mohammed (she/her) is a public affairs and communications specialist. In her 9-to-5, she works on policy and communications within artificial intelligence, cybersecurity, and privacy.

    Before joining the communications field, Lama gained cybersecurity policy experience with D.C. government relations firms and the United Nations.

    Lama is also an active member of the technology policy and socially responsible technology space, as she has contributed to university research on the intersections of policy, law, and technology, published podcasts on creating an inclusive Internet, and won second place in the Internet Law & Policy Foundry’s first Policy Hackathon advocating for equitable solutions to bring broadband access to the incarcerated community.

    In addition to her full-time role, Lama is a volunteer at All Tech Is Human, where she has co-authored reports on AI and human rights, building a better tech future, and technology and democracy. Lama is also a Fellow and the New York Regional Chair at the Internet Law & Policy Foundry, where she manages the organization's social media accounts, designs the graphics, and is a frequent host and moderator on Foundry podcast episodes and webinars.

    As an emerging voice in the technology policy community, Lama has spoken at a variety of conferences and panels, including the 2023 IAPP Global Privacy Summit, where she highlighted the privacy issues unique to the youth generation. Lama has also facilitated a security training on AI and currently coaches students about starting a career in responsible technology and cybersecurity.

    Lama graduated with Latin Honors from the American University’s Honors Program and School of Public Affairs with a Bachelor's degree in Communications, Legal Institutions, Economics, and Government (CLEG) and minors in Computer Science and Information Systems and Technology in the spring of 2021.

  • Lindsey is a social impact leader working at the intersection of responsible tech, policy, and civic engagement. As the founder of Know Your Local, she leads a call to action for citizens to engage with their local government. Lindsey is also the Chief Marketing Officer at a civic tech startup that’s reframing civic engagement and advancing democracy through technology. She works across all sectors to advocate for, and advise on, the development and implementation of Responsible AI.

  • Martin Cocker is the founder and CEO of the Online Safety Exchange (OSX) charitable company and host of the Radio SOSO (The Science of Safety Online) podcast. The OSX is an international partnership headquartered in New Zealand that is dedicated to the design and development of products and services to support online safety practitioners.

    He was the CEO of Netsafe New Zealand from 2006 to 2021. During this time, he oversaw the establishment of a national assistance line as part of the Harmful Digital Communications Act civil process in 2016, a national cybercrime reporting hub in 2010, and multiple award-winning public education campaigns.

  • Nadah Feteih is an Employee Fellow at the Institute for Rebooting Social Media. She holds B.S. and M.S. degrees in Computer Science from UC San Diego with a focus on systems and security. Her background is in privacy and trust & safety, working most recently as a Software Engineer at Meta on the Messenger Privacy and Instagram Privacy teams. She was promoted to Senior Software Engineer within two years and was involved in various integrity workstreams at the company, escalating content moderation issues and bringing awareness to bias in product features and enforcement systems. She is passionate about building and supporting communities as the founder of Muslim Women in Tech and through her prior work at Meta in consulting and building features for the Faith team. She is currently a Tech Policy Fellow with the Goldman School of Public Policy at UC Berkeley and is driven to use her technical expertise and expand her knowledge in ethics, policy, and integrity through her work at RSM. In her free time, you’ll find Nadah outdoors: hiking, running, or climbing mountains.

  • Nakshathra Suresh is a cyber safety researcher and speaker who has presented at various conferences and fora, both in Australia and overseas. Holding postgraduate qualifications in criminology and criminal justice, she has received awards for her unique storytelling and innovative presentations. Nakshathra is an avid mental health advocate and feminist, and has always been passionate about cyber safety, drawing on her lived experience as well as her research. Driven to create better and safer digital environments through conversations around responsible technology, Nakshathra adopts an ethical, intersectional viewpoint to her work and activism. She has previously held mid-level senior roles in outreach, analysis and law enforcement in the Australian Government.

  • Numa Dhamani is an engineer and researcher working at the intersection of technology and society. She is a natural language processing expert with domain expertise in influence operations, security, and privacy. She has developed machine learning systems for Fortune 500 companies and social media platforms, as well as for start-ups and nonprofits. Numa has advised companies and organizations, served as the Principal Investigator on the United States Department of Defense’s research programs, and contributed to multiple international peer-reviewed journals. She is also deeply engaged in technology policy, supporting think tanks and nonprofits with data and AI governance efforts.

    Numa is the co-author of Introduction to Generative AI, to be published by Manning Publications. Her work on combating online disinformation has been featured in several news media outlets, including the New York Times and the Washington Post. Numa is passionate about working towards a healthier online ecosystem, building responsible artificial intelligence, and advocating for transparency and accountability in technology. She holds degrees in Physics and Chemistry from the University of Texas at Austin.

  • Paola Maggiorotto currently works at Teleperformance as Senior T&S Process Director. As the Americas Regional T&S Lead, she is responsible for growing Teleperformance's T&S Centre of Excellence across its entire ecosystem.

    Paola is currently based out of Medellin, Colombia. Originally from Italy, she spent 12 years in Dublin, Ireland.

    Her last job there was at Meta as T&S Global Process Manager (specialized in violent, graphic and highly egregious content), leading a team of Project and Program Managers, scaling solutions helping prevent online and real-world harm and reducing risk by building scalable support systems and optimizing processes.

    Prior to that, Paola worked at Microsoft as EMEA Technical Support Manager and Search Editorial Specialist ensuring a high-quality and safe user experience on Microsoft Advertising and partner sites.

    Paola has 11 years of experience in T&S, through people, project and program management, stakeholder engagement, operations and communication.

    Paola is also super passionate about DEI and human rights and has been involved in leading multiple DEI and ERG global initiatives, focusing on guaranteeing a gender and culturally balanced recruiting pipeline/workforce and advocating for more inclusion of women and underrepresented categories in technology and the corporate world.

    Outside of the corporate world Paola volunteered in different NGOs and served for 5 years as Director in the Board of AkiDwA, an Irish charity supporting migrant women in Ireland.

  • Paula has extensive experience in building and maintaining trust and safety in online platform environments and communities. She spent the last 12 years holding global roles across the tech industry in companies such as Google, Meta and Reddit leading on multiple areas of policy development and enforcement. Her areas of expertise and interest range from platform integrity and governance to child safety, consumer protection and advocacy, content moderation and digital regulatory trends.

    With a background in linguistics, translation and localisation, she is passionate about the global role of technology and innovation for social good and a fairer digital development. She advises tech companies and professionals on product safety and integrity issues and how to navigate the current global landscape while evolving their product/service, policies and practices responsibly and safely.

    She is a mentor at All Tech Is Human, an active member of the Trust & Safety Professional Association and the Integrity Institute, and is also academically involved in Artificial Intelligence policy research initiatives such as the Generative AI Working Group led by Harvard's Digital, Data, and Design (D^3) Institute. She is currently pursuing a Master’s programme in European Law and also serves as volunteer director of the board of an educational non-profit organisation in Ireland - the Aisling Project - that aims to provide a safer aftercare service for children in disadvantaged areas.

  • Pearle is a distinguished Nigerian lawyer with five years of experience in the field of tech policy and public affairs. A proud alumna of the University of Lagos and UC Berkeley, she has successfully represented renowned companies such as the Oversight Board, TikTok, and Twitter. Her career has been marked by collaborative efforts with policymakers and private sector partners, addressing critical matters pertaining to tech-related public policy. With a remarkable talent for management, Pearle effectively led a team of over 20 members, offering invaluable policy guidance and facilitating seamless communication within diverse markets. Her exceptional leadership has been recognized with two prestigious company awards.

    Beyond her legal expertise, Pearle is a dedicated and passionate speaker committed to demystifying the intricacies of tech policy and various issues within the tech industry. She is deeply committed to guiding young professionals, particularly those in the legal field, in finding alternative career pathways in tech. Through her advocacy, Pearle aims to empower and assist aspiring professionals in seizing career opportunities, networking, and showcasing innovative ideas within Africa and beyond.

  • Rebecca Thein is an accomplished professional specializing in product, program, and people management at the confluence of society, responsible product development, and accessible design. Most recently, as Senior Technical Program Manager at Twitter, she played a central role in leading the global expansion of civic and crisis response initiatives. Her expertise guided significant projects, such as overseeing operations during the Brazil & US midterm elections, addressing COVID-related misinformation, and managing crisis situations, notably the conflict in Ukraine. Additionally, she is recognized as a Fellow through the Integrity Institute, where she continues to provide her expertise in the field of responsible technology, and she brings extensive experience from her earlier role as a member of the Tech and Democracy working group through All Tech is Human, where she contributed information instrumental in crafting their most recent Responsible Tech Guide. Rebecca's dedication extends to her role as a Digital Sherlock through The Atlantic Council's Digital Forensics Research Lab, where she demonstrates her commitment to digital safety and cybersecurity.

    Rebecca's commitment to responsible tech and civic integrity extends to her participation in various conferences and speaking engagements. Notably, she has been an invited panelist at DEFCON's Voting Village, addressing the 2024 Election Threat Landscape. She has shared her expertise on topics such as "Building Competencies for Civic Integrity Professionals" at the RightsCon International Human Rights Conference in Costa Rica. Recently she has been a guest speaker at Georgetown University on two occasions, presenting insightful discussions on Digital Disruptions to Democracy and the specific impact of AI on the 2024 elections. Her contributions and presentations underscore her influence in the domains of responsible tech and technology and democracy. Rebecca's influential podcast appearances, articles, and co-authored materials further establish her as a prominent figure in the responsible tech and technology and democracy domains.

  • Sabhanaz Rashid Diya is the founding board director at Tech Global Institute, a global policy lab with a mission to reduce equity gaps between technologies and the Global Majority. A computational social scientist by training, Diya has over two decades of experience working at the intersection of tech policy, human rights and democracy. Previously, she was the head of public policy at Meta, where she led teams responsible for government relations, online safety and privacy in the region. Her career spans private and public institutions in the U.S., Asia and Africa, including at Bill and Melinda Gates Foundation, USAID and the World Bank, on digital economy, governance, artificial intelligence and trade. She is a founding board director for U.S. Bangladesh Business Council at the U.S. Chamber of Commerce. She holds a Master's degree in public policy from the University of California, Berkeley, and serves on the board of numerous gender justice and civil liberties organizations.

  • Dr. Savannah Thais is an Associate Research Scientist and adjunct professor in the Data Science Institute at Columbia University. She runs a research group focused on responsible, robust, and trustworthy machine learning (ML) and AI. She approaches this work from both a scientific lens, informed by her background in physics and focusing on understanding how, when, and why ML/AI models work, and from a participatory democratic lens, supporting technical knowledge development and activism in communities and advocating for effective and just regulation of ML/AI. In addition to her research, Dr. Thais also developed and teaches a new AI Ethics course for the Data Science Masters program.

    Dr. Thais currently serves on the Executive Board of Women in Machine Learning and the Executive Committee of the APS Group on Data Science, and is a Founding Editor of the Springer AI Ethics journal. Outside of her academic and professional work, she is active in local organizing, politics, and mutual aid in Brooklyn, particularly her local neighborhood of Bed-Stuy.

  • Seiyoon, a tech professional based in Seoul, South Korea, has valuable experience in both tech policy and business. She has worked with TikTok's Public Policy team, the entertainment strategy team at NCSOFT (a major South Korean game publisher), and currently manages the global advertising business at LINE-Yahoo Corporation. With a BA in Political Science from Seoul National University and a strong background in research and critical thinking, she offers a unique perspective on technology, combining regional insights with a global viewpoint. She is eager to share her expertise on the East Asian digital landscape, ad-tech, AI, and policy developments.

  • Swapneel Mehta is a Postdoctoral Associate and Founder of SimPPL.

    At Boston University and MIT, Swapneel researches platform governance and free speech. He holds a Ph.D. from NYU's Center for Data Science, specializing in machine learning, causal inference, and their applications in social media and politics at CSMAP. In 2021, he founded SimPPL, a research collective focused on creating civic integrity tools for media development organizations and journalists, which he continues to lead. SimPPL has won awards, fellowships, and grants from Google, Mozilla, Amazon, the Wikimedia Foundation, the Goethe Institute, the Anti-Defamation League, the NYC Media Lab, and others. Swapneel is passionate about empowering researchers from the global south to participate in mitigating online harms and building responsible AI tools for global audiences. He has previously worked on machine learning products and research at Slack, Adobe, Twitter, Oxford, CERN, and various startups catering to Fortune 50 clients in the domains of artificial intelligence and cybersecurity.

  • A lawyer and an international relations professional by trade, Talita has worked on tech policy, human rights and sustainable development across the public, private, and multilateral sectors. Some of Talita’s past work experiences include the United Nations, the World Bank, and São Paulo’s State Public Defense. Most recently, she worked as the Program Director of the PEN/Barbey Freedom to Write Center, where she led the Combating Online Cultural Repression project and worked on the use of surveillance tools against free speech across the globe.

    Talita holds a B.A. in Law from the University of São Paulo and a Master of Science in Global Politics and Security from Georgetown University’s School of Foreign Service. She is fluent in Portuguese, French, and Spanish, and has lived and worked in São Paulo, Buenos Aires, Washington DC, New York, and London.

  • Theodora (Theo) Skeadas is the CEO of Tech Policy Consulting, where she has consulted with organizations including Partnership on AI around responsible AI, Carnegie Endowment for International Peace's Partnership for Countering Influence Operations on government efforts to combat disinformation in Ukraine, National Democratic Institute on online gender-based violence and women’s political participation, and the Committee to Protect Journalists on a journalist safety tool. She also works part-time as the Executive Director of Cambridge Local First, a community nonprofit that addresses issues at the intersection of technology policy and small business.

    Previously, at Twitter, she managed the Trust and Safety Council, a research hub within the Public Policy team, and a trusted flaggers program for human rights defenders, and she supported Twitter's global civic integrity, transparency, and crisis response efforts. Before that, she spent five years working in national security at Booz Allen Hamilton, examining public sentiment, social movements, and disinformation using social media for the U.S. Federal Government. Earlier, she worked with nonprofits in Morocco (Search for Common Ground, Innovations for Poverty Action, Sidi Moumen Cultural Center, and Sister Cities International - Africa), Turkey (Fulbright), Palestine (Inspire Dreams), Greece (Center for Hellenic Studies), and Costa Rica (World Teach).

    She has an MPP from the Harvard Kennedy School and a BA from Harvard College. She has language experience in French, Modern Greek, Modern Standard Arabic, Modern Turkish, Moroccan Arabic, and Spanish.

  • Zia Mohammad is a technologist dedicated to shaping the future of emerging technology policy. With a diverse background and extensive global experiences, he has been immersed in the fields of venture capital, artificial intelligence, and quantum computing.

    Currently based in New York City, Zia serves as a Senior Product Manager at Amazon Web Services working on Amazon's quantum computing service. Additionally, he is a Fellow at the Internet Law and Policy Foundry. Academically, Zia holds degrees in Electrical Engineering and Computational Neuroscience from The Ohio State University.

    Beyond his professional commitments, he can be found tending to a community garden, volunteering to end food insecurity, or on a quest to discover the city's best desserts.

Help strengthen the Responsible Tech movement and elevate new and diverse voices.