Responsible Tech DC: A Better Future for Youth—download our report below!

All Tech Is Human was proud to host Responsible Tech DC: A Better Tech Future For Youth, a landmark gathering of Trust & Safety professionals, researchers, civil society orgs, youth advocates, youth leaders, and other groups on Wednesday, February 28, at Union Stage in Washington, DC.

Panel #1: Designing for Youth Around Autonomy, Inclusion, and Safety

All Tech Is Human Head of Partnerships & Trust and Safety Lead Sandra Khalil hosts Adele Ann Taylor (Youth Digital Programs and Partnerships Senior Manager at Thorn), Dr. Siva Mathiyazhagan (Research Assistant Professor and Associate Director of Strategies and Impact at the SAFE Lab, University of Pennsylvania, and Faculty Associate at the Berkman Klein Center for Internet & Society, Harvard University), Victoria McCullough (who leads TikTok's Trust and Safety Outreach and Partnerships team for the Americas), and Ava Smithing (Director of Advocacy and Operations at the Young People’s Alliance) in a panel conversation about youth empowerment through autonomy, inclusion, and safety.

Highlights

How does platform design influence youth autonomy and rights?

“These young people are constantly under pressure to show they're having a good time, that they're enjoying their life and that their life is worth seeing on social media. And they're not even just comparing that life with their friends and their peers but, we know from the Meta lawsuit, teenage girls on Instagram see five times as much content from popular users as they do from their friends and their peers. So they're not just having to post about their life and then compare it to their peers, but they also are posting about their lives and comparing it to people who get paid to monetize their content and get paid to make their lives look good. So I think when we're looking at privacy and autonomy, we have to think about how we can integrate those things into the core functions of a platform and allow people to be independent of social media and live an offline life as well.” - Ava Smithing

“The design elements of online platforms really need to shift. And if they don't shift, they're causing more real-world harm and bias in conversations with people, as well as producing more bias, hate, and violence. There's the privacy and autonomy side of it, and the digital literacy side; when the platforms are designed, it's all very quick, and young people are not even really [considered].” - Dr. Siva Mathiyazhagan


How have platforms evolved their relationship with parents?

“A couple of things for TikTok: one, we see parents and caregivers as some of our most critical partners when it comes to education, working with them to educate and have conversations with their teens.

Part of our big mission with trust and safety outreach, the team that I'm a part of, is working with external voices, including parents and caregivers and organizations like the Family Online Safety Institute who talk to parents directly, and really leveraging and putting a microphone to some of those resources and making sure that they're available on our platform.

I think the other thing is we can't stop there and so a lot of it is based on what Ava and Dr. Siva said tonight. We're working with external voices, teens directly, to design the platform, design products, and design features that are actually taking some of those safety concerns into mind.

And so we do this in a number of different ways. One of the features I'll mention that I think is really dedicated to working with parents is our Family Pairing feature, which is really there to allow parents access and to be able to kind of control and support their teen on TikTok. And we continue to enhance that, but a big part of it is we've known in the last couple of years, we have got to educate parents and caregivers on the existence of those features and that they're available. And that's just been a big undertaking of ours. Just in the last year, we put a huge investment into educating around that specific feature for parents and have been able to reach over 400 million folks, and so we're continuing to see adoption.” - Victoria McCullough

“I think it's really important that we educate parents and we bring them along in the conversation.

But I think the key word there is conversation. Too often we're seeing a huge gap between youth and caregivers and parents. Their first reaction is, okay, well, you just can't be on that app or I'm limiting this opportunity for you without having those important conversations, not just about the youth, where they stand on the app, or how they show up on the app, but just being a better digital citizen overall.

And so I think for us at Thorn, it was really important that we tackled that gap and we did so through building a resource hub, as you mentioned, called Thorn for Parents that allows parents to go online and see, okay, this is my first time having a conversation maybe with my nine year old. Where do I even begin?

And giving them scripts and dialogue, because that is key. It's not just about parental restrictions, it's not just about saying your child can't be on it, but really making sure you're having those important conversations, not just then at nine years old, but at 13, at 15, and so on and so forth.” - Adele Ann Taylor


Panel #2: What is the Current State of Trust and Safety?

Andrew Zack (Policy Manager, Family Online Safety Institute) hosts a panel featuring Vaishnavi J (Founder and Principal of Vyanams Strategies), Chanel Cornett (Senior Counsel on the Trust & Safety and Privacy teams at Zoom), and Dave Byrne (Founder of TrustRaise) about the current state of Trust and Safety amid tech layoffs and an evolving tech landscape.

Highlights

What specific pieces of legislation have good Trust and Safety ideas?

“When I think of policy ideas that I support, they're along the lines of that theme of the trust and safety industry not working in a silo. I'm a huge supporter of third-party risk assessments. We've seen that across the globe with the DSA, there's a requirement there. In the past, platforms were possibly conducting these risk assessments or even research into the risks their platforms had on kids. But these were done in-house, right? What that meant was they were able to either hide the results or ignore the results or skew them and interpret them for their own purposes.

And so now with third-party risk assessments, these will be conducted externally. I think it's an added lever of accountability for platforms, and they won't be able to just hide. And governments will have oversight and enforcement powers over these as well. And, also, to make sure that there are mitigation measures in place to meet these risks and to actually get them addressed.” - Chanel Cornett

“When I think of where there's actually good legislation, where there's good momentum, I'm going to say that a lot of it is right now coming out of Europe. A lot of it is coming out of the European Commission with the DSA, it's coming out of the UK with their framework around AI regulation, it's coming out of Ireland with the Child Online Safety Act. I think there are good things happening, but I think when we look here in the U.S., there's still a lot that can be learned from what's happening there. They are really focused on transparency and accountability, which are essential, but also making sure that there are robust frameworks so that true accountability can happen. That it's not just, ‘hey, we're just gonna end up arbitrating in court for years with different people that we think are going to be going against what these laws are’; there are going to be forums for actual discussion of, like, ‘how do we not only look at what we have today, but how do we continue building on this collaboratively and as an industry?’” - Dave Byrne

“I have one very clear favorite policy, a regulatory development of the last few years, and it is age appropriate design.

It's such a glorious mouthful of a phrase, and yet suddenly we all know it, we all talk about it, it's mushroomed into legislation across a number of countries. It's a thing in California and I am excited about it because it speaks to a concept that is pretty self-evident that young people should have developmentally appropriate experiences online.

“I think most of us inherently have always believed that. We know that what a 13-year-old experiences online should probably be moderated in some way that's different from what a 30-year-old experiences online. But I think historically, we outsourced a lot of that work to parents, to educators, to social institutions, to law enforcement. We didn't give them too many resources to really do it. I mean, we have great literacy and citizenship programs and guides to help parents have these conversations. But I think with age appropriate design, we're finally seeing a recognition that if you are building for young people, if you are marketing to them, if you are profiting from them, surely you must design responsibly for them.” - Vaishnavi J


What lessons can we learn from the last 20 years of social media to inform safe Generative AI development?

“You're going to see transparency become a really strong drumbeat. We already see that. You're going to see people wanting to understand what goes into your models. How do you identify vulnerabilities? Will you open up access to researchers? The second thing, and this is really from the more of the youth perspective is, when products or policies are not explainable or understandable by the general population or the government, sometimes well-intentioned but bad regulation follows and it's accompanied by a lack of public trust.” - Vaishnavi J

“There is something to be said that when it comes to generative AI, a lot of kids actually know way more than their parents. It's great that we have these discussions about how we're supporting kids and generative AI, but I think there also needs to be education for parents as well on how to navigate these areas. Just to give you an example, my nieces, they will see a generative AI image and they'll go, “that's fake.”

My brother will go, “Wow, look at that, isn't that crazy?” That kind of education to make sure that parents and kids can have a conversation as well.” - Dave Byrne

“There are also some important lessons that we can learn about the use of AI in our own work to prevent and respond to online harms. The biggest one in general is that AI is not a replacement for humans. It's not sophisticated enough to understand evolving context, and it will never be as good as a human. I think it's a great tool that allows us to scale and to work faster and more effectively. But I do not think that it should be used as a tool to replace humans in the prevention of bad content and awful things that occur online.” - Chanel Cornett

Please note: Quotes have been lightly edited for readability.

Scenes from Responsible Tech DC 📸
