Trust and Safety Leader Interview: Vaishnavi J, former head of youth policy at Meta.

Vaishnavi J is the founder & principal of Vyanams Strategies, advising companies and civil society on how to build safer, more age-appropriate experiences for young people. She is the former head of youth policy at Meta.

Vaishnavi will be speaking at All Tech Is Human’s upcoming Responsible Tech DC: A Better Tech Future for Youth, taking place on Wednesday, February 28th in Washington, DC. Learn more about this gathering here, and read a piece that Vaishnavi just wrote for Tech Policy Press (Red Herrings To Watch For At The Senate’s Child Safety Hearing) here. She was also a guest on the Integrity Institute’s Trust in Tech podcast, which you can listen to here.

Read Vaishnavi J’s full interview below.


“I dream of a tech future that sees the opportunity to design for all parts of the community as a gift. One that sees the complexity of building responsible tech as an exciting opportunity for innovation and growth, rather than a burdensome obstacle to moving fast and breaking things.”


What are the key topics on your end for 2024?

2024 will be a key year to define who is responsible for supporting young people in having healthy experiences online. Historical approaches have outsourced this work to parents and educators, instead of looking at the companies that build these products. That is changing as more countries look to implement legislation around safe product development, establish guardrails around data collection and targeting children, and increasingly require companies to build products in an age-appropriate way. The public conversation around responsible tech for youth is quickly evolving, and we are a part of those discussions.

Generative AI is also exponentially accelerating conversations around responsible and inclusive tech, raising issues such as data collection practices, mitigating bias in model design, and moderating interaction outputs, especially for young people. We are closely tracking these developments and informing discussions with companies, civil society, and policymakers around what future-proof, innovative tech policy can look like.

Finally, we expect brand safety to take on heightened importance in 2024, as consumers increasingly expect the brands they patronize to model the values that they stand for, and are willing to back these expectations with their dollars. As global economies slow down, we see companies widening their definitions of brand safety, turning to us for assistance, and placing increasing weight on whether online platforms are able to protect their brand.

What are the key challenges you face?

Young people are coming online earlier and staying online longer than ever before. But online experiences were not designed with them in mind, so we challenge long-standing assumptions about how the internet should operate for them.

We do this in a way that also supports youth agency, privacy, and independence, without getting overly paternalistic and infringing on their human rights. Walking that tightrope takes deep experience in policy and product development, a lot of comfort with grey areas, and a willingness to engage with multiple stakeholders who may hold wildly differing views.

We also challenge the historical lack of inclusivity in tech, asking companies, policymakers, and civil society to hear directly from youth across a range of cultural and socio-economic backgrounds about their time online and what they need for healthier experiences. Current models of product and policy development have (among other factors) exacerbated experiences of child endangerment, social isolation, and harassment. We need updated frameworks rooted in inclusivity, built by cognitively diverse teams.

What are the most exciting Responsible Tech initiatives you're working on?

In 2024, I’m excited to be working more with companies that want to build responsibly from the outset, rather than patch up launched products once harmful experiences are observed. They’re also doing this in truly innovative ways and see this as an exciting product and policy challenge to solve, a great mindset with which to approach this work. It signifies a real shift in some of the industry’s thoughts around responsible tech, and groups like All Tech Is Human have been instrumental in driving that change.

I’m also excited to be having more thoughtful, informed conversations with policymakers and civil society, alongside other folks from tech who can add more nuance to the proposals under consideration than was previously possible. The mass tech layoffs of 2022 and 2023 released a large number of practitioners into the field, and they have been a boon to the public conversation around responsible tech, since we worked for so long on these issues from within these companies. We are speaking about the tradeoffs in much more concrete terms than ever before.

What is one thing about your industry or sector you can demystify?

Historical discussions of how to support youth online usually revolve around either allowing or banning them from seeing certain content or behaving in certain ways with one another. This is a false binary; there are so many other ways to intervene in between these two extremes! We work with companies to identify the points at which harm mitigation is possible, the range of mitigations we can take, and how to measure the effectiveness of these mitigations.

What is your vision for a better tech future?

I dream of a tech future that sees the opportunity to design for all parts of the community as a gift. One that sees the complexity of building responsible tech as an exciting opportunity for innovation and growth, rather than a burdensome obstacle to moving fast and breaking things.
