New Careers in Responsible Tech This Week: Responsible AI, Trust & Safety, Product Management & More

All Tech Is Human’s Responsible Tech Job Board curates roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest. It is the premier resource in the field for people exploring a career change, seeking a new role, or interested in the growing responsible tech ecosystem.

This week, we’re highlighting 10 new roles added to the Responsible Tech Job Board! To view hundreds of openings in the field, click the button below!


Lead Product Policy Manager, Spam, Pinterest

Pinterest is looking for an enthusiastic and thoughtful team player to join its Policy team as Lead Product Policy Manager for spam. The spam world is full of dynamic adversaries who change tactics regularly to avoid detection, and Pinterest needs to stay a step ahead of bad actors to keep the platform the inspiring place its users love. You will draft policies that define spam on Pinterest, create detailed policy enforcement guidance to help Pinterest’s Operations teams fight spam, and collaborate closely with cross-functional partners across the company, including Product and Engineering teams, to tackle spam and abuse at scale. This role will sometimes involve reviewing graphic and disturbing content, discussing difficult subjects, and responding to escalations.


Data Science Manager for Wellness and Resiliency, TaskUs

The Data Science Manager for Wellness and Resiliency oversees data projects and manages a team of expert data analysts and scientists in the space of content moderation and customer experience. Responsibilities include…

  • Proactively engage with stakeholders and anticipate stakeholder needs.

  • Contribute to the strategic planning in both data strategy and service/product strategy.

  • Prioritize and conduct projects that vary in approach, scale, scope, timeframe, and methodology across multiple work streams, making sure to work on the right project at the right time.


Technology & Human Rights Intern, Amnesty International

The purpose of this assignment is to support the delivery of policy and research projects under Amnesty Kenya’s technology and human rights program. The successful candidate will join Amnesty International’s Technology & Human Rights team, working to strengthen data governance in Kenya and to advance digital rights for children and young people.


Lead Product Manager - AI GRC Content Strategy, Credo AI

Credo AI is seeking a highly motivated, policy-minded person to join its product team on the journey to address the world’s biggest challenges in scaling responsible AI. As the Product Content Strategy Lead at Credo AI, you will play a critical role in shaping Credo’s Governance, Risk, and Compliance (GRC) content strategies. A member of the Credo AI Product team, you will be responsible for the delivery of Credo’s in-product risk and compliance content as a core component of its Governance Platform.


Research Scholar (Special Projects), Centre for the Governance of AI

The Research Scholar position is a one-year visiting role. It is designed to support the career development of AI governance researchers and practitioners, as well as to offer them an opportunity to do high-impact work.

As a Research Scholar, you will have freedom to pursue a wide range of styles of work. This could include conducting policy research, social science research, or relevant technical research; engaging with and advising policymakers; or starting and managing applied projects.


Head of Policy, SaferAI

SaferAI is looking for a French-speaking Head of Policy with a strong ability to do government affairs work, especially in France. The Head of Policy will work closely with the leadership and with senior advisor Cornelia Kutterer (former Senior Director of EU Government Affairs at Microsoft) to engage with relevant stakeholders and develop strategic partnerships. You will be a key enabler in ensuring that SaferAI’s work reaches the stakeholders who need it and that the upcoming French AI Summit covers safety adequately.


Software Engineer, BlueDot Impact

BlueDot Impact is looking for a software engineer who is obsessed with building high-quality experiences for the students in its courses. You’ll identify challenges students face when upskilling and pursuing impactful opportunities, you’ll propose solutions to those challenges, and you’ll build products that accelerate its students’ impact.


Trust & Safety Lead, Snap Inc.

Snap Inc. is looking for a Trust and Safety Lead based in its London, UK, office to join its global team. This people management role requires a skilled and unflappable decision maker who has a passion for developing safety workflows and partnerships in order to improve the experience for Snapchatters everywhere. You’ll become a subject matter authority in multiple domains by owning projects from start to finish and facilitating cross-team communication with the Legal, Customer Operations, Global Safety Partners, and Engineering teams to help keep Snapchatters safe.


Child Safety Enforcement Specialist, Trust and Safety, Google

Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of Google’s products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect Google’s users and partners from abuse across products like Search, Maps, Gmail, and Google Ads. On this team, you’re a big-picture thinker and strategic team player with a passion for doing what’s right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.


AI Policy Analyst - Tooling, Responsible Artificial Intelligence Institute

Responsibilities for this role include…

  • You will play an active role in developing and scaling the delivery of RAI Institute’s responsible AI guidance, including assessment reports and policy tools. 

  • You will work in project management and policy support capacities with technical staff to develop semi-automated and automated tools and platforms utilizing or operationalizing RAI Institute policy assets. 

  • You will guide the development and roll-out of RAI Institute’s certification program and lead the development of accompanying certification guidance. 
