New Careers in Responsible AI This Week!

All Tech Is Human’s Responsible Tech Job Board curates roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest. It is the premier resource in the field for people exploring a career change, seeking a new role, or interested in the growing responsible tech ecosystem.

This week, we’re highlighting 10 roles in Responsible AI. We’re dedicated to mapping the Responsible AI ecosystem, highlighting the key organizations, roles, and people working to create a better tech future. Interested in learning more about Responsible AI? Check out our knowledge hub!


Senior Responsible AI Program Manager, Microsoft

The Sensitive Uses program within the Office of Responsible AI is responsible for collaborating with teams across Microsoft to provide consulting and issue policy guidance for Microsoft’s most sensitive and cutting-edge AI products, initiatives, and partnerships. From working alongside engineering colleagues on product development to engaging with research and sales teams from around the world, Sensitive Uses is where responsible AI principles meet real-world practices.

As a Senior Responsible AI Program Manager on the Sensitive Uses team within the Office of Responsible AI, you will provide internal consulting and assessment of high-impact AI use cases and support all facets of the Sensitive Uses program. You will evaluate AI-driven products to assess risks, define requirements, develop strategic initiatives, manage sensitive use case programs and processes, and socialize policies that support Microsoft’s ability to develop and deploy AI systems safely and responsibly. In doing so, you will partner with policy, research, sales, and engineering stakeholders to rapidly adjust to the changing AI and policy landscape and drive new requirements across product teams.


AI Privacy and Security Project Manager, Sony AI

Sony AI is currently seeking an experienced and self-motivated Project Manager who loves taking on challenges in fast-paced environments. In this position, you will collaborate closely with AI privacy and security research scientists, engineers, and various high-impact business units across the Sony group to drive multiple AI privacy and security initiatives. This role encompasses both technical program management and general project management responsibilities, adapting as required to ensure successful outcomes.


Principal Software Engineer, Generative AI, Mozilla

Principal Engineers are industry experts in their domain. They help define Mozilla’s product strategy and goals affecting multiple teams and turn our strategy into coordinated action for those teams. They mentor others by transferring responsibilities to more junior engineers so they can take on new ones, while collaborating with management to build team consensus and provide direction.


Research Assistant - Wadhwani Center for AI and Advanced Technologies, Center for Strategic and International Studies

The CSIS Wadhwani Center for AI and Advanced Technologies seeks to answer vitally important questions about the future of Artificial Intelligence and its implications for the global economy and international security. The Wadhwani Center is hiring a Research Assistant (RA) who is highly motivated and professional and has a strong interest in supporting the Center’s research efforts. The RA will be responsible for a wide range of project components, including supporting the coordination of events and providing research, analytical, and writing support for the Wadhwani Center’s director and fellows.

This position is research intensive and therefore requires excellent time management skills and careful attention to detail. Candidates should function well in a fast-paced team environment, adapt quickly to a wide variety of research and programmatic tasks, and have strong written and oral communication skills. Team members will have regular opportunities to contribute research, input, and ideas for analytical pieces.


OECD.AI Policy Observatory Intern, OECD

The OECD Internship Programme is an opportunity for highly qualified and motivated students with diverse backgrounds to gain valuable experience honing their analytical and technical skills in an international environment. Preference will be given to interns who can commit full-time, on-site, for a period of six months.


Senior Policy Counsel or Director: Data, Decision Making, and AI, Future of Privacy Forum

FPF is seeking to hire a Senior Policy Lead to support its work on artificial intelligence, machine learning, and other decision-making systems. 

This role will support the organization’s portfolio of projects exploring the data flows driving algorithmic and AI products, including generative AI, the possible harms, and the available mitigation measures. 

Key responsibilities for the position include:

  • Working with the FPF team to help define and execute the strategic vision for FPF’s Data, Decision-Making, and Artificial Intelligence workstream and portfolio

  • Growing and managing a team of experts at FPF to support this vision

  • Staying abreast of the latest developments in the related technology areas

  • Analyzing and writing on the ethical, legal, and policy issues presented by these developing technologies, with an emphasis on automated decision-making systems

  • Leading outreach and collaboration with FPF stakeholders, interest groups, civil society organizations, relevant government agencies, academics, and other key partners

  • Organizing events and meetings with stakeholders, both in-person and virtual


AI Governance Fellow, Center for Democracy and Technology

The Center for Democracy & Technology (CDT) is seeking a Fellow with research and/or applied technical expertise on issues relating to the governance of artificial intelligence. The Fellow will contribute to CDT’s broad body of work on the responsible design, testing, monitoring and regulation of AI systems.

The Fellow will develop original research and writing on questions that are core to current AI governance efforts in the U.S., EU, and globally. Example topics include effective approaches to bias measurement, transparency and explainability, and frameworks for impact assessments and auditing. Fellows who wish to focus on safety measures for AI systems, such as safety evaluations, red-teaming, watermarking, and approaches to support open-source releases while addressing safety risks, are also welcome.


Communications Director, Center for AI Policy

As Director of Communications at the Center for AI Policy, you’ll work closely with its executive team to design and execute a strategy for drawing positive attention to CAIP and its legislative agenda. You’ll help CAIP make key decisions about where to focus its media efforts, what audiences CAIP should be trying to reach, what messages are most important for those audiences to hear, and how to frame those messages.

A typical week would involve:

  • Drafting op-eds, memos, press releases, endorsements, and other public statements

  • Identifying and acting on opportunities to promote CAIP and its ideas

  • Reaching out to print media, TV, radio, and podcasts to propose and arrange interviews

  • Maintaining a blog and/or social media presence for CAIP

  • Following discussions of AI risk in government and in the media

  • Working with CAIP’s Government Relations Director and lobbyists to support their political strategy


Senior AI Product Counsel, ByteDance

ByteDance's AI Product Counseling team provides front-line legal support to the company's AI products and services. The team is growing fast and seeking highly experienced, bright, and capable product counseling professionals to join us. The role will be an integral resource for ByteDance's AI product and business teams by reviewing new applications, features, functionalities, and initiatives and providing guidance, from a global perspective, on legal rights implications and risk mitigation strategies. The position will work closely with and report to the Head of the AI Product Counseling team.


Staff AI Research Scientist, Duolingo

Duolingo is searching for an AI Research Scientist experienced with recommendation systems to join our efforts to personalize learning. Duolingo has a unique opportunity to define the future of personalized education with one of the world’s largest datasets. As an AI Research Scientist, you will apply your background in AI and machine learning to invent the technologies that will define the future of education. In many cases, this means identifying entirely new research problems, or synthesizing work that spans several fields. You will find opportunities to improve existing systems – or reinvent them completely – to tackle Duolingo's unique data.
