New Careers in Responsible AI This Week!

All Tech Is Human’s Responsible Tech Job Board curates roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest. It is the premier resource in the field for people exploring a career change, seeking a new role, or interested in the growing responsible tech ecosystem.

This week, we’re highlighting 10 roles in Responsible AI. We’re dedicated to mapping the Responsible AI ecosystem, highlighting the key organizations, roles, and people working to create a better tech future. Interested in learning more about Responsible AI? Check out our knowledge hub!


Assistant, Associate, or Full Professor – Generative AI and Sound Design/Conversation Design, Northeastern University

Northeastern University’s College of Arts, Media and Design (CAMD) invites applications for open-rank (tenured or tenure-track) faculty positions in generative AI for creative applications in music, communications, and design fields. We seek applicants with a record of innovation in the fields of Artificial Intelligence (AI) and machine learning (ML), a strong background in working with the algorithms and models pivotal to generative AI, and a research agenda that engages critically and creatively with these fields.


Research Associate AI & Geopolitics Project (Fixed term), University of Cambridge

Applications are invited for a Research Associate position on a new Bennett Institute programme, the AI & Geopolitics Project (AIxGEO). This two-year position offers an excellent opportunity for a researcher with a keen interest in global AI collaboration and governance to contribute to research at the intersection of artificial intelligence and geopolitics. The successful candidate will also represent AIxGEO at external events and help organise workshops and conferences with senior figures across the AI and geopolitical fields. Candidates from any relevant discipline, with experience in a variety of research methods, are encouraged to apply.


Junior Associate, Responsible AI, ALLAI

ALLAI is looking for individuals with a strong interest in responsible AI, who are eager to contribute to addressing AI’s legal, societal, and ethical implications. In your prospective role, you will work on several projects focused on ensuring the ethical and legal application of AI in the healthcare sector. Furthermore, you will have the opportunity to help design a certified training curriculum for the sustainable and responsible use of digital technologies in the public sector.


Associate - Responsible AI, nasscom AI

The position is open to candidates with a minimum of 3-4 years of post-qualification experience (PQE) in a technology and policy role and a strong background in computer science, AI/ML, law, public policy, or a related discipline. Candidates should demonstrate a deep interest in AI policy and regulation, strong research and editorial skills, a good command of verbal and written communication, and the ability to work with minimal supervision. Candidates with less than 3-4 years of PQE may be considered in exceptional cases.


Lead AI / ML Engineer, Consumer Reports

CR is actively looking for a lead AI/ML engineer to join the Data Office to execute on a strategic multi-year roadmap focused on generative AI. Consumer Reports is a mission-based organization pursuing an AI strategy that will drive value for customers, give employees superpowers, and address AI harms in the digital marketplace.

The Lead AI/ML Engineer will leverage emerging data technologies and applied AI/ML techniques and frameworks to build reliable, scalable, and maintainable production-ready applications.


VP of Ethical AI Data Science Research, Salesforce

Salesforce's Office of Ethical and Humane Use is hiring an Ethical AI Principal to play a pivotal role in guiding the responsible development of groundbreaking artificial intelligence products. Working together with the Responsible AI & Tech team and in close partnership with both Salesforce AI and Salesforce Research, they will deliver mentorship, guardrails, and features that ensure the next generation of AI is crafted, developed, and delivered in alignment with Salesforce’s ethical use and responsible AI principles.


AI Policy Manager, Meta

Meta is hiring AI Policy Managers with expertise in privacy and AI policy issues to join its Privacy and Data Policy team and help build products, services, and technologies that promote the best interests of its users. The team's mission is to develop privacy-protective and innovative approaches to data and Meta’s services that help bring the world closer together and improve people's lives. In this role, you'll help shape the company's approach to AI across Meta’s suite of products and services.

More specifically, this role sits within the part of Meta’s team focused on developing policy around artificial intelligence, advising Meta’s AI product and research teams on novel privacy issues as well as broader questions of fairness, accountability, and transparency related to AI and machine learning (ML).


Microsoft Research AI & Society Fellowship

The Microsoft Research AI & Society Fellows program aims to catalyze research collaboration between Microsoft Research and eminent scholars and experts across a range of disciplines core to discussions of AI and its impact on society.

Microsoft recognizes the value of bridging academic, industry, policy, and regulatory worlds and seeks to ignite interdisciplinary collaboration that drives real-world impact.

Through a global, open call for proposals targeting a specific set of research challenges, Microsoft will facilitate strategic collaborations, catalyze new research ideas, and contribute publicly available works that benefit scholarly discourse and society more broadly.


Policy & Responsible AI Lead, Cohere

Cohere For AI invests in state-of-the-art research promoting the responsible development and deployment of generative AI systems.

Cohere’s team works to provide a technically grounded perspective that shapes recommendations for the governance of artificial intelligence, informing how governance can improve risk identification and mitigation throughout the model development and deployment lifecycle. Through Cohere For AI, the company is also prioritizing educational resources that help developers, as well as product and design teams, build LLM-powered products responsibly.


Vice President for AI Trust & Safety, NewsGuard

NewsGuard is seeking a senior executive to lead its Trust & Safety Partnerships with generative AI providers.

The new AI models have enormous potential, but to meet it they must overcome their propensity to “hallucinate,” including by creating or spreading false claims relating to topics in the news.

NewsGuard’s Misinformation Fingerprints (its machine-readable catalog of thousands of false narratives circulating online) and reliability ratings for 30,000 online news sources offer unique trust data for fine-tuning and guardrails for generative AI providers.
