New Careers in Responsible AI This Week!

All Tech Is Human’s Responsible Tech Job Board curates roles focused on reducing the harms of technology, diversifying the tech pipeline, and ensuring that technology is aligned with the public interest. It is the premier resource in the field for people exploring a career change, seeking a new role, or interested in the growing responsible tech ecosystem.

This week, we’re highlighting 10 roles in Responsible AI. We’re dedicated to mapping the Responsible AI ecosystem, highlighting the key organizations, roles, and people working to create a better tech future. Interested in learning more about Responsible AI? Check out our knowledge hub!


Scientific Project Officer – AI and Algorithmic System Inspections, European Centre for Algorithmic Transparency

The jobholder will join an interdisciplinary and multicultural team of applied researchers working on DSA enforcement activities. For this team, we are looking for specialists in algorithmic systems and artificial intelligence. The jobholder will work in close collaboration with a wide range of partners, in particular legal and policy experts in the DSA enforcement units of the European Commission, in DG CONNECT.

They will be part of cross-department teams during DSA compliance investigations and will interact at the technical level with very large online platforms and very large online search engines. The ideal candidate will have in-depth knowledge of, and hands-on practical experience with, modern algorithmic and artificial intelligence systems, such as those used for content analysis, recommendation, and generation.


Technology & AI Policy Research Associate, National Journal

National Journal’s Presentation Center is hiring a Technology and Artificial Intelligence Policy Research Associate to join its team of experts producing insight and intelligence on how Washington works. 

This position will focus primarily on research into the artificial intelligence industry and emerging trends within it, contributing heavily to our PolicyView: AI report. The associate will also explore other policy areas, including research on elections, courts, and Congress, contributing to National Journal’s Presentation Center. The right person for this position will be able to effectively research, summarize, analyze, and visualize information about policy and policymakers, and will have a passion for policy and legislation around artificial intelligence.


Research Data Scientist, AI Hub, NYU

NYU is seeking to recruit a Research Data Scientist for the Institute with a special focus on our Artificial Intelligence (AI) Hub. The AI Hub at McSilver has been established to investigate how artificial intelligence-driven systems can be used to equitably address poverty and challenges relating to race and public health, and to provide thought leadership on the implications. The AI Hub at NYU McSilver will address a dearth of information about how AI can impact the lives of people in marginalized communities. Among the hub’s initial areas of focus will be building on the institute’s work to answer whether AI can be used to better predict suicide rates and behaviors by race, geography, income and other demographic variables, with other innovative public health research and interventions to follow.  


Associate Director - EMEIA Public Policy - Artificial Intelligence (AI) Policy and Regulation, EY

Opportunity to join EY’s EMEIA Public Policy team. The EY Public Policy function works to advance EY’s strategic objectives in a number of areas, including policy related to audit and assurance; sustainability; financial services; technology (artificial intelligence, cybersecurity, data rights, data localization, and other policy matters); and geopolitics.  

The Associate Director will provide subject matter expertise and play a core role in the delivery of the EMEIA Public Policy team’s work on matters relating to the EU’s AI Act and related policy issues and on other priority AI policy topics across key EMEIA jurisdictions. In some cases, the Associate Director will also contribute to global public policy projects, engaging with the wider EY AI Strategy.    


FARI Postdoctoral Expert AI Regulatory Sandboxes, FARI

Regulatory sandboxes aim to test new technologies transparently and to contribute to evidence-based lawmaking. They allow both public and private actors to assess their services, products, and procurement processes from the perspective of (new) regulatory regimes, and, at the same time, enable regulators to identify possible challenges to new regulation, such as that emerging around AI and autonomous systems. The goal here is to promote the development of innovative artificial intelligence solutions that are both ethical and responsible. The sandbox helps individual organizations ensure compliance with relevant regulations and develop solutions that take human rights and principles into account.

This research will be performed in association with FARI and will focus on the legal aspects of setting up a regulatory sandbox. Specifically, this project aims to develop a prototype for a regulatory sandbox as part of a larger initiative to implement a regulatory sandbox for the Brussels Capital Region (BCR). This will require close collaboration with other ongoing projects looking at (i) the technical aspects of this venture and (ii) the establishment of a high-level strategy for a wide implementation of a regulatory sandbox in the BCR, drawing on the knowledge of other researchers, stakeholders from the public administration, and relevant authorities. Ongoing initiatives also include AI use cases, which will allow the regulatory sandbox to be tested and provide feedback on its implementation.


Program Manager, Artificial Intelligence Engagement, Pulitzer Center

The Pulitzer Center is seeking a dynamic and creative Program Manager, Artificial Intelligence Engagement to join its fast-growing initiative on AI accountability. You will shape, design, and coordinate our audience-engagement strategy for the high-impact body of AI accountability reporting produced by Pulitzer Center journalist grantees and fellows worldwide, with an emphasis on the Global South.

The Pulitzer Center launched its AI Accountability Network in 2022 to expand and diversify the field of journalists equipped to report on one of the most consequential technologies of a generation. Our approach is collaborative, cross-border, and interdisciplinary. In this position, you will join an ambitious and diverse team of outreach, communication and education specialists who amplify the life and reach of the stories we support through partnerships with artists, schools and universities, civil society organizations, content creators, and more.


Lead ML Engineer, soal

Requirements:

- A strong engineering background and the know-how to build and maintain high-performing applications (bonus if this was at an early-stage start-up)

- Previous experience with ML models and training tools (minimum 2-3 years)

- Excited by the opportunity to build on the cutting edge of AI capabilities

- Experience with cloud infrastructure, computer systems architecture, computer vision, machine learning, and MLOps is a plus, but more important are a working knowledge base and a hunger to learn and iterate quickly

- Read more by clicking through!


Senior Advisor, Data & AI, Tony Blair Institute for Global Change

The Senior Advisor, Data & AI will have the political acumen to convincingly engage senior officials on the political importance and transformative power of data-driven governments and the growing impact of AI. Their experience advising governments or directly applying insights from data in a public sector context should be supported by a strong grasp of AI and other emerging technologies. They will be able to communicate how next-generation AI systems, Large Language Models, and advanced data analytics enable governments to be more adaptive and agile, while also stimulating innovation and growth throughout their countries more broadly.


Trust and Safety Analyst, Generative AI, Google

As a Trust and Safety Analyst, you will identify and take on the problems that challenge the safety and integrity of our products. You will use technical know-how, problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products.

In this role, you will work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed with urgency. You will work hard to promote trust in Google and ensure the highest levels of user safety.


Project Manager, AI, International Rescue Committee

The International Rescue Committee (IRC) responds to the world’s worst humanitarian crises and helps people whose lives and livelihoods are shattered by conflict and disaster to survive, recover, and gain control of their future. Housed within the Emergency and Humanitarian Action Unit (EHAU), the Signpost Project is a rapidly scaling community-led information service that empowers its clients in times of crisis. Signpost delivers critical information to affected populations through staff equipped with digital tools, digital channels, and social media — providing communities with timely and actionable information to make critical decisions on the issues that matter most to them.
