Key Takeaways From Responsible Tech London Roundtables 🇬🇧

By Elisa Fox

In December 2023, All Tech Is Human returned to London, where we held the Responsible Tech London Summit and two curated roundtables: one with Ofcom, and one with Meta’s Oversight Board.

The Responsible Tech London Summit, held in partnership with Crisp, a Kroll business, on December 5th, brought together over 200 tech professionals from government, civil society, academia, and industry for two tech policy panels and a networking reception. The same day, All Tech Is Human partnered with Ofcom to hold a roundtable on age assurance and its emerging challenges and solutions. It provided a unique opportunity to share insights and contribute to the ongoing dialogue on age assurance, a critical aspect of Ofcom's regulatory efforts.

The next day, we co-hosted a roundtable with the Oversight Board for Meta to discuss how content moderation might develop in response to European and global regulatory changes. Attendees also considered how these changes can protect against threats like disinformation.

All gatherings provided an opportunity for stakeholders from various sectors, disciplines, and backgrounds to share in thoughtful discussion, find synergies for future collaborations, and share solutions to common challenges. 

We’ve pulled together some key takeaways for professionals in trust and safety and tech policy when it comes to building a safer internet.


Responsible Tech London Summit

The Summit featured two panels with leaders from organizations including the University of Oxford, Business for Social Responsibility, and the Royal Society. The first panel, “Reducing Harms Against Children,” looked at ways tech platforms, guardians, and policymakers can mitigate potential online risks to children, including ways to better understand and measure current harms and their impact. The second panel, “How Will the Online Safety Act Influence the Internet?,” examined the strengths, challenges, and potential impacts of the recently passed UK Online Safety Act.

Key Takeaways:

  • Establish shared values and goals to measure success as well as unintended consequences: Echoing All Tech Is Human’s September 14th discussion with McGill University, establishing a shared vision of a safer internet based on values, such as human rights and inclusivity, can provide a common goal for stakeholders. Including metrics and KPIs in the implementation phase can provide insight into success and illuminate any unintended impacts along the way.

  • Urgent need to expand understanding outside a Western context: Ninety percent of the world’s youth live in the global majority, and some governments use technology to suppress speech and dissent. We must expand research on countries outside North America and Europe to better understand technology’s varied impacts and use cases. At the same time, we must continue to amplify existing work and initiatives in the global majority.

  • Safety requires a multi-pronged approach: There is no silver bullet that can create a safer internet. Technology platforms must continue to proactively understand the risks on their platforms, parents and guardians must think critically about what they are giving their children access to and when, and regulators must continue to evaluate if existing legislation is creating the intended outcome or if adjustments need to be made.


Roundtable with Ofcom on Age Assurance

The roundtable with Ofcom was held following their recently released guidance on age assurance. With the guidance covering all platforms of different sizes, at different development stages, with different resourcing, and everything in between, there were plenty of topics for attendees to discuss. Both technical and non-technical aspects of age assurance implementation were examined as well as implications on privacy.

Key Takeaways:

  • Age assurance can have both a technical and a societal approach: Companies have a responsibility to install barriers to entry, such as age verification features, to ensure that content does not reach underage youth. Likewise, parents and guardians can help prevent their children from accessing age-inappropriate content by learning more about online safety and the safety tools available to them.

  • Messaging campaigns can be an important part of public awareness: Messaging campaigns can inform the public of online safety regulatory changes and what they might mean for them and their communities, as well as raise awareness of online safety issues. They can also provide an avenue for legislators to share practical ways the public can keep themselves and children safe online.

  • Reusability needs to continue to be a topic of conversation: Many questions around the reusability and interoperability of age verification features remain, including the best ways to maintain privacy. Stakeholders must continue these conversations to better articulate what implementation might look like.


Roundtable with the Oversight Board for Meta

With recent developments in the EU and UK, as well as elections coming up around the world, the Oversight Board for Meta and All Tech Is Human held a roundtable to discuss how content moderation can help further democratic values. The roundtable provided a forum for civil society, industry, academia, and government to discuss the challenges of digital governance, find areas of collaboration, and begin to exchange potential solutions.

Key Takeaways:

  • Meet users where they’re at when launching public campaigns: While digital literacy campaigns and education can help combat disinformation, such programs can only be impactful if they meet the community where they’re at. This may mean prioritizing WhatsApp messaging in India or newspaper ads in South Africa. We must meet users with technology that is accessible to them if public messaging is to be successful.

  • Expand language options to increase engagement: For all stakeholders to engage with issues such as combating disinformation, programs and campaigns must offer non-English language options. Since users in the global majority may interact with information and use platforms differently than those in North America and Europe, their experiences must be included in the conversation.

  • Regulation should include access to platform data: Data can help provide insight into how tech platforms are influencing our lives. Most access is currently voluntary, making it inconsistent at best and non-existent at worst. Mandated access to data, such as in the Digital Services Act, can fill this gap.

  • Create more legal safeguards for researchers: Researchers may choose not to study hot-button issues, such as disinformation, if these issues put their professional and personal lives at risk. This limits research outputs that might otherwise constructively inform regulation and legislation. Safeguards for researchers must be put in place and have legal, financial, and regulatory backing. 


About Elisa Fox
Elisa is a Program Manager at All Tech Is Human with a decade of program management experience in a variety of sectors ranging from higher education to the think tank space. Her past work and research have focused on cyber policy in the global south and ways to bring different perspectives into the policy conversation. Elisa holds a B.A. in Politics and M.S. in Global Affairs from New York University.


About All Tech Is Human

All Tech Is Human is a non-profit committed to building the world’s largest multistakeholder, multidisciplinary network in Responsible Tech. This allows us to tackle wicked tech & society issues while moving at the speed of tech, leverage the collective intelligence of the community, and diversify the traditional tech pipeline. Together, we work to solve tech & society’s thorniest issues.
