Tech Policy & Social Media: Where Are We Headed?

Fred Langford (Director of Online Technology at Ofcom), Harsha Bhatlapenumarthy (Governance Manager, Meta), and Dhanaraj Thakur (@thakurdhanaraj, Research Director at the Center for Democracy & Technology), in conversation with moderator Kat Townsend (Director for Policy at the Web Foundation)

Panel discussion from our Responsible Tech Summit: Improving Digital Spaces held at the Consulate General of Canada in New York on May 20th. Find the full event overview here. For our fourth panel of the day, All Tech Is Human and the Consulate General of Canada in New York hosted a panel discussion on the state of technology policy around social media regulation, featuring Harsha Bhatlapenumarthy, Fred Langford, Dhanaraj Thakur, and Kat Townsend. The discussion focused on regulation and policy, content moderation, how to rebuild an inclusive internet, the role of legislation, hopes for the future, and how to engage communities in this important conversation.

Please see below for a brief readout. All comments and questions are paraphrased. Notes by Lama Mohammed.

Q&A

Kat Townsend: So when we have this focus on online safety in the Western world, what is the potential for those laws to affect the rest of the globe? What are we missing? What are the harms we cause by unduly focusing the discussion of regulation and policy on the Western world?

  • Fred Langford: When talking about AI, it is all about different communities having a say. I think we're missing inclusivity by only covering a limited number of languages. It also means that when we talk about some of the technologies that have been developed and how they're implemented, there's a huge gap in knowledge because you don't have everybody's viewpoint and experience of the impacts to draw on. This is something that's missing, and it's going to compound the impacts we've already seen if you don't deal with things at the beginning. It's going to get worse, and then it's too late to try and roll it back. Now is the time to engage. This is why I love these sorts of events, because we are getting everybody and all their different views together again.

  • Dhanaraj Thakur: Two things come to mind:

    • One is the issue of English, which dominates the web and the Internet. The problems we are trying to address include moderating content in non-English-speaking languages. There are proposals and ideas to use limited data to automate content moderation in these non-English-speaking languages, or what are referred to as low-resource languages. That could have lots of benefits, so we want to consider how we use these kinds of models and approaches.

    • Second, regulation or policy development happens in certain places like DC or Brussels, and the impacts they have are global. One trend is that those developments and innovations take the lead, and often what you see is something like GDPR being viewed almost as a global data regulation. Countries look at that law as a model, which can be both good and bad. The problem is which kinds of ideas get a head start and who gets to lead. It often drowns out policy innovations coming from other places in the Global South and elsewhere.

  • Harsha Bhatlapenumarthy: I think it’s about making sure that we have all kinds of backgrounds represented, both when we are developing automation classifiers and when we're developing policies, because when that representation happens on the front lines, we will see it in the outcomes.

Kat Townsend: For those of us who are gathered in New York state, this week 10 people were murdered by a white supremacist with an illegal semi-automatic assault rifle. The video was put up on social media and stayed up for 10 hours. The perpetrator, as we know, or learned, was inspired by other videos that were shared on social media. So what I’d like us to reconcile is, based on what we’ve heard, how difficult it is to have transparency. If we share data, it can be used by bad actors, so how do we reconcile that with what’s happening on these platforms today?

  • Harsha Bhatlapenumarthy: It’s a difficult question. I think the biggest challenge is not a new challenge, because we have a platform that hosts millions of pieces of content that need to be addressed, and it’s about getting to the right content at the right time. There’s a lot that happens behind the scenes, but what we need is to make incremental improvements to eventually bring those 10 hours down to two hours.

  • Harsha Bhatlapenumarthy: Simultaneously, how do we make sure we have enough guardrails so that when this happens our response time is quick, and our response is a lot more agile and a lot more comprehensive? I say incremental improvements because when I started working in trust and safety, I started in content moderation, and I don’t think we had the scale of automation we have today. I think more than 90% of the content gets taken down through automation — we've come a long way, and I think our focus should continue to be in that direction.

  • Fred Langford: It's interesting. Ofcom will be regulating systems, processes, and risk. But the challenges are broader than just individual platforms and how that sharing can take place on the Internet. We should be thinking globally, but that's not necessarily clear for a lot of policymakers.

  • Fred Langford: How can things be improved? There were ways it could have been improved, but it's not always as straightforward as many people think. I ran a team of analysts for nearly 20 years, looking at serious content, so I'm used to these sorts of patterns recurring, but it's also something that's always changing. It's an arms race, and things do change as the technology changes and the way platforms connect changes.

  • Fred Langford: The way people innovate by using platforms for nefarious purposes shows they are not trying to use this technology for altruistic reasons. I do have sympathy for platforms when they're trying to do their best and people are gaming them. We were talking about networks of people out there who are trying to work out ways of getting around the checks and balances all the time. They're sharing them with colleagues and then they're working in small groups. We're already thinking about the systems and processes, but if there's no will to keep pushing that and getting better, that's where we can step in and push a little bit more. Ofcom is innovating and trying to identify where those gaps are and encouraging all parties to work together.

  • Dhanaraj Thakur: I think the arms race description is accurate. I think there are lots of improvements that have been made in terms of automation. At the same time, there are constantly bad actors trying to game the system. What's important at this time, especially for policymakers and others, is understanding the limitations and risks that come with automation and content moderation systems. Developers need to communicate that to the public at large.

  • Dhanaraj Thakur: What we would not want to see is the law mandating, for example, some kind of filtering or automated moderation of specific kinds of content, because none of these tools is foolproof. I think it's about communicating and making sure the narrative is accurate: no AI model is going to capture all this content, and part of this problem stems from society at large — it’s not just a technical issue here. Recognizing that also means recognizing that there are limitations to the kinds of technical solutions that can be put forward; no social media company can solve this problem by itself.

  • Dhanaraj Thakur: There's a wider set of issues here, and what might be more useful is for social media companies, policymakers, and regulators to engage with those other actors that have been working on these issues for years and decades. Social workers and other kinds of community groups that represent underrepresented minorities are best suited for this.

Kat Townsend: If we need that kind of dynamic centered on diversity, who is missing from these conversations? What we've seen, and what a lot of our research has been working on, is that the people who get harassed and targeted online are women, minorities, and marginalized groups in general. The problem is that when this happens, it not only causes harm, it also means that you lose someone's creativity and you lose their voice. I wonder if you can share some of the work that you all have been doing on how we're building social media platforms, or the web in general, in a way that embraces everyone working together, so that gender-based violence, harassment, and targeting are curbed as we're building the web.

  • Dhanaraj Thakur: There is lots of evidence and research pointing to the severe mental, physical, and intergenerational harm that comes from harassment, which then deprives many communities globally of a voice. Black women, for example, are more likely to face harassment on Twitter than any other group. There are disproportionate impacts even within this broad area of harassment when we focus on particular communities. It's important to recognize those kinds of disproportionate impacts. There are areas in which progress hasn’t been made in terms of trying to identify harassment, addressing harassment reports and mechanisms, and so on.

  • Dhanaraj Thakur: At a higher level, there's a kind of reflexivity that's required among those groups that are often on the outside of this — I'm talking specifically of cis and straight white men, who are often not, or are less likely to be, suffering from these kinds of abuse, and who often hold the power in many of these companies, within government, or in society to make decisions about the direction of solutions for these kinds of abuse. It comes down to that kind of reflection among men such as myself to recognize the severity of the problem and act upon it. It’s also important to not just rely on others, and on those who are being harmed, to take the lead.

  • Harsha Bhatlapenumarthy: Thinking about how to make it more inclusive for everyone, including women, ties back to my point about having everyone at the table when we're solving these problems. Going back to the conversations on transparency, we must ask how we can empower all sections of society with an understanding of what is happening behind the scenes when we're building the processes and making decisions.

  • Fred Langford: The group that is always missing is children; they continually get forgotten. These sorts of considerations are built into things like risk assessments. Considering these sorts of minorities that aren't being well represented is part of the risk assessments that are going to come in as part of our regulation. I think from that perspective, it's very much about pushing to make sure that those voices are heard from a regulatory perspective.

Kat Townsend: A brief follow-up about that. When you focus on children online, you disaggregate the data between who is a child online and who's an adult. Then you're segmenting a population that can be targeted. How do you reconcile with that?

  • Fred Langford: I would say it’s about support. I think it's key that people do know who's on their platform and how to support them, because if you don't know who's on the platform, how can you support them? I would push back a little bit on that point and say, no — it's about supporting, not targeting.

Kat Townsend: I’d love to know: what is making you all hopeful? What do you get excited about, whether it's your work or others'?

  • Harsha Bhatlapenumarthy: TSPA — the Trust & Safety Professional Association. It's a member-based association that is focused on advancing the practice of trust and safety. I’ve been volunteering with that organization for about one and a half years now, and one thing I’m very excited about (tying back to our discussions around transparency) is that we're developing a curriculum which outlines the practices of trust and safety. It goes behind the processes and the decisions that we make, and I think that it's a great way of creating macro-level transparency. With transparency reports, you see data and numbers and understand what happened in the last quarter. Something like the curriculum gives you a real insight into what happens. We get to answer questions like, “What is the thought process?” and “Why do we make the decisions we make?” That's something I'm very excited about, along with the discussions focused on transparency.

  • Dhanaraj Thakur: I think there is progress being made. One thing that's been very crucial for understanding or even addressing all these issues is independent researcher access to data from social media companies. This applies not just to academics, but also to those of us researching in civil society and to regulators.

  • Dhanaraj Thakur: I think there's momentum now, and legislation in DC and Brussels to address this specific topic. There are a lot of details that need to be worked out, like who exactly counts as a researcher? What are the means of access? What kinds of data should be shared in a privacy-preserving way? But there is political will on both sides of the Atlantic. We need to make some progress, and there's a lot of work that we need to be engaged in.

  • Fred Langford: I think people working in trust and safety feel like they've been in the basement for too long, and what's optimistic is that they're coming out now. We want to work together to deal with it, and that's what makes me optimistic. People are emerging and recognizing these issues. I know that these sorts of topics have been discussed for some time, and they're now coming to the fore in forums where we can tackle them, and that's what's keeping me excited. I enjoy having a lively debate on how we can tackle these together and decide the best path.

Kat Townsend: So as we're designing that best path and legislation that can perhaps come into place, how do you operationalize policy? How do you train this next generation? What do you see?

  • Fred Langford: There are lots of challenges in operationalizing, and I come from an operational background as well. Definitions are key, as is understanding that there are a lot of nuances. Things aren't always going to be as possible as people think; they actually might not be possible, and sometimes you don't even realize that until you start. I think that's why our regulation is potentially going to work very well, because it looks at the systems and processes that we would engage with to know what's happening.

  • Fred Langford: We’re asking the important questions, and they won’t be easy ones when we talk about computer vision and some of the challenges of trying to distinguish between reality and what an algorithm has produced. There are many technical challenges, and as the technology develops and people work out ways to try and game it, those challenges grow at speed.

  • Harsha Bhatlapenumarthy: I have worked in the past on operationalizing something new, whether it influences a certain team's work or directly impacts content moderators. I think about it in four steps:

    • You see a problem, you create a policy.

    • You then attempt to enforce the policy, and you monitor and control metrics to see how the policy is performing.

    • Between the step of creating the policy and operationalizing it is where your question comes in, about how it impacts content moderators or the teams working on it; we have a pretty streamlined way of doing that now.

    • Because the industry has a set of standard procedures and practices, a lot of it relies on setting the content moderator up for success.

  • Dhanaraj Thakur: We can take research evidence and make it accessible to policymakers and regulators to inform this operationalization phase. We help create definitions, for example around what counts as an algorithm or computer vision and whether it is working, as well as identify the risks. We still need a lot of improvement. We still see legislative proposals making huge, sometimes strange assumptions about algorithms and what they can or cannot do. We still need to bridge that connection so research can inform this, and that's something we will work on, but I think there's still more to be done there.

Kat Townsend: What’s your perspective on the role of those who aren't in this room and actively engaged in this work every day? How do we empower people more broadly so that everybody who wants to be involved or to contribute is able to? How do we have a system for those who are not hyperconnected to help, support, and shape the online world and the social media platforms, so that they enable us to grow and thrive as a people?

  • Fred Langford: It is difficult for me to answer because we're in the middle of that phase at the moment. There is a debate going on about what that regulation is going to look like. We have a democratic process, and elected officials are talking about that. They’re making the decision, and as a regulator, we implement that decision. As a regulator, we're always listening and always trying to convene stakeholder engagement and talk to different people to understand how to interpret it. At the moment, particularly around online safety, we're in a bit of a gray zone. It’s very much about waiting to see what final decision comes out of the actual Parliamentary process before we can work out how best to bring those voices in.

  • Dhanaraj Thakur: I think this is an ongoing problem, because we want that inclusivity, and we have different tactics and methods. We should co-design by working with different groups and encourage stakeholder engagement. There always has to be some kind of intentionality to make sure that we go all the way to get different people and voices into the room. The whole diversity and inclusion problem is almost like a first step before we can even envision a solution.

  • Harsha Bhatlapenumarthy: It also comes down to a two-way street. As an industry, or as the trust and safety community, I think we should do outreach and make sure that we have the right kinds of voices at the table. I also think that folks who are already partially involved in this space should explore opportunities to participate more in these conversations by coming to forums like these and sharing their thoughts. I think having more conversations like these is also a good first step toward diversifying opinions, thoughts, and perspectives.
