


CHANGEMAKERS

Clara Tsao

Clara Tsao is a national security and disinformation expert and technology entrepreneur currently building an association focused on the trust and safety profession. She is the President of the White House Presidential Innovation Fellows Foundation, a Senior Advisor at the UN CTED-backed Tech Against Terrorism, and has held various national security and tech policy roles with the US Government, Google, and Mozilla. Clara spoke with Andrew from All Tech Is Human about online disinformation, the need for greater digital literacy in government, and more.

Find me on Twitter and connect with the Trust & Safety Professional Association

 
 

Much of your current work lies at the intersection of tech and policy, focusing on issues like disinformation and online safety. How did you first become interested in these issues?

JFK once said, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win.” In a similar way to how JFK committed the nation to the ambitious goal of landing on the moon, I believe the field of “trust and safety” (including online safety and disinformation) is the most difficult challenge of this decade, one that technology companies, governments, civil society, and internet users must stand united behind.

This is a challenge we need to win. From live-streamed mass shootings, terrorist content, online sexual exploitation and conspiracy theories, to election interference, there are endless problems that require collaboration, starting with strengthened dialogue. I have always been fascinated with how people behave online, and I spend my free time evaluating how people behave on social media. I especially love testing and exploring new technology platforms to understand the new ways in which people interact and connect with one another.

However, I was first exposed to the offline consequences of fake news and misinformation while I was at Microsoft in early 2014. At Microsoft I was managing a partnership program focused on growing the capacity of libraries in Myanmar to serve as trustworthy information hubs. We piloted new platforms and approaches to tackle digital and information challenges in the lead-up to the country’s critical 2015 general elections, the first time the nation could participate in a democratic vote after nearly 50 years of military rule. Over the span of months, I saw firsthand the impact of rampant disinformation as Facebook unintentionally introduced the country to a new digital era that caused offline chaos, violence, and eventually, ethnic genocide.

Years later, I joined the US Government as an Entrepreneur-in-Residence, where I served as the CTO of two teams within the Department of Homeland Security focused on online safety issues, ranging from terrorist use of the internet to foreign influence operations and election security. Most recently, I worked with human rights defenders, civil society groups, and policy professionals at Mozilla, evaluating regulatory policies and tools around harmful content online. My passion for problem solving in this space was cultivated by these experiences.

What particular questions/challenges have you been tackling recently, and why is now the right time to address these?

Over the last year, I have been building a new organization to support and advance the trust and safety profession through a shared community of practice. Today the biggest challenge in trust and safety is the lack of formalized training and support for the professionals who determine and enforce acceptable content policies and behavior online. Professionals working in trust and safety teams at technology companies are tasked with adjudicating what is acceptable behavior or content, while also protecting free speech, user safety, and society. Trust and safety teams are also asked to make difficult decisions to protect users, while having limited support from product and engineering teams to carry out and enforce these policies at scale.

There has never been a better time to support trust and safety professionals in doing their best work. Due to recent media coverage of content moderation, there is more awareness of the psychological and mental health risks associated with content review that impact employee wellness.

As fake news and misinformation have run rampant in election processes, governments around the world have threatened regulation or fines for failure to review and remove content in a timely manner. Some countries, like Germany, have introduced and are enforcing such regulations: under the Netzwerkdurchsetzungsgesetz, hate speech that stays online in Germany for more than 24 hours can accrue a fine of up to €50 million. Examples like these have led companies to invest more resources in their trust and safety operations and to bring more transparency to their decision-making practices.

Furthermore, technology companies are increasingly realizing the impact that toxic users and behavior have on “user churn” and their bottom line.

“Advertisers like Unilever have threatened to take their marketing budgets elsewhere if platforms don’t mitigate the spread of toxic content or toxic users.”

Are certain types of misinformation more dangerous than others? What criteria should companies use to evaluate whether a piece of content should be removed from their platform as misinformation?

The perceived danger of misleading content is a key element that influences the way many companies prioritize and investigate removal. Misinformation can include false or misleading information, ranging from rumors, gossip, and errors to propaganda. While misinformation can be harmless (e.g., a married celebrity having an affair), it becomes dangerous when the content has offline or behavioral consequences for users and goes viral (e.g., the claim that drinking diluted bleach will prevent COVID-19 infection).

When evaluating misinformation, companies can examine the actor behind the campaign, the behavior of the post (is it a bot network?), and the context of the content to determine how to prioritize removal. Other times, misinformation is automatically detected by AI/machine learning or flagged manually by users reporting the post.
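To make that actor/behavior/content framing concrete, here is a minimal sketch of how such signals might be combined into a review-priority score. The signal names, weights, and example values are hypothetical and purely illustrative, not drawn from any particular platform's system.

```python
from dataclasses import dataclass


@dataclass
class FlaggedPost:
    """Hypothetical signals a trust and safety team might attach to a reported post."""
    actor_is_coordinated: bool   # actor: tied to a known coordinated campaign?
    behavior_is_automated: bool  # behavior: amplified by a bot network?
    content_harm: float          # content: 0.0 (harmless gossip) to 1.0 (risk of offline harm)
    virality: float              # context: 0.0 (no spread) to 1.0 (spreading rapidly)


def triage_score(post: FlaggedPost) -> float:
    """Combine actor, behavior, and content/context signals into a review priority.

    The weights are illustrative only; a real system would tune them against
    policy and measured outcomes.
    """
    score = 0.4 * post.content_harm + 0.3 * post.virality
    if post.actor_is_coordinated:
        score += 0.2
    if post.behavior_is_automated:
        score += 0.1
    return min(score, 1.0)


# Example: a viral, bot-amplified health hoax lands near the top of the review queue.
hoax = FlaggedPost(actor_is_coordinated=False, behavior_is_automated=True,
                   content_harm=0.9, virality=0.8)
print(round(triage_score(hoax), 2))  # 0.7
```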

One of the most challenging aspects of misinformation is when content is further amplified by recommendation engines and algorithms, which were originally designed to heighten user engagement but now leave users in partisan and polarized filter bubbles.

When someone searches for content reinforcing fake science such as the flat earth theory, they can unknowingly fall into a “rabbit hole” of other conspiracy theories, many of which include content from the anti-vaccine movement. To counteract this, AI researchers like Guillaume Chaslot have advocated for internet platforms to provide increased algorithmic transparency.

What are some tough questions in the civic tech world that you haven’t found satisfying answers for yet? What kind of work do you think needs to be done to answer them?

One of the hardest questions in civic tech is “Why is it so hard for cities and governments to upgrade to a modern digital infrastructure and to understand technology?” For starters, the gerontocracy of political leadership has led to a growing digital divide on Capitol Hill, in city halls, and across federal agencies.

Additionally, outdated legislation often hampers efforts to evaluate poorly implemented legacy systems and make them more human-centered and agile. For example, the Paperwork Reduction Act of 1995 makes it near-impossible for most digital teams to conduct user-centered research and ask questions about usability. Most government teams also have limited IT budgets, are often locked into contracts using legacy systems, and are run by staff incentivized by their own job security to maintain the status quo.

Recruiting talented new technologists is not easy either. For highly sought-after software engineers, it is difficult for federal and local governments to match the salary, benefits, and “cool factor” of major technology companies.

For entrepreneurs building civic tech solutions, the procurement timeline is often so long or so complex that startups do not have the runway to survive before the contract is awarded. The continued use of legacy systems is leading to disastrous impacts. With many people filing for unemployment due to COVID-19, local governments are struggling to maintain systems built on COBOL, a 60-year-old programming language that many state-level unemployment benefits systems still run on, because of a scarcity of available COBOL programmers and the sheer volume of demand and usage.

To solve this, more companies should allow talented technology employees interested in the public sector to take “civic leave” and go into technology positions in government through programs such as PIF, USDS, Code for America, or TechCongress to help upgrade outdated technology systems and build products in a more human-centered, data-driven, and agile way.

It is equally important for policymakers to spend time in the technology and private sector to understand product and policy challenges from the perspective of companies. Additionally, it is important to support, encourage, and elect more diverse and tech-savvy newcomers to run for office and hold leadership positions.

How do you feel about the current relationship between technology, tech companies, and policymakers? How do you think these relationships will change in the future?

Today there is growing hostility between Silicon Valley technology companies and Washington, especially around critical national security challenges and ethical concerns. Most governments see technology and innovation as key strategic assets for competitive advantage against other countries. However, many developers in the open source community view technology as something meant to be free and open, rather than potentially weaponized for use by the Pentagon.

At present, many technologists struggle to understand international power politics and what appropriate regulation would look like. Conversely, policymakers fail to legislate coherent policies to govern technology companies and protect user privacy and safety.

To change this for the future, we need to encourage more technologists to lend their expertise to support better policy-making through incubators like Aspen’s Tech Policy Hub. Equally, more public policy experts should deepen their product management experience so they can translate their concerns coherently to Silicon Valley.