Tech and Democracy Profile: Caitlin Chin

All Tech Is Human’s Tech & Democracy report addresses key issues and best practices in the field, and highlights a diverse range of individuals working in the field (across civil society, government, industry, academia, and entrepreneurship). Like all of our reports, it is assembled by a large global working group spanning multiple disciplines, backgrounds, and perspectives.

As part of the Tech & Democracy report, our team interviewed more than 40 people working to create a brighter tech future. This week, we’ll be highlighting select interviews.

Today, we hear from Caitlin Chin, Fellow, Strategic Technologies Program, Center for Strategic and International Studies (CSIS). To read more profile interviews, click below to download the Tech & Democracy report now.

Q: Tell us about your role and what it entails

As a fellow at the Center for Strategic and International Studies (CSIS) in Washington, D.C., I research the social and political effects of technological change. I analyze legislative and regulatory developments related to digital privacy, antitrust, and content moderation, and I regularly write and publish articles and reports to convey my conclusions. In addition, I host CSIS panels, roundtables, and podcasts to create a forum for stakeholders from civil society, academia, government, and industry to exchange views on timely technology policy developments. A think tank researcher wears many hats, but my overarching goal is to explore public policy solutions that will increase the fairness, equity, and integrity of emerging technologies for years to come.

Q: What do you think are the key issues at the intersection of technology and democracy?

Democratic institutions are built on certain core principles: free speech, voting rights, and civic engagement, to name just a few. However, the internet has drastically transformed the relationship between citizens and government, creating new challenges. During the 2020 U.S. presidential election cycle, voters were targeted with malicious and deceptive robocalls, false or misleading claims on social media, and other harmful messages designed to either discourage voting or convey unverified claims of election fraud. Other nations around the world—Canada, the United Kingdom, Brazil, and many more—have similarly faced an influx of false claims during recent election cycles.

While harmful or false content has long existed, new technologies have enabled it to spread more rapidly and widely. Now, democratic nations must figure out what principles, standards, and norms are necessary to counter disinformation in a new digital age. Should there be different content moderation standards for public platforms compared to private channels, where internet users may have varying expectations of privacy? Given concerns about increased concentration in digital markets, should smaller internet platforms with fewer resources face fewer responsibilities to implement large-scale content moderation systems? The public and private sectors will need to consider these thorny issues or risk facing the drastic consequences of outdated data governance standards: a loss of public trust; physical, economic, or psychological harms to individuals; geopolitical and national security risks; and fractures in the core of the democratic process.

Q: What are the key challenges for democracy that technology can ameliorate?

Technology can create new channels for individuals and society to connect and share ideas online, surpassing traditional geographic limitations. Now, voters can access information about their local elections and candidates online—instead of commuting to in-person events—and volunteer or engage with their communities remotely. Technology can also streamline certain processes. For example, algorithms can automatically flag hate speech and erroneous content. The COVID-19 pandemic helped accelerate an expansion of hybrid or virtual tools, increasing access to certain services—such as telehealth, more flexible work arrangements, and virtual communications—that could allow more individuals to more easily engage with society regardless of location or other physical limitations. 

But the problem is that the benefits of technology are rarely, if ever, equally distributed. For example, the United States still has enormous disparities in access to high-speed broadband and devices by factors like race, income, and location. In addition, the widespread digitalization of everyday activities has normalized data collection for smartphone apps, web browsers, and internet-connected devices, which could create outsized privacy risks for communities that have traditionally been subject to greater surveillance. In short, it is possible that technology could mitigate key societal challenges—but it is necessary to find ways to distribute its benefits more equitably.

Q: What are the responsibilities of government and/or media companies when social technologies are used to exacerbate social tensions, threaten democracy, misinform and destabilize society? How can we hold each of these groups accountable?

Technology platforms should have a responsibility to prevent harm not only to their users, but also to society. There are many ways in which private companies can proactively mitigate any negative risks of their services: enhancing the transparency of their algorithms and content moderation policies; working with civil society and human rights groups to promote fair values; enabling greater user controls to flag content; employing human and automated reviewers; limiting their collection, processing, and sharing of personal information to target content; and more.

In turn, governments can create processes and rules to help clarify the responsibilities of technology platforms and create accountability mechanisms for their actions and outcomes. In particular, governments can create rules to prevent abuses in data collection and sharing and to reduce the possibility of disparate impact stemming from algorithmic bias. However, there is a limit to what governments should do. For example, even though online disinformation is a real problem that needs to be addressed, politicians should not be able to directly order technology platforms to remove content that relates to their political parties or viewpoints, to avoid crossing over into censorship.

Importantly, both technology platforms and government institutions should generally aim for transparency when feasible, including by facilitating civil society and journalist insight into ranking and recommendation algorithms, data on paid advertisements, content moderation outcomes, and more. In turn, individuals and the general public should ideally have a certain amount of control over the content that they see online, including by wielding the ability to flag content and appeal content moderation decisions.
