Western accounts of Artificial Intelligence (AI) in China often paint a dystopian picture of an Orwellian surveillance state. But what really makes industry practitioners in the world's second-largest digital economy tick? 2022 IIASA Young Scientists Summer Program (YSSP) participant Junhua Zhu wants to lift the veil, taking a deep dive into the Chinese AI ecosystem.
AI is everywhere. Algorithms are not only telling us what to watch or where to eat; they are widely used across all sectors of the economy, including education, healthcare, and manufacturing.
Their meteoric rise has sparked a lively discussion about the ethical underpinnings of AI development. Among the many buzzwords circulating in the public sphere, the question of AI fairness has garnered increasing scholarly attention in recent years.
Junhua Zhu, YSSP participant and second-year PhD student at the Center for East Asian Studies at the University of Turku in Finland, researches the Chinese AI landscape. Along with the United States, the country is at the forefront of the global AI industry. Yet, the ethical norms and organizational routines guiding AI research and development remain poorly understood in the West, Zhu notes. Based on interviews with algorithm designers and industry practitioners, he aims to gain a more comprehensive understanding of how fairness comes into play in different contexts of AI applications.
Yet, despite heightened efforts to uncover and mitigate the many biases baked into AI applications, AI fairness remains a notoriously elusive concept. A distinction is often made between procedural fairness (whether the procedure a machine or algorithm uses to make a decision is fair) and distributive fairness (whether the algorithm leads to fair outcomes) - although the two often conflict with one another.
“There are a lot of terms: fairness, justice, equity, and equality, among others. The definition, however, really depends on the industry we are talking about. It means something different in healthcare than in finance or other domains,” explains Zhu.
However, one of the biggest problems is that data subjects largely find themselves left in the dark regarding the algorithms they encounter.
“So far, I don't think application users have a say in what constitutes fairness, and that is problematic. People often assume that the dominant idea of AI fairness in China is completely different from the West. If you look into the Chinese literature on AI ethics, however, you will see that privacy is actually the most frequently raised concern regarding AI applications,” he adds.
Nevertheless, things are changing. In the late 2010s, several scandals broke around big-data-driven price discrimination. E-commerce giants like Taobao, often called China’s Amazon, and Trip.com, the country’s most-used travel site, had systematically been showing users (vastly) different prices based on their online behavior. The government subsequently issued several pieces of legislation to rein in the freewheeling tech industry, making behavior-based discriminatory pricing illegal and enacting a new data protection law, the Personal Information Protection Law (PIPL). Despite evidence that in domains like facial recognition, data collected by Chinese firms are still being used unethically, the new regulatory framework is shifting the world’s most populous country towards more stringent data protection and online privacy provisions.
“If you had asked me about the direction China was going in terms of AI two years ago, I would have been very pessimistic and would have said that the state is trying to push their agenda,” Zhu points out. “Given all the unethical applications of AI, regulation is much needed.”
Undoubtedly, the country’s activities in the AI sphere also have a geopolitical dimension. With the new framework in place, China is joining the ranks of other countries and regions tightening their grip on big tech. While the US still largely adheres to its hands-off model of minimal market intervention, China, alongside Europe, is on track to set global standards regarding AI governance, thereby ending its decades-long dependency on industry standards set by the West and pushing ahead in the global race for digital supremacy. This kind of repositioning, Zhu believes, will ultimately lead to heightened tensions in the international arena.
With his YSSP project, however, he aims to provide food for thought and debate, likening the discussion around AI to online filter bubbles.
He explains, “I don't want to see a decoupled world. I want to provide facts so people can engage in debate. Especially in the virtual world, people tend to meet people that share similar opinions and I think that's dangerous. What I am trying to do is to reverse this kind of tide: putting AI into question and bridging the conversation.”
Note: This article gives the views of the author, and not the position of the Nexus blog, nor of the International Institute for Applied Systems Analysis.