2023 IIASA Young Scientists Summer Program participant Jalal Awan explains how reframing artificial intelligence as a common good co-created by global users and stakeholders could help harness its transformative power for the benefit of all.

Around seven decades ago, German philosopher Martin Heidegger questioned the essence of technology and its role in our lives. He suggested that technology was not a mere tool but an ontological entity that shapes our reality and co-evolves with us. This thesis rings eerily true today as artificial intelligence (AI) begins to weave its way into the fabric of our daily lives.

A recent McKinsey & Co. analysis of generative AI's impact on productivity envisions a future where automation engulfs more than half of today's work activities, beginning as early as 2030 and reaching a tipping point around 2045. Foundation models, such as ChatGPT, Stable Diffusion, and Meta's recently introduced Llama 2, represent more than just incremental progress. In essence, we are at the onset of the generative AI era: a paradigm shift as profound as the transformative effects of the typewriter and the radio on communication and culture in the 19th and early 20th centuries. This ascent of AI coincides with growing societal discontent towards an economic structure that exacerbates disparities and a political order that benefits autocrats.

Unfortunately, contemporary debates on AI are myopically focused on its adversarial risks. Nations at the cutting edge of AI research, such as the US and China, have entrenched themselves in a non-cooperative stance on AI. This US-China tug-of-war, swinging between de-risking and decoupling strategies, comes at a cost to crucial advancements in civilian applications. While US export rules tighten, China seeks detours through covert channels and new alliances with nations like Germany and Korea. To facilitate real advancement, a necessary first step may be to delineate military from civilian AI uses, emphasizing cooperation on civilian benefits while enforcing rigorous controls on military applications, akin to the nuclear arms protocols of the 1950s.

Undeniably, AI bears apocalyptic potential: unrestrained autonomous weaponry, misinformation spread via deepfakes, flawed financial models capable of triggering economic chaos, or, as Yuval Noah Harari notes, the hacking of the human operating system itself. Yet, while these concerns are valid, the reactions often miss the mark. The pervasive doomsday narrative underscores the need for rigorous, multidisciplinary research in areas such as AI alignment and privacy-preserving algorithms, rather than inducing fear or advocating for developmental pauses. Imagine if we shifted our understanding of AI from a tool of competition to a common good: a collective asset underpinned by the voluminous data contributions of users worldwide. This perspective reframes AI as a collective achievement rather than the domain of a few tech giants or nations, promoting inclusivity, ethical growth, and a focus on shared benefits.

Through the lens of Heidegger's "enframing", we can see generative AI and human-machine symbiosis as saving powers, opening new avenues of interaction with the world, unveiling fresh insights, and stimulating creative solutions to global issues such as climate change. In the context of developing nations, these advancements could be transformative. Large Language Models (LLMs), trained on trillions of tokens, can already understand natural language, respond to questions, and create images, videos, and guidelines from text or voice-based inputs. Generative AI and computer vision can revolutionize remote farming with personalized advice, access to financing, and proactive monitoring. Natural language processing can democratize education by surmounting linguistic barriers, while AI in healthcare can fill gaps in underserved regions. The possibilities are broad and transformative, offering a realistic opportunity to bridge societal inequities.

Securing AI's benefits calls for an ethical intervention, akin to the philosophical guidance given to medical ethics four decades ago, encouraging a multidisciplinary, value-centric approach. The values-based AI Principles of the Organisation for Economic Co-operation and Development (OECD), paired with fit-for-purpose guardrails for high-risk applications such as those in the AI Risk Management Framework of the US National Institute of Standards and Technology (NIST), may provide an effective model to build on. A hybrid framework, borrowing from the OECD's AI Principles and NIST's risk management approach, could not only foster cooperative AI development but also curb adversarial use by establishing performance benchmarks and monitoring compliance.

Today's AI landscape is reminiscent of a Heideggerian parable, in which we find ourselves intertwined with a powerful technology that both defines us and is defined by us. By embracing the perspective of AI as a force for good, we could steer its development towards mitigating inequities, reducing adversarial risks, and promoting global cooperation. Moreover, broad stakeholder buy-in would discourage free-riding, ensuring that everyone contributes data and resources to AI development, while establishing a resilient system of accountability to prevent potential mishaps. Critical to this effort is the engagement of philosophers, ethicists, policymakers, scientists, and the public in an extensive dialogue about AI and its place in our world. As we continue to co-evolve with technology, we must keep in mind that the AI of tomorrow isn't predestined; it is the product of our choices and actions today.

 

Note: This article gives the views of the author, and not the position of the IIASA blog, nor of the International Institute for Applied Systems Analysis.