The Instituto de Ciências Sociais (ICS) at the University of Lisbon recently kicked off its academic year with an inaugural lecture that I had the honor to deliver. The event focused on the critical theme of “Effective Regulation” in the realm of artificial intelligence, shedding light on the evolving landscape of AI and its impact on society.
The lecture delved into recent developments in the AI domain, highlighting Geoffrey Hinton’s departure from Google to speak publicly about the technology’s dangers. Hinton, an AI pioneer, expressed his commitment to shaping regulatory frameworks that address the inherent risks AI poses to humanity. Simultaneously, Sam Altman, CEO of OpenAI and a key figure in the advancement of ChatGPT, dedicated significant efforts to engaging with policymakers, advocating for what he termed “effective regulation” of increasingly potent AI tools.
Despite initial skepticism, these endeavors have been viewed by the public as crucial steps towards aligning technology with legal standards. Regulation, as emphasized in the lecture, holds the potential to curb the tech industry’s inclination to “move fast and break things,” a phrase synonymous with the rapid development and deployment of technology without due consideration for its societal impact.
The lecture homed in on what characterizes this regulation, why it is labeled “effective,” and why influential figures in the tech sector endorse it. By examining the regulation of data, labor, and raw materials for AI, I tried to refocus the sense of threat that derives from prophecies of “existential risks to humanity.”