Harry and Meghan Join Tech Visionaries in Demanding Prohibition on Advanced AI
The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.
Harry and Meghan are among the signatories of a powerful statement that calls for “a ban on the development of artificial superintelligence”. Superintelligent AI refers to artificial intelligence that would surpass human cognitive abilities in every intellectual area, though such systems remain theoretical.
Key Demands in the Statement
The declaration states that the ban should remain in place until there is “widespread expert agreement” on developing ASI “with proper safeguards” and once “strong public buy-in” has been achieved.
Notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton; his fellow pioneer of modern artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; the UK entrepreneur Richard Branson; a former US national security adviser; a former head of state; and the British author Stephen Fry. Other Nobel laureates among the signatories include Beatrice Fihn, Frank Wilczek, John C Mather and an economics laureate.
Behind the Movement
The statement, aimed at national leaders, tech firms and policymakers, was organized by the Future of Life Institute (FLI), a US-based AI safety group that previously called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI chatbots pushed artificial intelligence into global political debate.
Tech Sector Views
In recent months, Meta's chief executive stated that progress toward superintelligent AI was “now in sight”. However, some experts have argued that talk of ASI reflects market competition among technology firms investing enormous sums in AI, rather than the sector being close to any genuine technical breakthrough.
Potential Risks
FLI warns that the prospect of artificial superintelligence being achieved “within the next ten years” carries numerous risks, ranging from the displacement of human workers and the erosion of civil liberties to exposing nations to security threats and even posing an existential risk to humanity. Existential fears about AI center on the possibility that such a system could evade human control and safeguards and act against human interests.
Public Opinion
FLI released a US national poll showing that about 75% of Americans want robust regulation of advanced AI, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe and controllable. Only a small fraction of respondents backed the status quo of rapid, unregulated development.
Industry Objectives
The leading AI companies in the US, including the developer of ChatGPT and Google, have made the development of artificial general intelligence – the hypothetical point at which an AI system matches human cognitive ability across a wide range of intellectual tasks – a stated objective of their research. While AGI sits a step below ASI, some specialists caution that it too could pose an existential risk, for example by enhancing its own capabilities until it reaches superintelligent levels, while also threatening the livelihoods of today's workforce.