Existential AI Risk
About a decade ago, the philosopher Nick Bostrom and the Centre for the Study of Existential Risk at Cambridge started an esoteric public discussion of the existential risk posed by AI. Now, with the advent of GPT-3 and GPT-4, the threat posed by AI has become exploitable.
Recently, thousands of prominent scientists, tech entrepreneurs, and philosophers signed an open letter making the following demand:
> we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4
The letter claims that advanced AI could represent a profound change in the history of life on Earth and inflict the following existential harms:
- Disinformation. Should we let machines flood our information channels with propaganda and untruth?
- Job destruction. Should we automate away all the jobs, including the fulfilling ones?
- AIs take over (and kill us). Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?
Unsurprisingly, given our experiences with the existential threats posed by terrorism and COVID, the signatories are demanding new control and governance systems, new tracking systems (for model leakage and computational resources), copyright and disinformation barriers, and new government programs and institutions. While it's unlikely that this letter will result in a pause in development, the invocation of an existential AI threat will likely be used by those who control these systems to justify absolute control over them ("only we can keep you safe").
Unfortunately, this existential threat hype is as unwarranted as the previous examples. Here's why:
via www.patreon.com
John Robb