existential risk

From Wiktionary, the free dictionary

English

An existential risk is a particularly severe type of global catastrophic risk.
Nuclear war is an example of an existential risk.

Etymology

The "human extinction" sense was coined by philosopher and writer Nick Bostrom in 2002.[1]

Noun

existential risk (countable and uncountable, plural existential risks)

  1. A risk which could destroy or permanently damage an entity; a risk to one's existence.
    • 2019 April 18, Gregory Travis, “How the Boeing 737 Max Disaster Looks to a Software Developer”, in IEEE Spectrum[2]:
      In an industry that relies more than anything on the appearance of total control, total safety, these two crashes pose as close to an existential risk as you can get.
    • 2020 March 30, “Tesla Faces Existential Risks”, in Seeking Alpha[3]:
      I believe TSLA faces existential risk based on what is happening in the world today, and that this recent scare and economic recession will only catalyze further share price decline.
  2. (specifically) A hypothetical future event which could cause human extinction or permanently and severely curtail humanity's potential.
    • 2008, Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks”, in Nick Bostrom, Milan M. Ćirković, editors, Global Catastrophic Risks, New York: Oxford University Press, →ISBN:
      The scenario of humanity going extinct in the next century is a disjunctive event. It could happen as a result of any of the existential risks we already know about—or some other cause which none of us foresaw.
    • 2013 February, Nick Bostrom, “Existential Risk Prevention as Global Priority”, in Global Policy[4], volume 4, number 1, archived from the original on 8 September 2020:
      But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake.
    • 2023 May 2, Josh Taylor, Alex Hern, “‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation”, in The Guardian[5], →ISSN:
      The man often touted as the godfather of AI has quit Google, citing concerns over the flood of misinformation, the possibility for AI to upend the job market, and the “existential risk” posed by the creation of a true digital intelligence.
    • 2023 July 15, George Monbiot, “With our food systems on the verge of collapse, it’s the plutocrats v life on Earth”, in The Guardian[6], →ISSN:
      So why isn’t this all over the front pages? Why, when governments know we’re facing existential risk, do they fail to act?
    • 2023 November 20, Karen Hao, Charlie Warzel, “Inside the Chaos at OpenAI”, in The Atlantic[7]:
      Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution.

References

  1. ^ Phil Torres (2015 January 21) “Problems with Defining an Existential Risk”, in Institute for Ethics and Emerging Technologies[1], retrieved 2020-08-31: “The general concept has been around for decades, but the term was coined by Nick Bostrom in his seminal 2002 paper [...].”