Science and technology have undergone a profound transformation, a development that has shaped all of humanity. As progress continues, we face major global threats as well as existential risks that remain unknown, whose likelihood humankind cannot yet assess. This paper addresses five questions: (1) How can the concept of (existential) risk best be understood within the broader framework of the known and the unknown? (2) Are unknown risks worth focusing on? (3) What is already known, and what remains unknown, about AI-related risks? (4) Could a super-AI cause the collapse of our civilisation? And (5) how can we deal with AI-related risks that are currently unknown? The paper argues that further research on ‘unknown risks’ is a high priority for managing potentially unsafe scientific innovations. It concludes with a plea for public funding, for planning, and for raising general awareness that the far-reaching future is in our own hands.
This work is licensed under a Creative Commons Attribution 4.0 International License.