Intergenerational Justice Review
https://igjr.org/ojs/index.php/igjr
ISSN 2190-6335
<p>Articles in IGJR are published under the Creative Commons licence "CC BY 4.0". Under this licence, an article may be edited and changed, but the author must always be credited for the original work. By submitting your article to IGJR, you agree to its publication under this licence. Please contact us if you do not want your article to be published under CC BY 4.0.</p>
Table of Contents
https://igjr.org/ojs/index.php/igjr/article/view/1226
IGJR Editors
Copyright (c) 2023 Joerg Tremmel, Grace Clover, Markus Rutsche
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1226

Richard Fisher: The Long View / Roman Krznaric: The Good Ancestor
https://igjr.org/ojs/index.php/igjr/article/view/1231
<p>Richard Fisher: The Long View: Why We Need to Transform How the World Sees Time</p> <p>Roman Krznaric: The Good Ancestor: How to Think Long Term in a Short-Term World</p>
Grace Clover
Copyright (c) 2023 Grace Clover
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1231

Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction
https://igjr.org/ojs/index.php/igjr/article/view/1232
<p>Thomas Moynihan: X-Risk: How Humanity Discovered its Own Extinction</p>
Kritika Maheshwari
Copyright (c) 2023 Kritika Maheshwari
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1232

Human rights and climate risks for future generations: How moral obligations and the non-discrimination principle can be applied
https://igjr.org/ojs/index.php/igjr/article/view/1229
<p>From an ethical point of view, preventing the development of conditions that threaten the existence of future generations is a necessity; but to what extent can this argument be made using the language of human rights? I contend in this article that this language can provide us with arguments for extending greater consideration to the risks we may be imposing on future generations, and for the institutional representation of these generations’ interests. Applying a human rights perspective to issues of future concern enables us to formulate obligations of current generations towards upcoming ones. Further, I consider how the point in time at which a person is born represents a (morally wrong) ground for discrimination.</p>
Christoph Herrler
Copyright (c) 2023 Christoph Herrler
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1229

The post-antibiotic era: An existential threat for humanity
https://igjr.org/ojs/index.php/igjr/article/view/1230
<p>Currently, humanity faces the risk of running out of working antibiotics. Such a post-antibiotic era bears tremendous risks, such as globally spread or even pandemic bacterial infections. These infections would thus become untreatable and possibly lethal, particularly endangering the health (care) of future generations. This paper discusses this acute concern for humanity in three main steps. After first elaborating on the role of antibiotics and of emerging resistance in modern medicine, the focus turns to the current scope of the antibiotics problem and the prognosis of its future escalation. The possibility of a way out and its obstacles are then addressed, before the existential threat of a post-antibiotic era is finally assessed.</p>
Dominik Koesling
Claudia Bozzaro
Copyright (c) 2023 Dominik Koesling, Claudia Bozzaro
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1230

Unknown risks and the collapse of human civilisation: A review of the AI-related scenarios
https://igjr.org/ojs/index.php/igjr/article/view/1228
<p>Science and technology have undergone a great transition, a development that has shaped all of humanity. As progress continues, we face major global threats and unknown existential risks, even though humankind remains uncertain about how likely unknown risks are to occur. This paper addresses five straightforward questions: (1) How can we best understand the concept of (existential) risks within the broader framework of the known and the unknown? (2) Are unknown risks worth focusing on? (3) What is already known and unknown about AI-related risks? (4) Can a super-AI collapse our civilisation? Furthermore, (5) how can we deal with AI-related risks that are currently unknown? The paper argues that further research in the area of ‘unknown risks’ is a high priority in order to manage potentially unsafe scientific innovations. It concludes with a plea for public funding, planning and raising general awareness that the far-reaching future is in our own hands.</p>
Augustine U. Akah
Copyright (c) 2023 Augustine U. Akah
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1228

Editorial
https://igjr.org/ojs/index.php/igjr/article/view/1227
Jörg Tremmel
Grace Clover
Markus Rutsche
Copyright (c) 2023 Jörg Tremmel, Grace Clover, Markus Rutsche
http://creativecommons.org/licenses/by/4.0
Published: 2023-08-31 | Vol. 8, No. 2 | DOI: 10.24357/igjr.8.2.1227