Roadmaps to prevent x risks 2- presentation

Transcript
Page 1

A Roadmap: Plan of action to prevent human extinction risks

Alexey Turchin

Existential Risks Institute

http://existrisks.org/

Page 2

X-risks

Exponential growth of technologies

Unlimited possibilities

Including destruction of humanity

Page 3

Main x-risks

AI

Synthetic biology

Nuclear doomsday weapons and large-scale nuclear war

Runaway global warming

Nanotech - grey goo.

Page 4

Crowdsourcing

Contest: more than 50 ideas were added

David Pearce and Satoshi Nakamoto contributed

Page 5

Plan A1 International Control System

Plan A2 Friendly AI

Plan B Survive the catastrophe

Plan C Leave backups

Study and Promotion • Study of Friendly AI theory • Promotion of Friendly AI (Bostrom and Yudkowsky) • Fundraising (MIRI) • Teaching rationality (LessWrong) • Slowing other AI projects (recruiting scientists) • FAI free education, starter packages in programming

Solid Friendly AI theory

• Human values theory and decision theory • Proven safe, fail-safe, intrinsically safe AI • Preservation of the value system during AI self-improvement • A clear theory that is practical to implement

Seed AI

Creation of a small AI capable of recursive self-improvement and based on Friendly AI theory

Superintelligent AI • Seed AI quickly improves itself and undergoes “hard takeoff” • It becomes the dominant force on Earth • AI eliminates suffering, involuntary death, and existential risks • AI Nanny – a hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel)

Preparation • Fundraising and promotion • Textbook to rebuild civilization (Dartnell’s book “The Knowledge”) • Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway) • Survivalist communities

Building

• Underground bunkers, space colonies • Nuclear submarines • Seasteading

Natural refuges

• Uncontacted tribes • Remote villages • Remote islands • Oceanic ships • Research stations in Antarctica

Rebuilding civilisation after catastrophe • Rebuilding population • Rebuilding science and technology • Prevention of future catastrophes

Time capsules with information

• Underground storage with information and DNA for future non-human civilizations • Eternal disks from Long Now Foundation (or M-disks)

Preservation of earthly life

• Create conditions for the re-emergence of new intelligent life on Earth • Directed panspermia (Mars, Europa, space dust) • Preservation of biodiversity and highly developed animals (apes, habitats)

Prevent x-risk research because it only increases risk

• Do not advertise the idea of man-made global catastrophe • Don’t try to control risks as it would only give rise to them • As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change the probability • Do nothing

Plan of Action to Prevent Human Extinction Risks

Singleton • “A world order in which there is a single decision-making agency at the highest level” (Bostrom)

• Worldwide government system based on AI • Super AI which prevents all possible risks and provides immortality and happiness to humanity

• Colonization of the solar system, interstellar travel and Dyson spheres • Colonization of the Galaxy

• Exploring the Universe

Controlled regression

• Use a small catastrophe to prevent a large one (Willard Wells) • Luddism (Kaczynski): relinquishment of dangerous science • Creation of an ecological civilization without technology (“World Made by Hand”, anarcho-primitivism) • Limitation of personal and collective intelligence to prevent dangerous science • Anti-globalism and diversification into a multipolar world

Global catastrophe

Plan D

Improbable Ideas

Messages to ET civilizations

• Interstellar radio messages with encoded human DNA • Hoards on the Moon, frozen brains • Voyager-style spacecraft with information about humanity

Quantum immortality • If the many-worlds interpretation of QM is true, an observer will survive any sort of death, including any global catastrophe (Moravec, Tegmark) • It may be possible to make an almost univocal correspondence between observer survival and survival of a group of people (e.g. if all are in a submarine) • Other human civilizations must exist in the infinite Universe

Unfriendly AI • Kills all people and maximizes non-human values (paperclip maximiser) • People are alive but suffer extensively

Reboot of the civilization

• Several reboots may happen • Finally there will be total collapse or a new supercivilization level

Interstellar distributed humanity

• Many unconnected human civilizations • New types of space risks (space wars, planetary and stellar explosions, AI and nanoreplicators, ET civilizations)

Attracting good outcome by positive thinking

• Preventing negative thoughts about the end of the world and about violence • Maximum positive attitude “to attract” a positive outcome • Secret police which use mind control to find potential terrorists and superpowers to stop them • Start partying now

First quarter of the 21st century (2015–2025)

High-speed tech development needed to quickly pass the risk window

• Investment in super-technologies (nanotech, biotech) • High-speed technical progress helps to overcome the slow process of resource depletion • Invest more in defensive technologies than in offensive ones

Improving human intelligence and morality

• Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayesian probabilities, interest in the long-term future, LessWrong • Education and fighting cognitive biases • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower the proportion of destructive beliefs, risky behaviour, and selfishness • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent the worst forms of capitalism: the desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies

Saved by non-human intelligence

• Maybe extraterrestrials are looking out for us and will save us • Send radio messages into space asking for help if a catastrophe is inevitable • Maybe we live in a simulation and simulators will save us • The Second Coming, a miracle, or life after death

Timely achievement of immortality on the highest possible level

• Nanotech-based immortal body • Diversification of humanity into several successor species capable of living in space • Mind uploading • Integration with AI

Miniaturization for survival and invincibility • Earth crust colonization by miniaturized nano-tech bodies • Moving into a simulated world inside a small self-sustained computer

Plan A3 Rising Resilience

AI based on uploading of its creator

• Friendly to the value system of its creator • Its values consistently evolve during its self-improvement • Limited self-improvement may solve friendliness problem

Bad plans

Depopulation

• Could provide resource preservation and make control simpler • Natural causes: pandemics, war, hunger (Malthus) • Birth control (Bill Gates) • Deliberate small catastrophe (bio-weapons)

Unfriendly AI may be better than nothing

• Any super AI will have some memory about humanity • It will use simulations of human civilization to study the probability of its own existence • It may share some human values and distribute them through the Universe

Strange strategy to escape

Fermi paradox

A random strategy may help us to escape some dangers that killed all previous civilizations in space

Interstellar travel • “Orion” style, nuclear powered “generation ships” with colonists • Starships which operate on new physical principles with immortal people on board • Von Neumann self-replicating probes with human embryos

Temporary asylums in space

• Space stations as temporary asylums (ISS) • Cheap and safe launch systems

Colonisation of the Solar system • Self-sustaining colonies on Mars and large asteroids • Terraforming of planets and asteroids using self-replicating robots and building space colonies there • Millions of independent colonies inside asteroids and comet bodies in the Oort cloud

Plan A4 Space Colonization

Technological precognition

• Prediction of the future based on advanced quantum technology and avoiding dangerous world-lines • Search for potential terrorists using new scanning technologies • Special AI to predict and prevent new x-risks

Robot-replicators in space • Mechanical life • Preservation of information about humanity for billions of years • Safe narrow AI

Resurrection by another civilization

• Creation of a civilization which has a lot of common values and traits with humans • Resurrection of specific people

Risk control

Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new strains of flu) • Freezing potentially dangerous projects for 30 years • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction (DNA of viruses), or white noise in the field; internet censorship • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high-level scientists

Technology speedup • Differential technological development: develop safety and control technologies first (Bostrom) • Laws and economic stimulus (Richard Posner, carbon emissions trade)

Surveillance • International control systems (like IAEA) • Internet control

Worldwide risk prevention authority

• Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police • Peaceful unification of the planet based on a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards

Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Bio-shield – worldwide immune system • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI into a uniform monitoring, security and control system • Isolation of risk sources at a great distance from Earth

Research • Create, prove and widely promote a long-term future model (this map is based on an exponential tech development future model) • Invest in long-term prediction studies

• Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest-to-prevent risks • Education of “world saviors”: choose the best students, provide them with courses, money and tasks

• Create an x-risks wiki and an x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other (“world saviours arrogance” problem) • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field of assistance in x-risk • Unconstrained funding of x-risks research for many different approaches • Help the best scientists in the field (Bostrom) to create high-quality x-risk research • Study the existing system of decision-making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risk prevention

Planet unification war • War for world domination • One country uses bioweapons to kill all of the world’s population except its own, which is immunized • Use of a super-technology (like nanotech) to quickly gain global military dominance • Blackmail by a Doomsday Machine for world domination

Global catastrophe

Fatal mistakes in world control system

• Wrong command makes the shield system attack people • Geo-engineering goes awry • World totalitarianism leads to catastrophe

Social support • Cooperative scientific community with shared knowledge and productive rivalry (CSER) • Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but that we could and should act • Public support, street action (anti-nuclear protests in the 1980s) • Political support (parties) • Scientific attributes: peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention • Productive cooperation between scientists and society based on trust

Elimination of certain risks

• Universal vaccines, UV cleaners • Asteroid detection (WISE) • Transition to renewable energy, cutting emissions, carbon capture • Stop LHC, SETI and METI until AI is created • Rogue countries integrated, or crushed, or prevented from having dangerous weapons programs and advanced science

International cooperation

• All states in the world contribute to the UN to fight some global risks • States cooperate directly, without the UN • A group of supernations takes responsibility for x-risks prevention (US, EU, BRICS) and signs a treaty • A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war) - some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware • International law about x-risks which will punish people for raising risk (underestimating it, plotting it, risk neglect), as well as reward people for lowering x-risks, finding new risks, and for efforts in their prevention • The ability to quickly adapt to new risks and envision them in advance • International agencies dedicated to certain risks

Values transformation

• Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of the civilization becomes the first priority on all levels: in education, on a personal level, and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into main languages • Memorial and awareness days: Earth Day, Petrov Day, Asteroid Day • Artificial enlightenment • Rationality and effective altruism • Education in schools on x-risks, safety and rationality topics

Global catastrophe

Manipulation of the extinction probability

using Doomsday argument

• Decision to create more observers in case an unfavourable event X starts to happen, thereby lowering its probability (the UN++ method by Bostrom) • Lowering the birth density to get more time for the civilization

Control of the simulation (if we are in it)

• Live an interesting life so our simulation isn’t switched off • Don’t let them know that we know we live in a simulation • Hack the simulation and control it • Negotiate with the simulators or pray for help

Second half of the 21st century (2050–2100)

AI practical studies • Narrow AI • Human emulations • Value loading • FAI theory promotion to most AI teams; they agree to implement it and adapt it to their systems • Tests of FAI on non-self-improving models

Readiness • Crew training • Crews in bunkers • Crew rotation • Different types of asylums • Frozen embryos

Improving sustainability of civilization

• Intrinsically safe critical systems • Growing diversity of human beings and habitats • Increasing diversity and increasing connectedness of our civilization • Universal methods of prevention (resistant structures, strong medicine) • Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge) • Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits, etc.

Practical steps to cope with certain risks

• Instruments to capture radioactive dust • Small bio-hack groups around the world could improve the public’s understanding of biotechnology to the point where no one accidentally creates a biotechnology hazard • Developing better guidance on safe biotechnology processes and exactly why it’s safe this way and not otherwise... effectively “raising the sanity waterline” but specific to the area of biotechnology risks • Dangerous forms of biotech become centralised • DNA synthesizers are constantly connected to the Internet and all DNA is checked for dangerous fragments (see the sketch after this list) • Funding things like clean rooms with negative pressure • Methane and CO2 capture, maybe by bacteria • Invest in biodiversity of the food supply chain, prevent pest spread: 1) better quarantine law, 2) portable equipment for instant identification of alien inclusions in medium bulks of foodstuffs, and 3) further development of nonchemical ways of sterilization • “Develop and stockpile new drugs and vaccines, monitor biological agents and emerging diseases, and strengthen the capacities of local health systems to respond to pandemics” (Matheny) • Better quarantine laws • Mild birth control (female education)
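To make the “all DNA is checked for dangerous fragments” idea concrete, here is a minimal sketch of synthesizer-side screening. The hazard list and exact-substring matching are invented for illustration; real biosecurity screening works against curated sequence databases with fuzzy/homology matching, not a hard-coded set.

```python
# Minimal sketch of synthesizer-side DNA order screening (illustrative only).

HAZARD_FRAGMENTS = {           # hypothetical entries, not real sequences
    "ATGCGTACGTTAGC",
    "GGCCTTAAGGCCTA",
}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement, since a hazard can hide on either strand."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def is_order_safe(order: str) -> bool:
    """Reject an order if any known hazardous fragment appears on either strand."""
    order = order.upper()
    strands = (order, reverse_complement(order))
    return not any(frag in s for frag in HAZARD_FRAGMENTS for s in strands)

print(is_order_safe("ttttATGCGTACGTTAGCgggg"))  # False: contains a listed fragment
print(is_order_safe("ATATATATATAT"))            # True: no listed fragment
```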

Cold war and WW3 prevention

• International conflict management authority, like an international court or a secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • International law as the best instrument of conflict resolution • Peaceful integration of national states

Dramatic social changes

These could include many interesting but disparate topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.

Steps and timeline

Exact dates depend on the pace of technological progress and on our activity to prevent risks

Step 1: Planning
Step 2: Preparation
Step 3: First level of defence on low-tech level
Step 4: Second level of defence on high-tech level
Step 5: Reaching indestructibility of the civilization with near-zero x-risks

Second quarter of the 21st century (2025–2050)

High-tech bunkers • Nano-tech based adaptive structures • Small AI with human simulation inside space rock

Plan A
Prevent the catastrophe

Space colonies on large planets

• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people

Ideological payload of new technologies

• Space tech - Mars as backup • Electric car - sustainability • Self-driving cars - risks of AI and value of human life • Facebook - empathy and mutual control • Open source - transparency • Psychedelic drugs - empathy • Computer games and brain stimulation - virtual world

Design new monopoly tech with special ideological payload

Reactive and Proactive approach

Reactive: React to the most urgent and most visible risks. Pros: good timing, visible results, right resource allocation, investment only in real risks. Good for slow risks, like pandemics. Cons: can’t react to fast emergencies (AI, asteroid, collider failure). Risks are ranked by urgency.

Proactive: Envision future risks and build multilevel defence. Pros: good for coping with principally new risks, enough time to build a defence. Cons: investment in fighting imaginable risks, no clear reward, problems with identifying new risks and discounting mad ideas. Risks are ranked by probability. (A toy comparison of the two ranking policies follows.)
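A minimal sketch of the difference between the two policies, with invented risks and numbers; the only point is that ranking by urgency and ranking by probability-weighted harm can produce different priority orders.

```python
# Toy comparison of reactive vs proactive risk prioritization (invented numbers).

risks = [
    # (name, urgency, probability, harm) -- all on a 0..1 scale
    ("pandemic",        0.8, 0.30, 0.3),
    ("asteroid impact", 0.1, 0.01, 1.0),
    ("unfriendly AI",   0.2, 0.10, 1.0),
]

reactive  = sorted(risks, key=lambda r: r[1], reverse=True)         # by urgency
proactive = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)  # by p * harm

print("reactive order: ", [name for name, *_ in reactive])   # pandemic first
print("proactive order:", [name for name, *_ in proactive])  # unfriendly AI first
```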

Plan A1.2 Decentralized monitoring of risks

Decentralized risks monitoring

• Transparent society: everybody could monitor everybody • Decentralized control (local police, net solutions): local police could handle local crime and terrorists; local health authorities could find and prevent disease spread; if we have many x-risks peers, they could control their neighbourhood in their professional space; Anonymous hacker groups; Google search control • “Anonymous”-style hacker groups: a large group of “world saviours” • Neighbour watch: each scientist in a dangerous field looks after several of his friends • Monitoring of smoke, not fire: search for predictors of dangerous activity • Net-based safety solutions • Laws and economic stimulus (Richard Posner, carbon emissions trade): prizes for any risk found and prevented • World democracy based on internet voting • Mild monitoring of potential signs of dangerous activity (internet), using deep learning • High level of horizontal connectivity between people

Useful ideas to limit catastrophe scale

Limit the impact of a catastrophe by implementing measures to slow its growth and the area it affects. Quarantine, improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

Increase the time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.

Worldwide x-risk prevention exercises

Page 6

Plan A is complex:

We should do all of it simultaneously: international control, AI, robustness, and space

Plan A1.1 International Control System

Plan A2 Friendly AI

Plan A3 Rising Resilience

Plan A4 Space Colonization

Plan A1.2 Decentralized monitoring of risks

Page 7

Plans B, C and D have smaller chances of success

Plan B Survive the catastrophe

Plan C Leave backups

Plan D Deus ex machina

Bad plans

Page 8

Plan A1.1 International Control System

Risk control

Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new strains of flu) • Freezing potentially dangerous projects for 30 years • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction (DNA of viruses), or white noise in the field; internet censorship • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high-level scientists

Technology speedup • Differential technological development: develop safety and control technologies first (Bostrom) • Laws and economic stimulus (Richard Posner, carbon emissions trade)

Surveillance • International control systems (like IAEA) • Internet control

Worldwide risk prevention authority

• Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police • Peaceful unification of the planet based on a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards

Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Bio-shield – worldwide immune system • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI into a uniform monitoring, security and control system • Isolation of risk sources at a great distance from Earth

Research • Create, prove and widely promote a long-term future model (this map is based on an exponential tech development future model) • Invest in long-term prediction studies

• Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest-to-prevent risks • Education of “world saviors”: choose the best students, provide them with courses, money and tasks

• Create an x-risks wiki and an x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other (“world saviours arrogance” problem) • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field of assistance in x-risk • Unconstrained funding of x-risks research for many different approaches • Help the best scientists in the field (Bostrom) to create high-quality x-risk research • Study the existing system of decision-making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risk prevention

Social support • Cooperative scientific community with shared knowledge and productive rivalry (CSER) • Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but that we could and should act • Public support, street action (anti-nuclear protests in the 1980s) • Political support (parties) • Scientific attributes: peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention • Productive cooperation between scientists and society based on trust

International cooperation

• All states in the world contribute to the UN to fight some global risks • States cooperate directly, without the UN • A group of supernations takes responsibility for x-risks prevention (US, EU, BRICS) and signs a treaty • A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war) - some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware • International law about x-risks which will punish people for raising risk (underestimating it, plotting it, risk neglect), as well as reward people for lowering x-risks, finding new risks, and for efforts in their prevention • The ability to quickly adapt to new risks and envision them in advance • International agencies dedicated to certain risks

Page 9

Research • Create, prove and widely promote a long-term future model (this map is based on an exponential tech development future model) • Invest in long-term prediction studies

• Comprehensive list of risks • Probability assessment • Prevention roadmap • Determine the most probable and the easiest-to-prevent risks • Education of “world saviors”: choose the best students, provide them with courses, money and tasks

• Create an x-risks wiki and an x-risks internet forum which could attract the best minds, but also be open to everyone and well moderated • Solve the problem that different scientists ignore each other (“world saviours arrogance” problem) • Integrate different lines of thinking about x-risks • Lower the barriers of entry into the field of assistance in x-risk • Unconstrained funding of x-risks research for many different approaches • Help the best scientists in the field (Bostrom) to create high-quality x-risk research • Study the existing system of decision-making in the UN, hire a lawyer • Integration of existing plans to prevent concrete risks (e.g. “Climate plan” by Carana) • General theory of safety and risk prevention

Step 1: Planning

Plan A1.1 International Control System

Page 10

Social support • Cooperative scientific community with shared knowledge and productive rivalry (CSER) • Popularization (articles, books, forums, media): show the public that x-risks are real and diverse, but that we could and should act • Public support, street action (anti-nuclear protests in the 1980s) • Political support (parties) • Scientific attributes: peer-reviewed journal, conferences, intergovernmental panel, international institute • Political parties for x-risks prevention • Productive cooperation between scientists and society based on trust

Step 2: Preparation

Plan A1.1 International Control System

Page 11

International cooperation

• All states in the world contribute to the UN to fight some global risks • States cooperate directly, without the UN • A group of supernations takes responsibility for x-risks prevention (US, EU, BRICS) and signs a treaty • A smaller catastrophe could help unite humanity (pandemic, small asteroid, local nuclear war) - some movement or event that will cause a paradigmatic change so that humanity becomes more existentially-risk aware • International law about x-risks which will punish people for raising risk (underestimating it, plotting it, risk neglect), as well as reward people for lowering x-risks, finding new risks, and for efforts in their prevention • The ability to quickly adapt to new risks and envision them in advance • Mild monitoring of potential signs of dangerous activity (internet), using deep learning • International agencies dedicated to certain risks

Step 3: First level of defence on low-tech level

Plan A1.1 International Control System

Page 12

Risk control

Technology bans • International ban on dangerous technologies or voluntary relinquishment (such as not creating new strains of flu) • Freezing potentially dangerous projects for 30 years • Lowering international confrontation • Lock down all risk areas beneath piles of bureaucracy, paperwork, safety requirements • Concentrate all bio, nano and AI research in several controlled centers • Control over dissemination of knowledge of mass destruction (DNA of viruses), or white noise in the field; internet censorship • Control over suicidal or terrorist ideation in bioscientists • Investments in general safety: education, facilities, control, checks, exams • Limiting and concentrating in one place an important resource: dangerous nuclear materials, high-level scientists

Technology speedup • Differential technological development: develop safety and control technologies first (Bostrom) • Laws and economic stimulus (Richard Posner, carbon emissions trade)

Surveillance • International control systems (like IAEA) • Transparent society (David Brin) • Decentralized control (local police, net solutions): local police could handle local crime and terrorists; local health authorities could find and prevent disease spread; if we have many x-risks peers, they could control their neighbourhood in their professional space; Anonymous hacker groups; Google search control

Step 3: First level of defence on low-tech level

Plan A1.1 International Control System

Page 13

Worldwide risk prevention authority

• Worldwide video-surveillance and control • “The ability when necessary to mobilise a strong global coordinated response to anticipated existential risks” (Bostrom) • Center for quick response to any emerging risk; x-risks police • Peaceful unification of the planet based on a system of international treaties • Robots for emergency liquidation of bio- and nuclear hazards

Step 4: Second level of defence on high-tech level

Plan A1.1 International Control System

Page 14

Active shields • Geoengineering against global warming • Worldwide missile defence and anti-asteroid shield • Nano-shield – distributed system of control of hazardous replicators • Bio-shield – worldwide immune system • Mind-shield – control of dangerous ideation by means of brain implants • Merger of the state, Internet and worldwide AI into a uniform monitoring, security and control system • Isolation of risk sources at a great distance from Earth

Step 4: Second level of defence on high-tech level

Plan A1.1 International Control System

Page 15

Planet unification war • War for world domination • One country uses bioweapons to kill all of the world’s population except its own, which is immunized • Use of a super-technology (like nanotech) to quickly gain global military dominance • Blackmail by a Doomsday Machine for world domination

Global catastrophe

Fatal mistakes in world control system

• Wrong command makes the shield system attack people • Geo-engineering goes awry • World totalitarianism leads to catastrophe

Global catastrophe

Risks of plan A1.1

Page 16

Singleton • “A world order in which there is a single decision-making agency at the highest level” (Bostrom)

• Worldwide government system based on AI • Super AI which prevents all possible risks and provides immortality and happiness to humanity

• Colonization of the solar system, interstellar travel and Dyson spheres • Colonization of the Galaxy

• Exploring the Universe

Result:

Page 17

Plan A1.2 Decentralised risk monitoring

Improving human intelligence and morality

• Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayesian probabilities, interest in the long-term future, LessWrong • Education and fighting cognitive biases • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower the proportion of destructive beliefs, risky behaviour, and selfishness • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent the worst forms of capitalism: the desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies

Values transformation

• Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of the civilization becomes the first priority on all levels: in education, on a personal level, and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into main languages • Memorial and awareness days: Earth Day, Petrov Day, Asteroid Day • Artificial enlightenment • Rationality and effective altruism • Education in schools on x-risks, safety and rationality topics

Cold war and WW3 prevention

• International conflict management authority, like an international court or a secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • International law as the best instrument of conflict resolution • Peaceful integration of national states

Dramatic social changes

These could include many interesting but disparate topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.

Decentralized risks monitoring

• Transparent society: everybody could monitor everybody • Decentralized control (local police, net solutions): local police could handle local crime and terrorists; local health authorities could find and prevent disease spread; if we have many x-risks peers, they could control their neighbourhood in their professional space; Anonymous hacker groups; Google search control • “Anonymous”-style hacker groups: a large group of “world saviours” • Neighbour watch: each scientist in a dangerous field looks after several of his friends • Monitoring of smoke, not fire: search for predictors of dangerous activity • Net-based safety solutions • Laws and economic stimulus (Richard Posner, carbon emissions trade): prizes for any risk found and prevented • World democracy based on internet voting • Mild monitoring of potential signs of dangerous activity (internet), using deep learning • High level of horizontal connectivity between people

Page 18

Values transformation

• Raise public desire for life extension and global security • Reduction of radical religious (ISIS) or nationalistic values • Popularity of transhumanism • Change the model of the future • “A moral case can be made that existential risk reduction is strictly more important than any other global public good” (Bostrom) • The value of the indestructibility of the civilization becomes the first priority on all levels: in education, on a personal level, and as a goal of every nation • “Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state” (Bostrom) • Movies, novels and other works of art that honestly depict x-risks and motivate their prevention • Translate the best x-risks articles and books into main languages • Memorial and awareness days: Earth Day, Petrov Day, Asteroid Day • Artificial enlightenment • Rationality and effective altruism • Education in schools on x-risks, safety and rationality topics

Plan A1.2 Decentralised risk monitoring

Step 1

Page 19

Improving human intelligence and morality

• Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayesian probabilities, interest in the long-term future, LessWrong • Education and fighting cognitive biases • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower the proportion of destructive beliefs, risky behaviour, and selfishness • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent the worst forms of capitalism: the desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies

Plan A1.2 Decentralised risk monitoring. Step 2

Page 20

Cold war and WW3 prevention

• International conflict management authority, like an international court or a secret institution • Large project which could unite humanity, like pandemic prevention • Rogue countries integrated, based on dialogue and appreciating their values • International law as the best instrument of conflict resolution • Peaceful integration of national states

Dramatic social changes

These could include many interesting but disparate topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.

Plan A1.2 Decentralised risk monitoring.

Step 3

Page 21

Decentralized risks monitoring

• Transparent society: everybody could monitor everybody • Decentralized control (local police, net solutions): local police could handle local crime and terrorists; local health authorities could find and prevent disease spread; if we have many x-risks peers, they could control their neighbourhood in their professional space; Anonymous hacker groups; Google search control • “Anonymous”-style hacker groups: a large group of “world saviours” • Neighbour watch: each scientist in a dangerous field looks after several of his friends • Monitoring of smoke, not fire: search for predictors of dangerous activity • Net-based safety solutions • Laws and economic stimulus (Richard Posner, carbon emissions trade): prizes for any risk found and prevented • World democracy based on internet voting • Mild monitoring of potential signs of dangerous activity (internet), using deep learning • High level of horizontal connectivity between people

Plan A1.2 Decentralised risk monitoring.

Step 4

Page 22

Plan A2. Friendly AI

Study and Promotion

• Study of Friendly AI theory

• Promotion of Friendly AI (Bostrom and Yudkowsky)

• Fundraising (MIRI)

• Teaching rationality (LessWrong)

• Slowing other AI projects (recruiting scientists)

• FAI free education, starter packages in programming

Solid Friendly AI theory

• Human values theory and decision theory

• Proven safe, fail-safe, intrinsically safe AI

• Preservation of the value system during

AI self-improvement

• A clear theory that is practical to implement

Seed AI

Creation of a small AI capable

of recursive self-improvement and based on

Friendly AI theory

Superintelligent AI

• Seed AI quickly improves itself and

undergoes “hard takeoff”

• It becomes the dominant force on Earth

• AI eliminates suffering, involuntary death,

and existential risks

• AI Nanny – one hypothetical variant of super

AI that only acts to prevent

existential risks (Ben Goertzel)

AI practical studies

• Narrow AI

• Human emulations

• Value loading

• FAI theory promotion to most AI teams; they agree to implement it

and adapt it to their systems

• Tests of FAI on non self-improving models

Page 23

Study and Promotion • Study of Friendly AI theory • Promotion of Friendly AI (Bostrom and Yudkowsky) • Fundraising (MIRI) • Teaching rationality (LessWrong) • Slowing other AI projects (recruiting scientists) • FAI free education, starter packages in programming

Plan A2. Friendly AI. Step 1

Page 24

Solid Friendly AI theory

• Human values theory and decision theory

• Proven safe, fail-safe, intrinsically safe AI

• Preservation of the value system during

AI self-improvement

• A clear theory that is practical to implement

Plan A2. Friendly AI. Step 2

Page 25

AI practical studies

• Narrow AI

• Human emulations

• Value loading

• FAI theory promotion to most AI teams; they agree to implement it

and adapt it to their systems

• Tests of FAI on non self-improving models

Plan A2. Friendly AI. Step 3

Page 26

Seed AI

Creation of a small AI capable of recursive self-improvement and based on Friendly AI theory

Plan A2. Friendly AI. Step 4

Page 27

Superintelligent AI • Seed AI quickly improves itself and undergoes “hard takeoff” (a toy takeoff model follows below) • It becomes the dominant force on Earth • AI eliminates suffering, involuntary death, and existential risks • AI Nanny – a hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel)

Plan A2. Friendly AI. Step 5
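As a toy illustration of why recursive self-improvement is described as a “hard takeoff” (a numerical sketch added here, not part of the roadmap): if the rate of improvement grows with current capability, growth is faster than exponential, staying quiet for a long time and then exploding.

```python
# Toy hard-takeoff model: each step the AI improves itself in proportion
# to the square of its current capability. Parameters are invented.

def takeoff(capability: float = 1.0, gain: float = 0.1, steps: int = 16):
    history = []
    for _ in range(steps):
        capability += gain * capability ** 2  # improvement feeds back on itself
        history.append(capability)
    return history

for step, c in enumerate(takeoff(), start=1):
    print(f"step {step:2d}: capability = {c:,.2f}")
# Capability creeps from 1.1 to ~6 over ten steps, then blows past 10,000
# by step 15: the discrete analogue of hyperbolic growth dc/dt = g*c^2.
```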

Page 28

High-speed tech development needed to quickly pass the risk window

• Investment in super-technologies (nanotech, biotech) • High-speed technical progress helps to overcome the slow process of resource depletion

Dramatic social changes

These could include many interesting but disparate topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.

Improving human intelligence and morality

• Nootropics, brain stimulation, and gene therapy for higher IQ • New rationality: Bayesian probabilities, interest in the long-term future, LessWrong • Education and fighting cognitive biases • High empathy for new geniuses is needed to prevent them from becoming superterrorists • Many rational, positive, and cooperative people are needed to reduce x-risks • Lower the proportion of destructive beliefs, risky behaviour, and selfishness • Engineered enlightenment: use brain science to make people more united, less aggressive; open the realm of the spiritual world to everybody • Prevent the worst forms of capitalism: the desire for short-term money reward and for changing the rules to get it. Stop greedy monopolies

Timely achievement of immortality on the highest possible level

• Nanotech-based immortal body • Diversification of humanity into several successor species capable of living in space • Mind uploading • Integration with AI

Miniaturization for survival and invincibility • Earth crust colonization by miniaturized nano-tech bodies • Moving into a simulated world inside a small self-sustained computer

AI based on uploading

of its creator

• Friendly to the value system of its creator

• Its values consistently evolve during its self-improvement

• Limited self-improvement may solve friendliness

problem

Improving sustainability of civilization

• Intrinsically safe critical systems • Growing diversity of human beings and habitats • Increasing diversity and increasing connectedness of our civilization • Universal methods of prevention (resistant structures, strong medicine) • Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge) • Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits, etc.

Plan A3. Rising Resilience

Page 29

Improving sustainability of civilization

• Intrinsically safe critical systems • Growing diversity of human beings and habitats • Increasing diversity and increasing connectedness of our civilization • Universal methods of prevention (resistant structures, strong medicine) • Building reserves (food stocks, seeds, minerals, energy, machinery, knowledge) • Widely distributed civil defence, including air and water cleaning systems, radiation meters, gas masks, medical kits, etc.

Plan A3. Rising Resilience

Step 1

Page 30

Plan A3. Rising Resilience

Step 2

Useful ideas to limit catastrophe scale

Limit the impact of a catastrophe by implementing measures to slow its growth and the area it affects. Quarantine, improve the capacity for rapid production of vaccines in response to emerging threats, or create or grow stockpiles of important medical countermeasures.

Increase the time available for preparation by improving monitoring and early detection technologies. For example, with pandemics you could support general research on the magnitude of biosecurity risks and opportunities to reduce them, and improve and connect disease surveillance systems so that novel threats can be detected and responded to more quickly.

Worldwide x-risk prevention exercises

Page 31

High-speed tech development needed to quickly pass the risk window

• Investment in super-technologies (nanotech, biotech) • High-speed technical progress helps to overcome the slow process of resource depletion

Dramatic social changes

These could include many interesting but disparate topics: demise of capitalism, hipster revolution, internet connectivity, global village, dissolving of national states. Changing the way that politics works so that the policies implemented actually have empirical backing based on what we know about systems.

Plan A3. Rising Resilience. Step 3

Page 32

Timely achievement of immortality on the highest possible level

• Nanotech-based immortal body • Diversification of humanity into several successor species capable of living in space • Mind uploading • Integration with AI

Miniaturization for survival and invincibility • Earth crust colonization by miniaturized nano-tech bodies • Moving into a simulated world inside a small self-sustained computer

Plan A3. Rising Resilience. Step 4

Page 33

Interstellar travel

• “Orion” style, nuclear powered “generation ships” with colonists

• Starships which operate on new physical principles with immortal people on board

• Von Neumann self-replicating probes with human embryos

Temporary

asylums in space

• Space stations as temporary asylums

(ISS)

• Cheap and safe launch systems

Colonisation of the Solar system

• Self-sustaining colonies on Mars and large asteroids

• Terraforming of planets and asteroids using

self-replicating robots and building space colonies there

• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud

Space colonies on large planets

• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people

Plan A4. Space colonisation

Page 34

Temporary asylums in space

• Space stations as temporary asylums (ISS) • Cheap and safe launch systems

Plan A4. Space colonisation

Step 1

Page 35

Space colonies on large planets

• Creation of space colonies on the Moon and Mars (Elon Musk) with 100-1000 people

Plan A4. Space colonisation

Step 2

Page 36

Colonisation of the Solar system

• Self-sustaining colonies on Mars and large asteroids

• Terraforming of planets and asteroids using

self-replicating robots and building space colonies there

• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud

Plan A4. Space colonisation

Step 3

Page 37

Interstellar travel

• “Orion” style, nuclear powered “generation ships” with colonists

• Starships which operate on new physical principles with immortal people on board

• Von Neumann self-replicating probes with human embryos

Plan A4. Space colonisation

Step 4

Page 38

Interstellar distributed humanity

• Many unconnected human civilizations • New types of space risks (space wars, planetary and stellar explosions, AI and nanoreplicators, ET civilizations)

Result:

Page 39

Plan B. Survive the catastrophe

Preparation • Fundraising and promotion • Textbook to rebuild civilization (Dartnell’s book “The Knowledge”) • Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway) • Survivalist communities

Building

• Underground bunkers, space colonies • Nuclear submarines • Seasteading

Natural refuges

• Uncontacted tribes • Remote villages • Remote islands • Oceanic ships • Research stations in Antarctica

Rebuilding civilisation after catastrophe • Rebuilding population • Rebuilding science and technology • Prevention of future catastrophes

Readiness • Crew training • Crews in bunkers • Crew rotation • Different types of asylums • Frozen embryos

High-tech bunkers • Nano-tech based adaptive structures • Small AI with human simulation inside space rock

Page 40

Preparation • Fundraising and promotion • Textbook to rebuild civilization (Dartnell’s book “The Knowledge”) • Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway) • Survivalist communities

Plan B. Survive the catastrophe

Step 1

Page 41

Building

• Underground bunkers, space colonies

• Nuclear submarines

• Seasteading

Natural refuges

• Uncontacted tribes

• Remote villages

• Remote islands

• Oceanic ships

• Research stations in Antarctica

Plan B. Survive the catastrophe

Step 2

Page 42

Readiness • Crew training • Crews in bunkers • Crew rotation • Different types of asylums • Frozen embryos

Plan B. Survive the catastrophe

Step 3

Page 43

High-tech bunkers

• Nano-tech based adaptive structures

• Small AI with human simulation inside space rock

Plan B. Survive the catastrophe

Step 4

Page 44

Rebuilding civilisation after catastrophe • Rebuilding population • Rebuilding science and technology • Prevention of future catastrophes

Plan B. Survive the catastrophe

Step 5

Page 45

Reboot of the civilization

• Several reboots may happen • Finally there will be total collapse or a new supercivilization level

Result:

Page 46

Time capsules with information

• Underground storage with information and DNA for future non-human civilizations • Eternal disks from Long Now Foundation (or M-disks)

Preservation of earthly life

• Create conditions for the re-emergence of new intelligent life on Earth • Directed panspermia (Mars, Europa, space dust) • Preservation of biodiversity and highly developed animals (apes, habitats)

Messages to

ET civilizations

• Interstellar radio messages with encoded

human DNA

• Hoards on the Moon, frozen brains

• Voyager-style spacecraft with information about humanity

Robot-replicators in space • Mechanical life • Preservation of information about humanity for billions of years • Safe narrow AI

Plan C. Leave backups

Page 47

Time capsules with information

• Underground storage with information and DNA for future non-human civilizations • Eternal disks from Long Now Foundation (or M-disks)

Plan C. Leave backups

Step 1

Page 48

Messages to

ET civilizations

• Interstellar radio messages with encoded

human DNA

• Hoards on the Moon, frozen brains

• Voyager-style spacecraft with information about humanity

Plan C. Leave backups. Step 2

Page 49

Preservation of earthly life

• Create conditions for the re-emergence of new intelligent life on Earth • Directed panspermia (Mars, Europa, space dust) • Preservation of biodiversity and highly developed animals (apes, habitats)

Plan C. Leave backups. Step 3

Page 50

Robot-replicators in space • Mechanical life • Preservation of information about humanity for billions of years • Safe narrow AI

Plan C. Leave backups. Step 4

Page 51

Resurrection by another civilization

• Creation of a civilization which has a lot of common values and traits with humans • Resurrection of specific people

Result:

Page 52

Quantum immortality

• If the many-worlds interpretation of QM is true, an observer will survive any sort of

death including any global catastrophe (Moravec, Tegmark)

• It may be possible to make an almost univocal correspondence between observer survival and survival of a group of people (e.g. if all are in a submarine)

• Other human civilizations must exist in the infinite Universe

Saved by non-human

intelligence

• Maybe extraterrestrials

are looking out for us and will

save us

• Send radio messages into space

asking for help if a catastrophe is

inevitable

• Maybe we live in a simulation

and simulators will save us

• The Second Coming,

a miracle, or life after death

Strange strategy

to escape

Fermi paradox

Random strategy may help us to escape

some dangers that killed all previous

civilizations in space

Technological

precognition

• Prediction of the future based

on advanced quantum technology and avoiding

dangerous world-lines

• Search for potential terrorists

using new scanning technologies

• Special AI to predict and prevent new x-risks

Manipulation of the

extinction probability

using Doomsday argument

• Decision to create more observers in case an unfavourable event X starts to happen, thereby lowering its probability (the UN++ method by Bostrom) • Lowering the birth density to get more time for the civilization

Control of the simulation

(if we are in it)

• Live an interesting life so our simulation isn’t switched off • Don’t let them know that we know we live in a simulation • Hack the simulation and control it • Negotiate with the simulators or pray for help

Plan D. Improbable ideas

Page 53

Saved by non-human intelligence

• Maybe extraterrestrials are looking out for us and will save us • Send radio messages into space asking for help if a catastrophe is inevitable • Maybe we live in a simulation and simulators will save us • The Second Coming, a miracle, or life after death

Plan D. Improbable ideas

Idea 1

Page 54

Quantum immortality • If the many-worlds interpretation of QM is true, an observer will survive any sort of death, including any global catastrophe (Moravec, Tegmark) • It may be possible to make an almost univocal correspondence between observer survival and survival of a group of people (e.g. if all are in a submarine) • Other human civilizations must exist in the infinite Universe

Plan D. Improbable ideas. Idea 2

Page 55

Strange strategy to escape

Fermi paradox

A random strategy may help us to escape some dangers that killed all previous civilizations in space

Plan D. Improbable ideas

Idea 3

Page 56

Technological precognition

• Prediction of the future based on advanced quantum technology and avoiding dangerous world-lines • Search for potential terrorists using new scanning technologies • Special AI to predict and prevent new x-risks

Plan D. Improbable ideas

Idea 4

Page 57

Manipulation of the extinction probability

using Doomsday argument

• Decision to create more observers in case an unfavourable event X starts to happen, thereby lowering its probability (the UN++ method by Bostrom) • Lowering the birth density to get more time for the civilization (a short sketch of the underlying arithmetic follows below)

Plan D. Improbable ideas

Idea 5
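For context, the arithmetic behind the Doomsday argument that the UN++ idea tries to manipulate, in its standard Gott/Carter-Leslie form (a textbook sketch, not from the roadmap itself): treat your birth rank r as uniformly distributed among all N humans who will ever live.

```latex
\[
  P\!\left(\tfrac{r}{N} > \alpha\right) = 1-\alpha
  \quad\Longrightarrow\quad
  N < \frac{r}{\alpha} \ \text{with confidence } 1-\alpha .
\]
% With roughly $r \approx 10^{11}$ humans born so far and $\alpha = 0.05$:
\[
  N < \frac{10^{11}}{0.05} = 2\times 10^{12} \quad \text{(95\% confidence)}.
\]
```

The UN++ proposal exploits this kind of reasoning in reverse: committing to create many more observers if X begins makes worlds where X begins anthropically less probable for a random observer.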

Page 58

Control of the simulation (if we are in it)

• Live an interesting life so our simulation isn’t switched off • Don’t let them know that we know we live in a simulation • Hack the simulation and control it • Negotiate with the simulators or pray for help

Plan D. Improbable ideas

Idea 6

Page 59

Prevent x-risk research because it only increases risk

• Do not advertise the idea of man-made global catastrophe • Don’t try to control risks as it would only give rise to them • As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change the probability • Do nothing

Controlled regression

• Use a small catastrophe to prevent a large one (Willard Wells) • Luddism (Kaczynski): relinquishment of dangerous science • Creation of an ecological civilization without technology (“World Made by Hand”, anarcho-primitivism) • Limitation of personal and collective intelligence to prevent dangerous science • Anti-globalism and diversification into a multipolar world

Attracting good outcome by positive thinking

• Preventing negative thoughts about the end of the world and about violence • Maximum positive attitude “to attract” a positive outcome • Secret police which use mind control to find potential terrorists and superpowers to stop them • Start partying now

Depopulation

• Could provide resource preservation and make control simpler • Natural causes: pandemics, war, hunger (Malthus) • Birth control (Bill Gates) • Deliberate small catastrophe (bio-weapons)

Unfriendly AI may be better than nothing

• Any super AI will have some memory about humanity • It will use simulations of human civilization to study the probability of its own existence • It may share some human values and distribute them through the Universe

Bad plans

Page 60

Prevent x-risk research because it only increases risk

• Do not advertise the idea of man-made global catastrophe • Don’t try to control risks as it would only give rise to them • As we can’t measure the probability of global catastrophe, it may be unreasonable to try to change the probability • Do nothing

Bad plans Idea 1

Page 61

Controlled regression

• Use a small catastrophe to prevent a large one (Willard Wells) • Luddism (Kaczynski): relinquishment of dangerous science • Creation of an ecological civilization without technology (“World Made by Hand”, anarcho-primitivism) • Limitation of personal and collective intelligence to prevent dangerous science • Anti-globalism and diversification into a multipolar world

Bad plans Idea 2

Page 62

Bad plans. Idea 3

Depopulation

• Could provide resource preservation and make control simpler • Natural causes: pandemics, war, hunger (Malthus) • Extreme birth control • Deliberate small catastrophe (bio-weapons)

Page 63

Unfriendly AI may be better than nothing

• Any super AI will have some memory about humanity • It will use simulations of human civilization to study the probability of its own existence • It may share some human values and distribute them through the Universe

Bad plans. Idea 4

Page 64

Attracting good outcome by positive thinking

• Preventing negative thoughts about the end of the world and about violence • Maximum positive attitude “to attract” a positive outcome • Secret police which use mind control to find potential terrorists and superpowers to stop them • Start partying now

Bad plans. Idea 5

Page 65

Dynamic roadmaps

The next stage of the research will be the creation of collectively editable, wiki-style roadmaps

They will cover all existing topics of transhumanism and future studies

Create an AI system based on the roadmaps, or one working on their improvement

Page 66

You can read all the roadmaps at

http://immortality-roadmap.com

http://existrisks.org/

