1.1 Priority Objection: AGI Is Too Far Away, So It Isn't Worth Worrying About
1.2 Priority Objection: A Soft Takeoff Is More Likely, So We Will Have Time to Prepare
1.3 Priority Objection: There Is No Obvious Path from Current AI to AGI
1.4 Priority Objection: Something Else Is More Important Than AI Safety/Alignment
1.5 Priority Objection: Short-Term AI Concerns Are More Important Than AI Safety
Arguments in this category typically grant the risk proposition but hold that other priorities are more important, that AGI is too far off, and so on. Thoughts:
2.1 Technical Objection: AI/AGI Doesn't Exist; Developments in AI Are Not Necessarily Progress Toward AGI
2.2 Technical Objection: Superintelligence Is Impossible
2.3 Technical Objection: Self-Improvement Is Impossible
2.4 Technical Objection: AI Can't Be Conscious
Proponents argue that, in order to be dangerous, AI has to be conscious.
2.5 Technical Objection: AI Can Just Be a Tool
2.6 Technical Objection: We Can Always Just Turn It Off
2.7 Technical Objection: We Can Reprogram AIs If We Don't Like What They Do
2.8 Technical Objection: AI Doesn't Have a Body, So It Can't Hurt Us
2.9 Technical Objection: If AI Is as Capable as You Say, It Will Not Make Dumb Mistakes
2.10 Technical Objection: Superintelligence Would (Probably) Not Be Catastrophic
2.11 Technical Objection: Self-Preservation and Control Drives Don't Just Appear; They Have to Be Programmed In
2.12 Technical Objection: AI Can't Generate Novel Plans
(From Kaj Sotala - Disjunctive Scenarios of AI Risk) Core arguments for AI safety can often be reduced to:
3.1 AI Safety Objection: AI Safety Can't Be Done Today
3.2 AI Safety Objection: AI Can't Be Safe
4.1 Ethical Objection: Superintelligence Is Benevolence
4.2 Ethical Objection: Let the Smarter Beings Win
5.1 Biased Objection: AI Safety Researchers Are Non-Coders
5.2 Biased Objection: The Majority of AI Researchers Are Not Worried
5.3 Biased Objection: Keep It Quiet
5.4 Biased Objection: Safety Work Just Creates Overhead, Slowing Down Research
5.5 Biased Objection: Heads in the Sand
5.6 Biased Objection: If We Don't Do It, Someone Else Will
5.7 Biased Objection: AI Safety Requires Global Cooperation
6.1 Miscellaneous Objection: So Easy It Will Be Solved Automatically
6.2 Miscellaneous Objection: AI Regulation Will Prevent Problems
"The development of full artificial intelligence could spell the end of the human race." (Stephen Hawking)
"Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." (Sam Altman)
AI could view us as a threat.
This is an existential threat.
AI has the potential to destroy civilization.
An open letter calling for a pause on giant AI experiments more powerful than GPT-4 has been signed by many researchers steeped in the field.
Visit pauseai.info and futureoflife.org to contribute to AI safety and AI governance.