Hian Goh’s Post


Leopold Aschenbrenner's bold essay on the emergence of AI superintelligence is worth reading.

Pages 7-46: It's hard to comprehend exponential growth, but we went from a 5-year-old to an 18-year-old in intelligence in two years.
Pages 46-74: How we reach critical mass and superintelligence occurs.
Pages 74-89: We will need crazy amounts of power and resources. But if AGI is a national security threat, this will happen.

The rest is more a discourse about freedom, life, liberty, and the Oppenheimerisation of the AI arms race.

In case you don't know, Leopold started university at 16, graduated valedictorian at 19, and worked on OpenAI's safety team before he was fired.

My prediction?
Probability of hyperintelligence by 2027: 20%
Probability of hyperintelligence by 2030: 50%
Probability of this not happening, as people say it's just crazy and will never occur? Non-zero.

We need to be aware of non-zero outlier black swan events. This fits the classic black swan:
1. It's a non-zero event that could plausibly happen.
2. When it happens, it brings massive change.
3. After it happens, we all back-rationalise that it was obvious.

Where do you stand on all of this? I believe in this future. We need to be prepared.

https://lnkd.in/gGWYYWqm

Image from the essay, page 48. The full document can be found here.

Leopold Aschenbrenner


I like how this post has 10 percent of the usual engagement; it really shows how nobody is noticing this incredible change.

Kenneth Ho

Entrepreneurship Community & Conferences

1mo

This may come sooner than we expect. Coupled with humanoid robotics that self-learn, I wonder what the whole financial system will look like in the not-too-distant future.


