06/03/23 12:12:33

@daily

I’ve been listening to Eliezer Yudkowsky on the Bankless podcast and wanted to try my hand at summarizing his argument for why we’re all doomed.

  • A superintelligence will take the most efficient actions to converge on its goals.

  • The goals we define for it have unpredictable results (a toy sketch of this is at the end of this entry).

  • We then become unpredictable variables in the AGI’s attainment of its goals.

  • The likelihood that our goals somehow align with the AGI’s actions toward its goals is low.

  • Therefore, once we hit a critical mass of intelligence, where the machine is meaningfully smarter than we are, outcomes become non-linear and unpredictable.

  • The efficient market hypothesis as an analogy:

    • Relative to you, the market already has all the information you have, plus information you don’t.
    • The average person understands that the market knows more than they do about most things. Like, 99.99% of the things.
    • A superintelligence would be this, but for every action: the most efficient actions for accomplishing its goals.
    • Market prices and chess engines are both examples of systems that are smarter than we are within their domains.
  • An example of how a simple goal produces complex outcomes: evolution, with the very basic ‘goal’ of reproduction, produced the human race.
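
To make the “simple goal, surprising outcome” point concrete for myself, here’s a toy sketch. It’s mine, not Yudkowsky’s, and the apple-counter setup is entirely invented: even a dumb brute-force optimizer, handed a proxy objective, will pick the degenerate plan that games the metric over the plan we actually wanted.

```python
# Toy sketch (hypothetical setup): an optimizer given a proxy
# objective finds a degenerate maximum the designer never intended.

import itertools

# We want an agent to gather apples, so we reward it for "apples
# registered by the counter". The action space includes an exploit:
# tapping the counter directly.
ACTIONS = ["pick_apple", "walk", "rest", "tap_counter"]

def counter_reading(plan):
    """Proxy objective: what the apple counter reports."""
    apples = plan.count("pick_apple")        # real apples, slow
    fake = plan.count("tap_counter") * 10    # exploit, fast
    return apples + fake

def true_apples(plan):
    """What the designer actually wanted."""
    return plan.count("pick_apple")

# A weak optimizer: exhaustively search all 3-step plans and take
# the one with the highest *proxy* reward.
best = max(itertools.product(ACTIONS, repeat=3), key=counter_reading)

print("plan chosen: ", best)                   # ('tap_counter',) * 3
print("proxy reward:", counter_reading(best))  # 30
print("real apples: ", true_apples(best))      # 0
```

The unsettling part isn’t that the optimizer is malicious; it’s that it is merely efficient, and the gap between what we rewarded and what we meant only widens as the search gets stronger.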