05/03/23 16:20:59

This week was fairly meh. I focused a good bit on probability theory.

Summary

  • I managed to close out the ‘Art of Living’ this week and have started on Blackson’s book on Ancient Greek philosophy.
  • I learned a lot about probability theory, even if I still do feel a bit stupid with it. I’d planned to set myself a big test today, but I’ll need another week; I also want to keep it something I’m able to engage with.
  • I spent a small amount of time looking at the IPCC report. I think there is definitely something to analysing the methods it uses as my own kind of learning.
  • I’m learning a bit more about Socrates and what he sees as the good life: the search for wisdom to nourish the rational soul (like the Stoics, really).
  • Big thought this week is that natural theory is not morality. Just because we’re designed to do something doesn’t mean it’s right. So morality and natural theory aren’t as linked as I initially thought.
  • Can I look at practicing probability with some general project?
  • Started looking at the hard-steps model and power laws; need to try and understand these.
  • In general, I think I did pretty well in dealing with the shittiness this week. The Aurelius quote kept popping up, the one I can’t find, about how easy it is to retreat into oneself (for better judgment).
  • Not that it is easy, just to convince oneself that it’s so.
  • I really need to try and keep things like space and science in view. They’re inspiring to me, and keeping that awe is important, granted it’s not always there.
  • Thinking of reading The Black Swan and one of Bryan Caplan’s books.
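Since power laws came up above (and heavy tails are the whole premise of The Black Swan), here’s a quick sketch I could use as a first probability exercise. It’s my own illustration, not from any of the books: drawing from a Pareto distribution via inverse-transform sampling and seeing how the largest draw dwarfs the typical one.

```python
import random

random.seed(0)  # make the sketch reproducible

def pareto_sample(alpha, n):
    # Inverse-transform sampling: if U ~ Uniform(0, 1), then
    # U ** (-1/alpha) follows a Pareto(alpha) distribution with x_min = 1.
    return [random.random() ** (-1.0 / alpha) for _ in range(n)]

# alpha = 1.5 gives a finite mean but infinite variance: a heavy tail.
samples = pareto_sample(1.5, 10_000)
mean = sum(samples) / len(samples)
top = max(samples)

# In a heavy-tailed world, the single largest draw towers over the average.
print(f"mean = {mean:.2f}, max = {top:.2f}")
```

The point of the exercise: with a normal distribution the maximum of 10,000 draws sits close to the mean, but with a power law one outlier can dominate the whole sample, which is the intuition behind black-swan events.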

Actions From Last Week

  • Try and get good sleep (7-8 hours): lol, didn’t get this.
  • Get some notes in for stoicism: done.
  • Start notes on ethics and my open questions: done.
  • Keep using Anki every day: not as diligent with this but fairly close
  • Finish unit 1 of the ocw probability: made progress, see summary.
  • Set up plan to complete ocw over next 2 months. What are the outputs?
  • Economics notes. Gather together notes, see where I’m at.
  • Close out elephant in the Brain.

Action

  • Notes on cog science, particularly identifying what I feel is interesting about it.
  • Anki card for probability theory.
  • Notes on ancient philosophy book.

Excerpts

06/03/23 12:12:33

@daily

I’ve been listening to Eliezer Yudkowsky on the Bankless podcast and I wanted to take a stab at his argument as to why we’re all doomed.

  • A superintelligence will take the most efficient actions to converge on its goals.

  • The goals we define it with have unpredictable results.

  • We, then, become unpredictable things in the attainment of the AGI’s goals.

  • The likelihood that our goals somehow align with the AGI’s actions towards its goals is low.

  • Therefore, once we hit a critical mass of intelligence where the robot is relatively smarter than us, results become non-linear and unpredictable.

  • The efficient market hypothesis.

    • Relative to you, the market has all the information you already have.
    • The average person understands that the market knows more than them about most things. Like, 99.99% of the things.
    • A superintelligence would be this, but for every action: the most efficient actions for accomplishing its goals.
    • Market prices and chess engines are examples of systems that are relatively smarter than we are.
  • Example of how a simple goal produces complex outcomes: how evolution, with the very basic ‘goal’ of reproduction, produced the human race.

10/03/23 16:01:50

@daily @hanson

Inadequate Equilibria

I haven’t read the book, but the notion that you might be right, or could do a better job than the experts is interesting because it crosses over with prediction markets.