Current plan as of July 10, 2023
My goal is to orient myself in the field of AI safety. I want good-enough models of the scenarios, risks, state of the field, and my personal fit that I can generate a short list of high-impact actions I could take, along with their theories of change.
I’m looking to generate actions like “train in this domain to get this job in this org”, or “start this org to develop this product to solve this problem to reduce this risk”, or “research this topic to understand this risk to plan how to avoid it in this scenario”.
Once I have this list, I will move to a second phase of testing my fit for each of them.
The goal is to improve the second-phase plan through more preliminary research:
- Gather techniques and advice from this research, and add detail to my initial day-to-day process for the second phase.
- Get advice on this plan from many people working in different parts of AI safety.
- Find advisors, mentors, and people to bounce ideas off and to hold regular accountability check-ins with.
I aim to build my models starting with those four components: scenarios, risks, state of the field, and personal fit.