<aside>
💡 Initially written for EAG Bay Area February 2024
</aside>
What is this project?
TL;DR: An international AI governance workshop bringing together economic policymakers from the Asia-Pacific region.
The PECC
The Pacific Economic Cooperation Council (PECC) is a side organization of APEC. It's a multi-stakeholder think tank, bringing together academia, businesses, and governments to work on regional economic cooperation.
Its membership includes Asia-Pacific countries that are strategic for AI, such as China and the US.
The PECC committees regularly organize workshops on important economic topics in the region (e.g. deep-sea mining in the Pacific).
The workshop
Nicolas Miailhe (of The Future Society, an AI governance think tank) secured the support of the French, American, and Singaporean committees to organize a workshop on AI governance.
The workshop will take place in San Francisco on September 18 and 19.
Theory of Change
Direct outcomes of the workshop
- A set of AI policy recommendations is signed by all actors.
    - It contains policy recommendations such as:
        - The need to invest in AI safety
        - The need for national AI safety institutes to cooperate
        - The need to regulate now in a way that stays ahead of future developments
        - Probably more
    - It has legitimacy because it is signed by representatives of many countries and organizations.
- Attendees have better AI governance takes:
    - They see the current state of AI.
    - They see the technological development trends: where the technology will be in three years, not just where it is today.
    - They see that the technology is dangerous and that safety issues will not be solved by default.
    - They see that economic incentives are pushing AI developers towards dangerous situations.
    - They see that safety interests are aligned with economic interests, and stop seeing AI innovation and regulation as opposed.
    - The Overton window expands: they become aware of more types of risks, especially catastrophic and long-term ones.
- Attendees have access to a network of experts they trust who know what the risks are.
    - Those experts are more oriented towards AI safety than the base distribution of experts in groups like OECD.AI.
    - Some of the AI safety experts there are Asian and may find it easier to work with Asian countries (China, Japan, Korea).
Longer-term outcomes caused by the workshop
- The AI Safety Summit in France is more pro-safety.
    - Recommendations from the workshop are reused at the summit.
        - How: the recommendations are given to the French government, which reuses them.
        - Why it works: Nicolas has contacts among the organizers of the French AI Safety Summit and will send them the workshop's recommendations.
        - Why it works: the recommendations carry weight because they are signed (or at least agreed by consensus) by many representatives.
    - People who attended the workshop also attend or organize the summit.
        - How: we sent invitations to the organizers of the summit; Nicolas knows them and will ask them personally to come.
- APEC works on AI safety, which leads to international cooperation.
    - How: a new committee on AI safety and governance is created, which works with the existing committees on standardization and the digital economy.
        - This committee will drive cooperation across APEC countries.
    - Recommendations from PECC workshops are automatically included in APEC policy somehow?
    - Many people from PECC and APEC were at the workshop, and they now have better takes.
    - APEC/PECC and AI safety experts know each other and can work together.
- Other international organizations, such as APEC, the OECD, and the European Commission, work more on AI safety.
    - European Commission:
        - Present the recommendations to the Commission.
        - People who work there come to the workshop and improve their AI safety takes.
    - Also: OECD, UN, DG Connect, French government, British government.
        - People who work there come to the workshop and improve their AI safety takes.