“The more I learn, the more I realize how much I don’t know.”
– Albert Einstein
From Amodei and Dario, et al  , accidents are defined as: unintended and harmful behavior that may emerge from poor design of real-world AI systems.
One major class of AI accidents are unintended side effects, which can result from a poorly defined loss function, amongst other things.
The second class of accidents involve reward hacking, and this is when the AI/AGI figures out a way to manipulate its reward function to get the reward without doing the intended task.
The final class of AI accidents we discussed was scalable oversight, and that dealt with how an AI could interact with the world given expensive cost functions, and how to deal with this.
Regulation is most likely going to be required, but how much? And by whom?
What are some of the downsides and upsides of such regulation?
View Full Lecture PDF Below: