Many agree there is plenty of reason to worry about existing A.I., including how it perpetuates structural racism, invades privacy, erodes workers’ rights, and entrenches monopolistic firms. But might a future A.I. also take over and dominate, or even destroy, humanity, in some Skynet-like scenario? Some technologists worry it might, and so does Ethan.
In fact, Ethan thinks that A.I. is one of the biggest threats facing humanity. The rest of us aren’t as convinced. We spend the episode debating the issue, in a mostly 1v3 dynamic of Ethan attempting to convince the rest of us. On this episode, we once again break our ‘rule’ and engage with an argument that technically isn’t peer-reviewed. We read Richard Ngo, an A.I. governance researcher at OpenAI. He has a white paper on A.I. safety first principles, which you can find here.