But isn't the potential problem that AI only truly becomes useful when we don't have to check it? In effect, it becomes sentient, at which point we're no longer necessary.
If AI ever became sentient in that manner, I think the only sensible conclusion it could reach is that there needs to be a reduction in humans. That's exactly what we do with species that become overpopulated and cause huge changes to the environment and ecosystem.
If you wrote down all the damage we're causing but changed the culprit from humans to rats, or deer, or beavers, you know full well the course of action would be a cull. We obviously don't do that because it's our own species, but an AI would have no such qualms.