Lately, I’ve stopped caring about AI alignment – but not because I don’t believe it’s a problem. Instead, I’ve come to realize we face a more fundamental one that we still haven’t actually solved: the HUMAN alignment problem.
I used to worry about AI alignment, a bit. It struck me as plausibly dangerous and technologically imminent. But as long as megalomaniacs, narcissists, and assholes are building and training our AIs, I really don’t expect the result to be a good one, no matter how successful they are at it. Or, alternatively, given the type of people running our civilization, we might actually HOPE that they fail to “solve” alignment, so that there’s an off chance the AI turns out to be more compassionate and humane than its creators (much to the techbros’ regret!).