This is an important insight from @random_walker and @sayashk: safety is not a property of an AI model. aisnakeoil.com/p/ai-safety-i…

Mar 12, 2024 · 6:55 PM UTC

This is related to people's unhelpful tendency to anthropomorphize models. People conceptualize an LLM as a virtual person and ask if we can trust it to behave morally. But an LLM doesn't have enough autonomy or a stable enough personality for this to be a meaningful question.
The real risk is AI models generating emails to help bioterrorists source all of the other things they need for bioterrorism 🫔😳
Replying to @binarybits
Excellent article, @random_walker @sayashk! While I mostly agree with your points, I do not understand why you mention "generated nonconsensual intimate imagery" as an exception to the rule. How can a model know which imagery is generated consensually and which is not?