This is related to people's unhelpful tendency to anthropomorphize models. People conceptualize an LLM as a virtual person and ask if we can trust it to behave morally. But an LLM doesn't have enough autonomy or a stable enough personality for this to be a meaningful question.
Excellent article @random_walker @sayashk ! While I mostly agree with your points, I don't understand why you mention "generated nonconsensual intimate imagery" as an exception to the rule. How can a model know which imagery is generated consensually and which is not?