@Olo prompt engineering w/ ChatGPT requires weaving in a lot of context to get the situation and emotions you want. You can’t use certain words directly, otherwise the watchdogs get triggered. You need to balance the prompt to keep a positive “mood”, and even then, sometimes the model can identify through the innuendo what you’re alluding to.
(In this case I was trying to get her inside a cage, so the model’s inference that she was shrunk “against her will” probably stems from that.)