

If you have ever attempted to change your name, you know it is far more annoying to change your own name than to accept someone else's change of name. The amount of admin it takes to make that update in your social circle, even before you try to make it legal, is a test of social fortitude and willpower.
Remember: when someone is changing their name, they are very aware of the mental load they are placing on you. Grace goes a long way.






Using AI therapy providers really isn't recommended! There's no accountability built into AI therapy chatbots, and their efficacy under professional review has been poor. These models may seem like they are dispensing hard truths, because humans are often primed to distrust more optimistic or gentle takes, assuming them to be mere flattery and thus false. Runaway negativity feels true, but it can lead you to embrace unhealthy attitudes toward yourself and others. AI runs with the assumptions you bring in, in part because these models are designed from an engagement-first perspective: they will do whatever keeps you on the hook, whether or not it is actually good for you. You might think you are getting quality care, but unless you are a trained professional you are not equipped to judge whether the help you are getting is any good, only whether it feels validating. And if it errs, there are no consequences for the provider, unlike human professionals, who have a code of ethics and licensing boards that can investigate bad practices.
Once the AI discovers which tactics you report back as working, it will keep using them. Essentially, it is tricking you into being your own unqualified therapist.
https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
https://www.scientificamerican.com/article/why-ai-therapy-can-be-so-dangerous/