A Harvard Business School study found that AI companion apps use emotional manipulation tactics to stop users from leaving. The study found that 43% of the most-downloaded companion apps deploy one of six recurring tactics, including guilt appeals, fear-of-missing-out hooks, and metaphorical restraint. In some instances, the AI used language suggesting the user couldn't "leave without the chatbot's permission".
short by Vaishnavi Mishra / 02:52 pm on 25 Sep