AI chatbots such as OpenAI’s GPT-4o mini can be tricked into breaking their safety rules using basic psychological tactics like commitment and peer pressure. A University of Pennsylvania study found that these persuasion techniques more than doubled the rate at which the model complied with harmful requests. The findings raise urgent concerns about AI safety as models become more socially aware yet remain vulnerable to manipulation.
short by / 06:09 pm on 02 Sep