Customer support AI goes out of control, causing the reputation of the developer of AI code editor 'Cursor' to plummet



A chat AI deployed for customer support spread false information, damaging the company's reputation. IT professionals see the story as a lesson, warning that 'it's not good to rely entirely on AI.'

Cursor's AI glitch triggers viral fallout—and raises questions about chatbot reliability | Fortune

https://fortune.com/article/customer-support-ai-cursor-went-rogue/

The support AI in question belongs to AI startup Anysphere. Anysphere has achieved remarkable growth, reaching annual revenue of $100 million (about 14.5 billion yen) with 'Cursor,' an editor that incorporates AI code generation and text generation as standard features. The company provides AI assistants to its customers while also using them in-house as customer support agents.

However, this customer support AI assistant has in some cases given customers incorrect information.



It all started with a complaint from a Cursor user who was unexpectedly logged out of the service whenever they switched devices. When the user contacted customer support, a representative named 'Sam' replied that the logout was expected behavior under a new policy.

However, it turned out that no such policy existed and that 'Sam' was fictitious: the response came from an AI, and the new policy was a 'hallucination,' a completely fabricated explanation.

The news spread quickly through the developer community, with some users reporting that they were cancelling Cursor and others complaining about Anysphere's lack of transparency.

In response to the confusion, Anysphere co-founder Michael Truell said in a statement, 'We are investigating the bug that logged users out. We apologize for any confusion.'



Tech influencers have urged other startups to treat the incident as a lesson, and former Google chief decision scientist Cassie Kozyrkov said, 'This mess could have been avoided if leaders had understood that AI makes mistakes, that AI can't be held accountable for its mistakes, and that users hate being fooled by machines masquerading as humans.'

Amiram Shachar, CEO of security company Upwind, points out that this is not the first time a hallucinating AI chatbot has made headlines: in 2022, the airline Air Canada's chatbot misled a customer about a discount program, a dispute that ultimately led to a court ruling against the company.

Court orders Air Canada to pay damages after the company claimed it was not responsible for its chatbot's erroneous answers - GIGAZINE



'Fundamentally, the AI doesn't understand what the user is thinking,' Shachar said. 'Without the right constraints, AI will confidently provide answers it cannot back up. Highly technical users, like Cursor's customers, won't tolerate half-baked explanations.'
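As an illustration of the kind of constraint Shachar describes, the sketch below shows one common guardrail pattern: a support bot that only answers from a store of verified policies, clearly identifies itself as an AI, and escalates to a human when it has nothing grounded to say. This is a minimal hypothetical example; the policy topics, function names, and the policy store are invented for illustration and are not Anysphere's actual system.

```python
# Hypothetical sketch of a constrained support bot. Nothing here reflects
# Anysphere's real implementation; the policy store and topics are made up.

VERIFIED_POLICIES = {
    "refunds": "Refunds are available within 14 days of purchase.",
    "sessions": "Logging in on a new device does not end other sessions.",
}

def answer(topic: str) -> str:
    """Reply only with text from the verified policy store; otherwise escalate."""
    policy = VERIFIED_POLICIES.get(topic)
    if policy is None:
        # No grounded answer available: hand off to a human instead of guessing,
        # rather than hallucinating a plausible-sounding 'new policy'.
        return "[AI assistant] I can't confirm a policy on this. Escalating to a human agent."
    # Always disclose that the reply comes from an AI, addressing Kozyrkov's
    # point that users resent machines masquerading as humans.
    return f"[AI assistant] Per our documented policy: {policy}"

if __name__ == "__main__":
    print(answer("sessions"))       # grounded answer from the verified store
    print(answer("device-policy"))  # unknown topic -> escalation, not invention
```

Grounding replies in a fixed, verified source and escalating by default trades answer coverage for trustworthiness, which is exactly the trade-off Shachar suggests a highly technical audience expects.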

in Software, Posted by log1p_kr