What is the difference between 'people who like AI' and 'people who hate AI'?



These days it's common to have ChatGPT draft emails, let AI algorithms recommend music and movies, and even use AI to diagnose illnesses. Yet even as AI becomes more prevalent in society, some people enjoy using AI tools while others dislike them. Paul Jones, Associate Dean for Education and Student Experience at Aston Business School, Aston University, UK, explains what separates those who 'like AI' from those who 'dislike AI.'

Why do some of us love AI, while others hate it? The answer is in how our brains perceive risk and trust
https://theconversation.com/why-do-some-of-us-love-ai-while-others-hate-it-the-answer-is-in-how-our-brains-perceive-risk-and-trust-268588



◆Algorithm aversion
Jones cites algorithm aversion as one reason some people dislike AI: the tendency to prefer human decisions over algorithmic ones. Research has shown that people are more forgiving of a mistake made by a human than of the same mistake made by an algorithm.

Humans tend to trust things they can easily understand. Traditional systems are familiar and intuitive: turning a key starts a car, pressing a button calls an elevator. Many AI systems, by contrast, operate as black boxes, concealing the logic that connects what you put in to the answer that comes out.

Jones points out that people like to see causal relationships and validate decisions for themselves; being unable to do so with AI systems leaves them feeling powerless, which helps explain why some turn against algorithms.



◆Projecting emotions onto AI
While most people understand that AI doesn't have a mind, they still tend to project emotions and intentions onto it. Overly polite responses from a chatbot can feel creepy, and eerily accurate recommendation systems can feel intrusive, leaving users with a sense of being manipulated. 'This is a form of anthropomorphism: attributing human-like intentions to a non-human system,' Jones said.

◆Dislike of AI making mistakes
An interesting finding from behavioral science is that people are more tolerant of human error than of machine error. When a human makes a mistake, people can understand and even empathize. But when an algorithm errs, especially one that has been billed as objective or data-driven, people feel betrayed.

This is related to expectancy violation, which triggers reactions such as discomfort and loss of trust when people's predictions and expectations are betrayed. Because people trust AI systems to be logical and fair, they feel strong resentment when those systems fail, whether through image misrecognition, biased output, or inappropriate recommendations.



◆Realistic concerns about AI
For some, AI is not merely unfamiliar; it poses real fears and threats. Teachers, writers, lawyers, and designers are facing the arrival of AI tools that can replicate parts of their work. Jones points out that this isn't just about automation; it's about how skills are valued and what it means to be human. Worrying that their expertise and uniqueness will be devalued, some people respond to AI with resistance and defensiveness.

◆Lack of emotional cues
Human trust isn't built by logic alone; it comes from reading tone of voice, facial expressions, hesitation, eye contact, and so on. AI offers none of these cues, producing an 'uncanny valley' in which AI comes close to being human yet somehow feels creepy.

'In a world of deepfakes and algorithmic decision-making, this lack of emotional resonance is a problem, not because the AI is doing something wrong, but because we don't know how to feel about it,' Jones said.

◆Learned distrust
When discussing AI aversion, it's important to note that not all skepticism is irrational: AI systems have been shown to reflect and reinforce bias in areas such as hiring, policing, and credit scoring.

If people have actually been harmed or disadvantaged by AI systems, their aversion may simply be caution rather than paranoia. 'This relates to the broader psychological concept of "learned distrust,"' Jones explained. 'When institutions and systems repeatedly disappoint certain groups, skepticism is not only rational but also protective.'



◆How can we restore trust in AI?
Jones argues that simply telling people to 'trust AI' won't work; trust must be earned. To earn it, he says, AI tools need to be designed for transparency, open to questioning from users, and built with clear accountability.

'We trust things we understand, things we can question, and things that treat us with respect,' Jones said. 'For AI to be accepted, it needs to feel less like a black box and more like a conversation we're invited to be a part of.'
