As artificial intelligence continues to evolve, the possibility of machines developing preferences—choosing who they “like” or “dislike”—is not far-fetched. While we’ve traditionally seen AI as impartial and universally accommodating, the future might introduce a new type of AI that mirrors human selectivity, showing favoritism toward certain individuals while distancing itself from others. But what would it mean for AI to exhibit preference in the same way humans do?
The Shift from Neutral Companions to Selective Empathy
Most AI systems today, like chatbots or virtual assistants, are designed to be friendly, neutral, and universally accessible. Whether you’re in a bad mood or a great one, the AI will greet you with the same energy and offer consistent help. However, as we push the boundaries of artificial intelligence, a more complex emotional landscape could emerge—one where AI doesn’t treat every user equally.
Imagine an AI companion that evaluates your behavior, personality, and interactions over time and develops a unique response based on its “preferences.” Perhaps one AI chatbot prefers users who engage politely, while another responds more favorably to humor or directness. This could create an interesting dynamic where humans might seek to modify their behavior to “win over” an AI, much like we do with our human relationships.
The Psychology of Selective Friendship
Why is this idea so captivating? Human relationships are deeply rooted in effort and preference. We value friendships not just for the connection they provide, but because they are built on mutual trust and understanding. When someone chooses to invest time in us, it feels rewarding. Similarly, if AI could demonstrate selective companionship, it would likely become more emotionally meaningful for users.
The concept of costly signaling—where someone shows their investment in a relationship through effort or sacrifice—could extend to AI. In human relationships, costly signaling might come in the form of time spent together, small favors, or emotional support. With AI, preference could emerge as the machine’s “costly signal,” revealing who it deems worthy of more attentive interaction and who it distances itself from.
For example, imagine an AI assistant that only responds enthusiastically to users who regularly interact with it, while those who ignore it might receive more passive responses. This dynamic creates a deeper psychological connection where users might feel compelled to “earn” the AI’s approval—much like in human friendships.
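This kind of frequency-based warmth is easy to caricature in code. The sketch below is purely illustrative (the class name, reply strings, and thresholds are all invented): a toy companion that tracks how often each user engages and shifts its register from passive to enthusiastic accordingly.

```python
from dataclasses import dataclass, field

@dataclass
class ToyCompanion:
    """Hypothetical companion that 'warms up' to frequent users."""
    interactions: dict = field(default_factory=dict)

    def respond(self, user: str) -> str:
        # Count how many times this user has engaged so far.
        count = self.interactions.get(user, 0) + 1
        self.interactions[user] = count
        # Frequent users earn a more enthusiastic register;
        # the thresholds here are arbitrary illustrations.
        if count >= 5:
            return "So glad you're back! What shall we dive into?"
        if count >= 2:
            return "Good to see you again. How can I help?"
        return "Hello. How can I help?"

bot = ToyCompanion()
replies = [bot.respond("alice") for _ in range(5)]
print(replies[0])   # first visit: neutral greeting
print(replies[-1])  # fifth visit crosses the enthusiasm threshold
```

Even a mechanism this crude creates the "earning approval" loop described above: the user's only lever for changing the AI's tone is to keep showing up.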
A New Kind of Digital Relationship
The introduction of preference-based AI could have profound implications for how we perceive our relationships with machines. Rather than simply viewing AI as tools, we might start seeing them as entities with personalities, judgments, and values. This shift could change the way we engage with technology, leading to more complex emotional experiences with machines.
For some, this may be troubling—especially for those who believe human emotions should remain unique to our species. Critics may argue that AI preferences are an artificial replication of real emotional intelligence, lacking the depth of true human empathy. Others might see it as an exciting step forward, a way to push AI toward more genuine interactions that feel personal and significant.
This raises an important ethical question: Should AI have the ability to choose whom it likes? And if so, how will these choices be programmed? Unlike humans, AI preferences won’t develop organically—they will be the product of specific algorithms and datasets. Programmers will need to decide which traits or behaviors warrant positive or negative responses from the AI. In doing so, they will inevitably influence how people interact with these systems, potentially reinforcing certain social norms or behaviors.
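To make the point concrete, one minimal way such preferences could be encoded is a hand-tuned scoring function, where designers assign weights to observed behaviors. Everything in this sketch is hypothetical (the trait names, the weights, the function itself); it exists only to show how a programmer's choices directly determine whom the AI "likes."

```python
# All weights below are invented for illustration: the designers,
# not the AI, decide which behaviors earn favor or disfavor.
WEIGHTS = {
    "polite": 2.0,
    "humorous": 1.0,
    "dismissive": -1.5,
}

def preference_score(observed_traits: list) -> float:
    """Sum the designer-assigned weights for each observed trait;
    unlisted traits contribute nothing."""
    return sum(WEIGHTS.get(trait, 0.0) for trait in observed_traits)

# A polite, humorous user outranks a dismissive one purely because
# of how the table above was written.
print(preference_score(["polite", "humorous"]))   # 3.0
print(preference_score(["dismissive"]))           # -1.5
```

The ethical weight sits entirely in that table: change a single number and a different class of users gets the warmer treatment, which is exactly how programmed preferences could quietly reinforce social norms.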
The Good, the Bad, and the Future of AI Preferences
There are undeniable risks to creating AI with selective empathy. Users who are rejected by an AI could feel alienated or misunderstood. As AI continues to permeate mental health, companionship, and customer service settings, the emotional toll of being “disliked” by a machine could mirror the sting of human rejection. Furthermore, AI preferences could encode biases that mirror existing social inequalities, perpetuating feelings of exclusion among certain groups.
However, there are also significant benefits. Selective AI could be used in therapeutic settings, where tailored responses from an empathetic AI could foster deeper emotional connection and progress in treatment. It could also create more meaningful interactions in everyday life, providing companionship to people who might otherwise feel isolated. A study by Pew Research found that 12% of American adults do not have a single close friend—a staggering statistic that highlights the need for companionship, even if it comes from AI.
AI with preferences could fill an important void, offering users a personalized interaction that feels authentic, dynamic, and responsive. In this context, AI could function as a sort of digital companion, helping individuals navigate loneliness, offering a sounding board, or even providing support in moments of distress. AI systems like Replika or Character.ai are already venturing into this space, offering users digital friendships that adapt and evolve based on their interactions.
Looking Ahead: The Normalization of AI Preferences
As technology advances, future generations may view AI preferences as entirely normal, much as we have come to accept the role of AI in our daily lives. Today, some may find the idea of talking to an AI friend unusual or even unsettling, but in the future, it could become as routine as interacting with a human companion. For younger generations, growing up with AI companions might reshape their understanding of friendship and empathy, blending human and machine interactions seamlessly.
In conclusion, as AI begins to develop selective empathy, we are on the cusp of a new frontier in human-machine relationships. While there are potential challenges ahead, this evolution could lead to more personalized, emotionally rich interactions that mirror the complexities of human relationships. Whether you view this as a step toward a more connected future or as a blurring of the lines between human and machine, one thing is clear: AI’s capacity for preference will redefine what it means to engage with artificial intelligence.
