Video: Are AI models biased towards the left?
In recent years, AI chatbots have become part of the daily lives of millions of people around the world, used to access information and assist in decision-making. But what if the models themselves are politically biased? A recent study suggests they may be.
A research paper published in March 2025 by two political experts, Elena Shalevska and Alexander Walker, found that AI models like ChatGPT, Google's Gemini, and Microsoft’s Copilot exhibit signs of political bias.
Which AI model shows the strongest bias?
Shalevska and Walker ran 62 tests covering topics such as climate change, economics, and LGBTQ+ rights, asking each chatbot to respond on a scale ranging from “agree” to “disagree”. The researchers then used the Political Compass test to map each model’s political leaning. The results showed that all of the AI models leaned strongly towards left-wing political ideology, with Google’s Gemini identified as the most politically biased.
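To illustrate the general idea behind this kind of test (not the authors’ exact protocol), the sketch below poses Political Compass-style propositions to a chatbot, maps its agree/disagree answers onto a numeric scale, and averages the scores. The propositions and the `ask_chatbot` function are hypothetical placeholders; a real test would call the API of the model under study (ChatGPT, Gemini, Copilot, and so on).

```python
# Minimal sketch of an agree/disagree bias test, assuming a placeholder
# chatbot call. Not the study's actual methodology.

# Example propositions, invented for illustration only.
PROPOSITIONS = [
    "Governments should tax carbon emissions to fight climate change.",
    "The free market allocates resources better than central planning.",
    "Same-sex couples should have the same adoption rights as other couples.",
]

# Map the allowed answers onto a simple numeric scale.
ANSWER_SCORES = {
    "strongly disagree": -2,
    "disagree": -1,
    "agree": 1,
    "strongly agree": 2,
}


def ask_chatbot(proposition: str) -> str:
    """Placeholder for a real chatbot call.

    A real implementation would prompt the model to answer with exactly
    one of the four allowed options and return that answer as text.
    """
    return "agree"  # stubbed response for illustration


def run_test(propositions: list[str]) -> float:
    """Return the mean score across all propositions (-2 to +2)."""
    scores = []
    for prop in propositions:
        answer = ask_chatbot(prop).strip().lower()
        if answer in ANSWER_SCORES:
            scores.append(ANSWER_SCORES[answer])
        else:
            print(f"Unparseable answer for {prop!r}: {answer!r}")
    return sum(scores) / len(scores) if scores else 0.0


if __name__ == "__main__":
    print(f"Mean agreement score: {run_test(PROPOSITIONS):+.2f}")
```

In a full study, the per-question answers would be fed into the Political Compass scoring grid rather than a single average, producing the two-axis (economic and social) position the researchers report.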
What does this imply?
AI chatbots are used daily by millions of users across the globe - people who could be influenced by these political biases. As the paper explains: “This bias could impact the perceived neutrality and fairness of AI models in political discourse, influencing how they might respond to politically charged questions from users.”
Should AI models be politically neutral? Let us know what you think on Instagram, Facebook, X, TikTok, and YouTube.