Can AI Understand Bias in Politics?
“Illegals.” “Cultural invasion.” “Law and order.” These are not empty words; they are instruments of politics, rhetorical knives sharp enough to divide, rally, or distract. Politicians do not choose such words accidentally. The words signal ideology, intent, and bias. But can computers, with no stake in elections and no emotions, detect what we cannot see (or do not want to see)? Artificial intelligence (AI) is increasingly used to identify bias in political speech. In democracies where misinformation and polarisation are the norm, this technology promises a new filter, one that can cut through not only what is said but how it is said. The promise is to help us listen better, not speak louder.
What Is “Text-as-Data”?
Text-as-data describes the process of converting language, such as speeches, tweets, and interviews, into structured data that machines can interpret. Algorithms scan political text for patterns: recurring subjects (“security,” “freedom”), recognisable emotional tones (fear, hope, anger), and characteristic framing choices (who is cast as the threat?). This lets researchers detect subtle signals in messages that human readers overlook. A landmark example is the study of congressional speech in the United States by Gentzkow et al. (2019), which shows that partisan language diverged sharply after 1994, when Republicans adopted a new rhetorical strategy as Newt Gingrich promoted the “Contract with America”. With AI-based methods, the authors could quantify how terms such as “death tax” (versus “estate tax”) mark dividing lines even when both sides discuss the same subject. Such tools offer new ways to understand political polarisation, not by monitoring voting patterns but by digging into language itself.
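The basic text-as-data step, turning raw speech into counts of politically loaded phrases, can be sketched in a few lines. This is a toy illustration only, not Gentzkow et al.'s actual method; the speeches and phrase list below are invented for the example.

```python
import re

def term_counts(text, phrases):
    """Count case-insensitive occurrences of each phrase in a speech."""
    text = text.lower()
    return {p: len(re.findall(re.escape(p), text)) for p in phrases}

# Invented example speeches, for illustration only.
speech_a = "We must repeal the death tax. The death tax hurts family farms."
speech_b = "The estate tax applies only to the wealthiest estates."

partisan_phrases = ["death tax", "estate tax"]
print(term_counts(speech_a, partisan_phrases))  # {'death tax': 2, 'estate tax': 0}
print(term_counts(speech_b, partisan_phrases))  # {'death tax': 0, 'estate tax': 1}
```

Counting which side of a phrase pair a speaker reaches for, across thousands of speeches, is the intuition behind measuring partisan divergence in language.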
Spotting Bias Isn’t Just for Fact-Checkers
Prejudice in political language goes far beyond misinformation. It lives in framing, tone, and omission. Words such as “anchor baby” or “radical Islamist” carry implicit bias without being factually incorrect. The repeated use of “we” versus “they” reinforces in-group identification and out-group fear. AI systems using natural language processing (NLP) can identify these framing effects with precision. They examine tone (is it hostile or optimistic?), emotion (is fear being invoked?), and repetition (how often is a group maligned?) (Orellana and Bisgin, 2023; Yu, 2023). According to an article in PsyPost, researchers built a model that reveals hidden ideological structures in online news and shows how even neutral-looking wording becomes ideologically charged in certain contexts and at certain frequencies (Dolan, 2025). AI here is not just a word-counting program; it evaluates the weight of words. That is where political bias resides.
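A crude version of the “we” versus “they” signal described above can be computed from pronoun counts alone. This is a minimal sketch assuming a simple word-boundary tokeniser; real framing models are far more sophisticated, but the in-group/out-group ratio is a genuine, widely used starting point.

```python
import re
from collections import Counter

IN_GROUP = {"we", "us", "our", "ours"}
OUT_GROUP = {"they", "them", "their", "theirs"}

def group_framing(text):
    """Count in-group vs out-group pronouns as a rough framing signal."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    return {
        "in_group": sum(counts[w] for w in IN_GROUP),
        "out_group": sum(counts[w] for w in OUT_GROUP),
    }

print(group_framing("We protect our values, but they put their interests first."))
# {'in_group': 2, 'out_group': 2}
```

A sentence that balances both sets is neutral on this measure; a speech that piles up out-group pronouns around threat vocabulary is exactly the kind of pattern the framing studies cited above look for.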
Case Study: India vs. Pakistan
Artificial intelligence offers an instructive look at the contrasting political languages of India and Pakistan, two countries with a shared history but different political narratives. Jafri et al. (2024) studied more than 11,000 Hindi tweets from recent Indian elections; their CHUNAV corpus revealed a worrying prevalence of hate speech, especially speech directed at community groups during the campaign season. The corpus shows how political discourse tends toward aggressive, communal exploitation, targeting religious minorities above all. AI models trained on this data could identify tone shifts, classify hate targets (individuals, organisations, or groups), and measure inflammatory trends over time. In Pakistan, linguistic analysis of Imran Khan's speeches reveals a different tactic: instead of explicit attacks, Khan often uses Neuro-Linguistic Programming (NLP) strategies, including presupposition, cause-and-effect argument structures, and mind reading, to frame his politics subliminally (Aneeza, 2024). These devices help build a persuasive, emotionally attractive narrative centred on national grievances, Islamic identity, and collective victimhood, especially in his UNGA addresses. An AI model comparing the rhetoric of the two countries would most likely find more direct communal identification in Indian electoral discourse, and more psychological induction and religious reference in Pakistani speeches. Both styles manipulate identity, though at different levels. AI helps make such rhetorical fingerprints visible.
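Measuring “inflammatory trends over time”, as described above, amounts to aggregating classifier labels by time window. The sketch below uses hand-made toy labels; a real pipeline would feed tweets through a trained hate-speech classifier (for instance one built on the CHUNAV corpus) before aggregating.

```python
from collections import defaultdict

# Hypothetical (week, is_hate) labels for a stream of tweets;
# in practice is_hate would come from a trained classifier.
labelled = [(1, 0), (1, 1), (2, 1), (2, 1), (3, 0), (3, 1), (3, 1), (3, 0)]

def hate_rate_by_week(rows):
    """Fraction of tweets labelled as hate speech in each week."""
    totals, hits = defaultdict(int), defaultdict(int)
    for week, is_hate in rows:
        totals[week] += 1
        hits[week] += is_hate
    return {week: hits[week] / totals[week] for week in sorted(totals)}

print(hate_rate_by_week(labelled))  # {1: 0.5, 2: 1.0, 3: 0.5}
```

A sustained rise in this weekly rate as polling day approaches is the kind of campaign-season trend the CHUNAV study documents.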
Promise and Pitfalls
The promise of AI in the study of political speech lies in its scale, objectivity, and speed. AI can process hundreds of speeches in minutes, flag suspicious patterns, and provide neutral statistical analysis. Projects such as Hatebase apply AI to track hate speech worldwide, covering some 2,300 terms in more than 90 languages (Boyd, 2022). This makes it possible to spot patterns of radicalisation or early signs of political crises. It is not foolproof, however. AI can inherit the biases of its training data: if trained predominantly on Western media, it may misread local cultural references (Rizve, 2024; Baum and Villasenor, 2023). Irony, sarcasm, and metaphor also regularly trip up machines, so a politician's intended meaning may not match the AI's reading. Algorithmic opacity is a further complication. Deep-learning models are commonly described as black boxes: they can tell you what they found, but not necessarily how (Hassija et al., 2024). This leaves accountability gaps in sensitive settings such as elections. Finally, AI should be supervised by humans. One report stresses that AI-generated content should pass through a human editor who understands the languages and the context, followed by peer review to address cultural sensitivities (Rizve, 2024). A machine, for instance, may flag the word “Zionist” as hate speech in one post but not another, depending on subtleties the algorithms cannot always detect.
Why It Matters
Detecting political bias matters for a functioning democracy. When election coverage is laced with manipulation, the electorate votes on perception rather than fact. AI tools can unmask this manipulation early, whether it is a surge in polarising language around election time or the repeated demonisation of minority groups. Consider a political party gradually escalating its use of fear-laden language over time without violating any statute. AI could identify such a trajectory and alert watchdog organisations or the media. Likewise, AI-informed tools help policymakers determine whether fact or rhetoric is driving public opinion. There are, however, dangers of abuse. If AI is used to drown out opposition or to over-censor speech, it can become counterproductive, particularly in authoritarian settings. It must therefore be deployed alongside democratic standards, transparency, and room for critical scrutiny.
Conclusion
AI is not a political seer, nor a digital jurist. It cannot and must not replace human judgment of political speech. But it can expose patterns and amplify voices we would otherwise overlook, and it can encourage us to pause and consider how our language shapes our worlds. In an age of noisy politics, AI can help us listen smarter.
References
Aneeza, D.A.H.B.S., 2024. Neuro-Linguistic Programming Analysis of Imran Khan’s Political Discourse: A Study of his Speeches. Journal of Applied Linguistics and TESOL (JALT), 7(4), pp.199-221.
Baum, J. and Villasenor, J., 2023. The politics of AI: ChatGPT and political bias. [online] Brookings. Available at: https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/.
Boyd, D., 2022. Research summary document: hatebase-AI for hate speech monitoring.
Dolan, E.W., 2025. Groundbreaking AI model uncovers hidden patterns of political bias in online news. [online] PsyPost – Psychology News. Available at: https://www.psypost.org/groundbreaking-ai-model-uncovers-hidden-patterns-of-political-bias-in-online-news/ [Accessed 25 Jun. 2025].
Gentzkow, M., Shapiro, J.M. and Taddy, M., 2019. Measuring group differences in high‐dimensional choices: method and application to congressional speech. Econometrica, 87(4), pp.1307-1340.
Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M. and Hussain, A., 2024. Interpreting black-box models: a review on explainable artificial intelligence. Cognitive Computation, 16(1), pp.45-74.
Jafri, F.A., Rauniyar, K., Thapa, S., Siddiqui, M.A., Khushi, M. and Naseem, U., 2024. CHUNAV: Analyzing Hindi hate speech and targeted groups in Indian election discourse. ACM Transactions on Asian and Low-Resource Language Information Processing.
Orellana, S. and Bisgin, H., 2023. Using natural language processing to analyze political party manifestos from New Zealand. Information, 14(3), p.152.
Rizve, S., 2024. AI and India’s General Elections. [online] thediplomat.com. Available at: https://thediplomat.com/2024/04/ai-and-indias-general-elections/.
Yu, Q., 2023. Towards a more in-depth detection of political framing.

