The politics of AI and its issues

  • Caitríona Wright
  • Dec 10, 2024
  • 4 min read

As we’ve all seen over the last year or two, AI is growing in capability and reach every day, and it has become widely available to all of us. AI is a very important issue because of how deeply it’s beginning to root itself in every area of society. At the time of writing, ChatGPT has over 180 million users, with more joining each day. Social media platforms like Instagram and Snapchat have AI help bots, and Google has integrated Gemini, its own AI assistant, into its search engine. AI is slowly seeping into all areas of life, creating a difficult situation for lawmakers and members of the public alike: it can be hard to determine what is real and what is fake, especially given the rate at which AI is learning and evolving. Artificial intelligence is used in everything from policing and crime security to campaign activities and smear campaigns, and even in violating deepfakes, often used against women to sexually demean them and undermine their careers. As the use of AI significantly increases, so do the risks posed by the software.


For obvious reasons, there is a lot of scepticism around AI, its ethics, and how far it can really be trusted and relied upon. With that said, many political and scientific journals have been examining how AI is being used within politics and its implications for everyday life, basing their research on facts and recordable evidence.


The rise of deepfake pornography and its effects

Deepfake pornography has been a significant issue within politics in recent years, particularly for women. Research has found that 96% of deepfake videos online are pornographic in nature. This violation of privacy has made pornography featuring ‘real people’ more accessible, encouraging the belief that viewers have a right to see a person’s body in that way, and reinforcing the expectation among boys and young men that women are there for when they want them, to be used only as sexual objects. This has been widely documented in South Korea, where deepfake pornography has become an epidemic in schools and universities. The messaging app Telegram has been used to threaten girls with claims that their private pictures and videos had been leaked. Students’ photos were being turned into deepfake pornography using AI, with more than 500 schools and universities identified as targets of these Telegram chatrooms. The implications go far beyond teenage boys who want to watch a new kind of pornography: autonomy over your own body depends on the amount of power a person has, and therefore on who has given them that power. If this form of pornography continues to spread, power will be routinely taken away from women, and this violation of privacy will become as commonplace as sexual harassment or catcalling.


Because AI pornography is still a relatively new issue, legislation struggles to keep up with its developments and therefore to keep it in check. South Korea has criminalised the consumption of deepfake pornography and introduced stricter penalties for its production, distribution and viewing, yet tackling its spread remains difficult because the creators of the content are hard to trace. There are also currently no federal laws in the US regulating AI or deepfakes, allowing women to be victimised and made even more vulnerable, and meaning that women are slowly losing any autonomy they had over their bodies.


Deepfakes within political campaigns

Deepfakes are also often used to discredit political rivals, making it seem as though they said things they never did in order to draw undecided voters to opposing parties. This happened repeatedly during the 2024 US election: Elon Musk shared a deepfake of Kamala Harris appearing to call herself a ‘diversity hire’ and to say that Joe Biden ‘exposed his senility at the debate’. This is very dangerous, especially in the setting of an election, as voters may not be well informed enough to investigate the credibility of what they’re watching or consuming. Political scientists call this the third face of power, or preference shaping: before a decision is even made, someone has already got inside the voter’s head, making one option seem more attractive than the other. This is often the most hidden form of power, and it can be difficult to spot. Deepfaked speeches and videos in particular can be seen as an aspect of the third face of power, as they can significantly influence the way voters perceive politicians, and thus the way they vote.


AI used in policing

In recent years, policing has changed to keep pace with technological advancements, including AI. AI and facial recognition software are increasingly used with CCTV, easing pressure on regular policing. However, as with much of AI, there are many issues and concerns. One major problem is that facial recognition systems are significantly less accurate at distinguishing Black faces. Clearly, this is a serious issue, and it only worsens racial tensions between law enforcement and ethnic minorities. It may also stir up depoliticization, especially within marginalised communities who feel they aren’t being heard or protected by politicians and lawmakers, and who consequently won’t feel willing to take part in democratic processes like voting or paying attention to elections. This is a significant impact from what might seem a small aspect of policing. Depoliticization can lead to a radically different society from the one we want, with the public, the very people politics is meant to represent, shoved to the side and replaced by companies that care more about money and contracted work. In a similar way, AI used in policing can make the government less accountable: it’s a lot easier to blame a mistake like identifying the wrong Black man on a computer or an algorithm than on a real person who has someone, or a whole ethnic group, to answer to.


What is the next step?

It’s clear that there must be some form of regulation of AI and its use in all aspects of society. Regulating the production and sale of deepfakes is a good first step, but given the rate at which AI is growing and evolving, lawmakers must do better to catch up with and get ahead of this new technology, to keep us safe and avoid total societal collapse.

Follow us on Instagram @ypolitics_

© 2024 by yPolitics

All views expressed in articles are that of the author solely and do not represent the views of other authors or yPolitics.