Even if the robots don't rise up, they already have the potential to do harm. Source: Depositphotos

The dark side of Artificial Intelligence

Concerns over AI's racial bias and stereotyping behaviour

In the midst of the rapid advancement of artificial intelligence (AI), a wave of concerns has emerged regarding its potential negative impacts. From racial biases to stereotypical behaviour, AI's automatic decision-making abilities are drawing criticism from various sectors of society.

One of the major issues surrounding AI is its inclination to perpetuate racism. AI systems trained on biased datasets have been found to exhibit discriminatory behaviour: algorithms designed to make autonomous decisions inadvertently reproduce societal biases, leading to unfair outcomes such as racial profiling and discrimination in fields including law enforcement and hiring.
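The mechanism is simple enough to sketch in a few lines of code. The example below uses entirely hypothetical data and a deliberately trivial "model" that scores job applicants by each group's historical hiring rate; it is an illustration of how biased training data produces biased decisions, not a depiction of any real system.

```python
# Illustrative sketch with hypothetical data: a trivial "model" that
# scores applicants by averaging past hiring decisions for each group.
# Because the historical record is biased, the learned scores are too.

# Hypothetical historical decisions: 1 = hired, 0 = rejected
history = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def train(history):
    """Learn a per-group hiring rate from past decisions."""
    totals, counts = {}, {}
    for group, outcome in history:
        totals[group] = totals.get(group, 0) + outcome
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

scores = train(history)
# The "model" now rates group_a applicants higher purely because past
# (biased) decisions favoured them, not because of any real merit.
print(scores)  # {'group_a': 0.75, 'group_b': 0.25}
```

Nothing in the code mentions race or merit, yet the output encodes the prejudice of the historical record — which is exactly how much larger machine-learning systems end up discriminating.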

Furthermore, the stereotypical nature of AI is also a matter of concern. Some AI systems tend to reinforce societal stereotypes, leading to the perpetuation of harmful biases and generalizations. This can have far-reaching consequences, affecting the lives of individuals and further exacerbating societal inequalities.

Here’s an example of how AI can perpetuate stereotypes. Discord, a communication platform, enables users to create communities and engage in real-time voice, video and text conversations. It now offers an AI image-generation feature that can produce any picture you ask for, for only 4 euros a month.

The problem is that the pictures generated are highly stereotypical and discriminatory. For instance, including the words 'Irish' or 'Scottish' in a portrait prompt would consistently produce a ginger-haired person. Even when explicitly asked for other hair colours, the bot would still generate ginger-looking people.

Even worse, when prompted with words relating to the Islamic world, such as 'jihad' or 'Muslim', or simply with traditional Muslim names, the machine generates stereotypical and discriminatory images.

The emotional deficit of AI systems

Another glaring limitation of AI is its inability to imitate or recognize human emotions accurately. This deficit poses challenges in fields such as mental health care and customer service, where emotional intelligence plays a crucial role. Despite advancements in natural language processing and sentiment analysis, AI still struggles to comprehend and respond appropriately to human emotions, leaving a significant gap in its capabilities.

Democracy and accountability issues of AI

Governments worldwide are increasingly relying on automatic decision-making processes driven by AI. While this promises efficiency and streamlining of public services, critics argue that it could induce laziness and make people unproductive. As governments hand over decision-making to AI, there is a fear that human accountability and critical thinking may diminish, leading to potential misuse of power and erosion of democratic values.

AI can also be used for propaganda in many ways. One of the clearest examples is fake news and the radicalization of young people into far-right extremist movements around the world.

The weaponization of AI-powered systems also poses a significant risk. The same technology that has the potential to revolutionize industries and enhance human lives can be exploited by malicious actors to spread disinformation, manipulate public opinion and incite violence. Safeguarding against the weaponization of AI and ensuring its responsible use are critical challenges for society to address collectively.

Can we regulate and control AI?

AI is rapidly entering our everyday lives; it can even unlock our phones. This matters because it means the development of AI cannot be contained: its proliferation is bound to bleed across the civilian, business and military spheres. AI is thus more than a technology – it is an enabler.

It is extremely difficult to avoid these risks altogether, and there is no way to stop the technology from evolving. The only recourse left is to find the means to keep it under control.

The European Union has taken notice of the potential dangers of unchecked AI implementation. The European Commission (EC) unveiled a legal framework for AI, the Artificial Intelligence Act (AI Act), the first of its kind in the world.

The AI Act aims to set out rules that people can follow when using AI-based technology. The regulations seek to ensure transparency, accountability and ethical AI practices while safeguarding individuals' privacy and data rights, striking a balance between innovation and protecting the rights and well-being of individuals in an increasingly digital world.

In conclusion, as AI continues to advance, it is crucial to address the structural deficiencies it inherits from its human inventors, such as racism, stereotyping and concerns around automatic decision-making.

The younger generation's apprehension about their future job prospects and the ethical implications of AI's limitations on imitating human emotions also need to be acknowledged. With EU regulations and campaigns against autonomous weapons gaining momentum, it is evident that global society is grappling with the ethical and practical challenges posed by the rise of AI.

This article is part of Read Twice – an EU-funded project, coordinated by Euro Advance Association, that targets young people and aims to counter disinformation and fake news by enhancing their skills to critically assess information, identify vicious and harmful media content and distinguish between facts and opinions, thus improving their media literacy competences.

The contents of this publication are the sole responsibility of its author and do not necessarily reflect the opinion of the European Union nor of TheMayor.EU.


