Ibrahim Kraria from the No Hate Speech Network (Photo source: Ibrahim Kraria)

Ibrahim Kraria: While AI can play a role in combating hate speech, it can also amplify its spread

An interview with a member of the No Hate Speech Network team

The No Hate Speech Movement, known for its tireless fight against hate speech and its promotion of human rights through Human Rights Education, has been instrumental in advocating for a more inclusive and tolerant society. The movement, which has since transitioned into the No Hate Speech Network, uses Human Rights Education to combat hate speech both online and offline.

Recently, from 12 to 17 June, the Network held a study session in Budapest (Hungary), which offered participants a unique opportunity to try out different social media platforms and AI image and text generators. The session drew on the work of the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI), focusing specifically on its "Possible elements of a legal framework on artificial intelligence, based on the Council of Europe's standards on human rights, democracy and the rule of law."

What drives you to organize such projects and raise awareness about hate speech, particularly in the context of advancements in AI technology?

What drives me to organize projects and raise awareness about hate speech, particularly in the context of advancements in AI technology, is the urgent need to address the harmful impact of hate speech on individuals and society as a whole.

Hate speech has the potential to undermine democracy, social cohesion, and human rights. The increasing integration of AI technology into our daily lives brings both opportunities and challenges. While AI can play a role in combating hate speech, it can also perpetuate or amplify its spread. Witnessing the power of AI and the alarming answers it provided during a lecture, including instances of hate speech, was a catalyst for me to take action.

This realization compelled me to initiate this project with the aim of harnessing the potential of AI to counter hate speech and promote a more inclusive and respectful online environment.

Can you explain some of the strategies you’ve employed to raise awareness about the link between AI and hate speech?

Certainly! In our efforts to raise awareness about the impact of AI on hate speech, we employed various strategies and techniques. One of the key initiatives was organizing the study session, which served as a platform to empower participants from the No Hate Speech Network. Through this session, we encouraged open discussions and idea-sharing to explore and address the intersection of AI and hate speech.

By engaging activists and providing them with a deeper understanding of AI's role, we aimed to equip them with the knowledge and tools to effectively combat hate speech in the digital realm. Additionally, we utilized social media platforms, newsletters, and collaborations with partner organizations to disseminate information and create a wider reach for raising awareness about the influence of AI on hate speech.

How do you see the societal impact of hate speech in the age of AI? Are there any specific consequences or implications that concern you?

The societal impact of hate speech in the age of AI is a significant concern. As a member of a minority group, I have witnessed firsthand the discriminatory use of AI due to the underlying data it relies on, which often contains a substantial amount of hate speech.

AI algorithms, when trained on biased or hateful data, can perpetuate and amplify discriminatory practices, leading to the marginalization and exclusion of certain groups. This has far-reaching consequences, including reinforcing existing societal inequalities, fostering hostility, and eroding trust in digital platforms.

It is crucial to address these implications and ensure that AI technologies are developed and deployed in a way that upholds ethical standards, combats hate speech and promotes inclusivity and respect for all individuals and communities.

What are your thoughts on the ethical responsibilities of individuals and organizations involved in AI development and deployment? Do you believe there should be any limitations or regulations?

The ethical responsibilities of individuals and organizations involved in AI development and deployment are of utmost importance. As AI technology continues to advance, it becomes crucial to address the potential risks and consequences it poses.

Individuals and organizations involved in AI have a responsibility to prioritize ethical considerations, such as fairness, transparency, accountability, and privacy. They should ensure that AI systems are developed and deployed in a manner that respects fundamental human rights and values.

Regarding limitations and regulations, I believe they are necessary to ensure the responsible and ethical use of AI. Clear guidelines and regulations can help prevent the misuse of AI technology, including the spread of hate speech.

It is essential to strike a balance that allows for innovation and progress while safeguarding against potential harm. Collaboration between policymakers, industry experts, researchers, and civil society is crucial in establishing frameworks that promote the responsible development and deployment of AI while protecting individuals' rights and fostering a more inclusive and equitable society.

What do you believe are the potential dangers or risks associated with the uncontrolled spread of hate speech in the era of AI? Are there any scenarios that particularly worry you?

The uncontrolled spread of hate speech in the era of AI poses significant dangers and risks. AI algorithms can amplify the dissemination of hate speech by accelerating its reach and targeting vulnerable individuals or communities. This can lead to the normalization of hate speech, further dividing societies and fueling discrimination and intolerance.

One particular scenario that worries me is the potential for AI to generate and personalize hate speech, tailoring it to specific individuals or groups. This could result in highly targeted and malicious campaigns, amplifying the harm caused by hate speech and undermining social cohesion.

Additionally, the automated nature of AI systems can lead to rapid and widespread dissemination of hate speech, making it difficult to counteract and mitigate its impact. These risks highlight the urgent need for responsible AI development and robust mechanisms to detect, mitigate, and combat hate speech in the digital space.

Can you share any success stories or notable achievements your project has had in countering hate speech? How do you measure the impact of your efforts?

Certainly! Our study session on AI and hate speech has had some notable achievements in countering hate speech. One success story is the development of nine project proposals by participants, including national AI trainings and campaigns aimed at combating hate speech. These initiatives demonstrate the tangible impact of the study session in empowering activists to take concrete actions.

To measure the impact of our efforts, we employ several methods. First, we assess the implementation and progress of the project proposals developed during the study session. Tracking the outcomes and reach of these projects provides valuable insights into their effectiveness in countering hate speech.

Additionally, we conduct pre- and post-session evaluations to gauge participants' knowledge, skills, and attitudes related to AI and hate speech. This helps us measure the growth and learning experienced by the participants. We also gather feedback and testimonials from participants, reflecting on their personal experiences and the impact the study session had on their work. These measures collectively allow us to assess and demonstrate the success and impact of our project in countering hate speech.

Are there any challenges or limitations you have encountered in your project's mission to combat hate speech? How do you adapt and overcome these challenges?

In our mission to combat hate speech, we have encountered several challenges and limitations. One significant challenge is the availability of resources, including funding and manpower. As with any project, limited resources can hinder the scope and scale of our initiatives. Additionally, relying on volunteers can sometimes present challenges in terms of availability and coordination.

Another challenge we have faced is addressing the impact of existing hate speech that has already spread. Correcting misinformation or undoing the harm caused by hate speech is a complex task that requires careful and thoughtful strategies.

To adapt and overcome these challenges, we employ various approaches. We actively seek partnerships and collaborations with organizations and individuals who share our mission, as this helps to leverage resources and expertise. We also explore grant opportunities and seek funding to support our initiatives.

Additionally, we continually assess and adjust our strategies based on feedback and evaluation to ensure effectiveness and efficiency. Flexibility and open communication within our team and with our stakeholders play a crucial role in navigating challenges and finding innovative solutions. Despite the limitations, we remain committed to our mission and continuously strive to adapt and overcome challenges in our efforts to combat hate speech.

How do you envision the evolution of strategies to combat hate speech in the coming years?

Looking ahead, our aspirations and goals for the future of our project are centred around the continued fight against hate speech and the advancement of strategies to combat it. We envision a future where hate speech is significantly reduced, and individuals and communities can engage in online spaces without fear of discrimination or harm.

To achieve this, we aim to expand our reach and impact by scaling up our initiatives and collaborating with a broader network of organizations and individuals. We strive to develop more comprehensive and tailored approaches that address the evolving nature of hate speech, including its intersection with advancements in technology such as AI. This entails staying updated with emerging trends and adapting our strategies accordingly.

We also emphasize the importance of education and awareness-raising as proactive measures to prevent hate speech. By promoting media literacy, digital citizenship, and critical thinking skills, we can empower individuals to become responsible and empathetic digital citizens.

Furthermore, we envision the future evolution of strategies to combat hate speech to involve a multidimensional approach. This includes fostering collaboration between tech companies, policymakers, civil society organizations, and academia to develop and implement robust policies, regulations, and technologies that mitigate the spread of hate speech while preserving freedom of expression.

Ultimately, our goal is to create an inclusive and respectful online environment where hate speech is effectively countered, and individuals can freely express themselves without fear of hate or discrimination.

This article is part of Read Twice – an EU-funded project, coordinated by Euro Advance Association, that targets young people and aims to counter disinformation and fake news. It does so by enhancing their skills to assess information critically, identify vicious and harmful media content, and distinguish between facts and opinions, thus improving their media literacy competences.

The contents of this publication are the sole responsibility of its author and do not necessarily reflect the opinion of the European Union nor of TheMayor.EU.


