Every year on February 12, the world pauses to observe the International Day for the Prevention of Violent Extremism as and when Conducive to Terrorism — commonly known as International PVE Day. Established by the United Nations General Assembly in Resolution 77/243, the date carries a weight that stretches far beyond ceremony: it is a global call to action and a reminder that violent extremism does not arise in a vacuum. And in 2026, the digital battlefield where much of that extremism takes root — social media — is also the place where some of the most creative resistance is being born.
This year marks the fourth commemoration of International PVE Day. Since its first observance in 2023, the landscape has shifted dramatically. Artificial intelligence now drives both the spread and the suppression of harmful content. Youth-led counter-narrative campaigns have moved from niche experiments to mainstream strategy. Regulatory frameworks like the EU Digital Services Act and the EU Terrorist Content Online Regulation have begun reshaping how platforms operate across borders.
But the core question remains: Can social media — the very tool that extremists exploit to recruit, radicalize, and mobilize — be turned into a force for peace?
This guide explores that question from every angle. It draws on the latest research, real-world case studies, and insights from global institutions to paint a full picture of where we stand in 2026 — and where we need to go.
What Is International PVE Day and Why Does It Matter in 2026?
The International Day for the Prevention of Violent Extremism was established to strengthen the international community’s commitment to addressing the root conditions that lead to terrorism. The United Nations Office of Counter-Terrorism (UNOCT) has described violent extremism as “an affront to the purposes and principles of the United Nations,” one that “undermines peace and security, human rights and sustainable development.”
The date — February 12 — holds special meaning. It was on February 12, 2016, that the UN General Assembly formally debated the Secretary-General’s Plan of Action to Prevent Violent Extremism, a landmark document containing more than 70 recommendations for member states and the UN system. That plan called for a comprehensive approach — one that went beyond security-based counter-terrorism measures to address the underlying grievances, injustices, and broken systems that push individuals toward radicalization.
In 2026, PVE Day arrives at a critical juncture. Several developments have reshaped the conversation:
| Development | Impact |
|---|---|
| GIFCT expansion to 39 member companies | Broader cross-platform cooperation on terrorist content removal |
| EU Digital Services Act full enforcement | Platforms face fines of up to 6% of global turnover for non-compliance |
| Rise of AI-generated extremist propaganda | Deepfakes and synthetic media create new detection challenges |
| Youth radicalization accelerating online | Cases appearing at younger ages and within shorter timeframes |
| Christchurch Call Foundation launch | Transition to an independent NGO funded by tech companies |
The resolution establishing PVE Day emphasized the role of civil society, academia, religious leaders, and the media in countering terrorism. Importantly, it reaffirmed that terrorism and violent extremism “cannot and should not be associated with any religion, nationality, civilization or ethnic group.” This principle remains central to every credible prevention effort.
How Violent Extremists Exploit Social Media Platforms for Recruitment
Before we can understand how social media counters extremism, we must first understand how extremists use it. The mechanisms are well-documented, and they have only grown more sophisticated.
According to the Observer Research Foundation (ORF), extremist propaganda on social media operates on a foundation of emotional and psychological manipulation. Extremist narratives tap into deep-seated grievances. They cultivate anger and resentment. They frame certain groups or institutions as oppressors. Over time, repeated exposure to such content desensitizes individuals to violence and normalizes it as a legitimate response.
The ORF brief from March 2025 notes that as of 2024, there were roughly 5.35 billion internet users worldwide, each generating enormous volumes of data. Facebook alone produced approximately 4,000 terabytes of data daily. Within that ocean of information, extremist content finds its audience with alarming precision.
Different platforms serve different functions for extremist groups. Facebook acts as a decentralized hub for sharing information. X (formerly Twitter) allows for rapid communication with global audiences. YouTube remains the preferred platform for video propaganda tailored to specific cultural and linguistic audiences. Encrypted messaging applications like Telegram serve as recruitment pipelines once initial contact has been made on more public platforms.
The U.S. Government Accountability Office (GAO) has documented how domestic violent extremists use social media and gaming platforms to reach wide audiences, insert extremist ideas into the mainstream, and radicalize, recruit, and mobilize supporters. Their 2024 report recommended that both the FBI and DHS develop formal strategies for sharing threat-related information with social media and gaming companies — an acknowledgment that the government’s approach had been ad hoc rather than systematic.
Generative AI has added a new dimension to this threat. The RAND Corporation’s 2025 testimony to the UK Home Affairs Committee described how AI, large language models, and deepfakes have “transformed the extremist landscape.” Extremists can now use AI-powered algorithms to identify individuals sympathetic to their ideology. Natural language processing tools generate content that appears authentic. And synthetic media — including AI-generated news anchors delivering ISIS propaganda — makes detection far more difficult.
The GIFCT’s 2025 Annual Member Forum highlighted an especially troubling trend: the radicalization of young people is accelerating, with cases appearing at increasingly younger ages and within shorter timeframes compared to previous waves. Some individuals are radicalized in mere weeks rather than months or years. Subcultures fixated on extreme violence and gore — often detached from any coherent ideology — are targeting youth through “gamified” aesthetics, creating what experts call a “post-ideological” fascination with destruction.
How the Global Internet Forum to Counter Terrorism Removes Extremist Content Online
The Global Internet Forum to Counter Terrorism (GIFCT) stands at the center of the tech industry’s response to online extremism. Founded in 2017 by Facebook, Microsoft, Twitter, and YouTube, it has grown into an independent non-governmental organization with 39 member companies as of 2025.
GIFCT’s work rests on three pillars: delivering critical information to prevent terrorist activity online, empowering a broad range of stakeholders, and advancing shared research and understanding of the threat landscape. Its most well-known tool is its hash-sharing database, which contains approximately 390,000 distinct pieces of terrorist or violent extremist content in the form of perceptual hashes. When a member company identifies and removes a piece of terrorist content, a digital fingerprint (hash) of that content is shared through the database. Other member companies can then use that fingerprint to detect and remove the same material on their own platforms.
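The hash-and-match workflow described above can be sketched in a few lines. This is a toy version under stated assumptions: it uses a simple 64-bit "average hash" and Hamming-distance matching, whereas GIFCT's actual database relies on far more robust perceptual formats (such as PDQ for images), so treat it purely as an illustration of the mechanism, not the real system.

```python
# Toy sketch of hash-sharing between platforms. The "average hash"
# below is a deliberately simple perceptual hash; GIFCT's real
# database uses stronger formats (e.g. PDQ), which this does not model.

def average_hash(pixels):
    """Reduce a grayscale pixel grid to a 64-bit perceptual hash:
    each bit records whether a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class SharedHashDB:
    """Toy stand-in for a cross-platform hash-sharing database."""
    def __init__(self, threshold=5):
        self.hashes = set()
        self.threshold = threshold

    def share(self, h):
        # A member platform contributes the hash of removed content.
        self.hashes.add(h)

    def matches(self, h):
        # Another platform checks a new upload against shared hashes;
        # a small Hamming distance tolerates minor alterations.
        return any(hamming(h, known) <= self.threshold
                   for known in self.hashes)
```

Because the comparison is a bit-distance rather than an exact match, a slightly re-encoded or lightly edited copy of known content can still be caught, which is the point of using perceptual rather than cryptographic hashes here.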
In 2025 alone, GIFCT responded to more than 175 terrorist or mass-violence events and significant online terrorist developments. On two occasions, the organization activated its formal Incident Response Framework — a protocol designed to rapidly escalate cross-platform coordination during major events.
A significant milestone came in June 2025, when GIFCT and the UN Counter-Terrorism Committee Executive Directorate (CTED) signed a Memorandum of Understanding formalizing their cooperation. This agreement covers areas including the misuse of information and communications technologies for terrorist purposes, recruitment, incitement, and the financing and planning of terrorist activities.
Beyond hash-sharing, GIFCT also operates the Global Network on Extremism and Technology (GNET), its academic research arm, which is led by the International Centre for the Study of Radicalisation at King’s College London. GNET produces targeted research that informs both platform policies and government strategies.
The Hasher Matcher Actioner (HMA) tool, developed by Meta and made publicly available, represents another important development. This open-source tool allows smaller platforms — those that lack the resources of major tech companies — to build their own databases of hashed terrorist content and participate in the broader removal ecosystem. As Tech Against Terrorism has noted, this kind of resource-sharing is essential because many smaller platforms have become targets for terrorist exploitation precisely because they lack sophisticated content moderation systems.
The Role of AI in Detecting and Removing Terrorist Content on Social Media
Artificial intelligence has become the backbone of content moderation at scale. The numbers tell the story clearly.
According to Meta’s Transparency Center, Facebook’s AI tools flag 99.3% of terrorist-related content before any human reports it. The platform removes 99.6% of terrorist-related video content through automated systems. These figures represent a level of proactive detection that would be impossible through human moderation alone.
The broader AI content moderation landscape is growing rapidly. The global market was valued at $2.6 billion in 2023 and is projected to reach $7.5 billion by 2030. AI moderation tools can review 10,000 times more content per hour than human moderators and reduce manual moderation workloads by up to 70% in organizations that fully integrate them.
The technology works through two primary approaches. The first is hash-matching, which compares new content against databases of previously identified terrorist material. The second is classification, which uses machine learning and natural language processing to analyze new content and predict whether it belongs to prohibited categories. These two approaches work together: hash-matching catches known content quickly, while classification algorithms identify new, previously unseen material.
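A minimal sketch of that two-stage pipeline might look like the following. Here an exact SHA-256 lookup stands in for hash-matching and a toy keyword score stands in for a trained classifier; both are hypothetical simplifications of what production moderation systems actually do.

```python
import hashlib

# Two-stage moderation sketch: a fast hash lookup for known content,
# then a classifier for previously unseen content. The classifier here
# is a toy keyword score, not a trained ML model.

BLOCK_THRESHOLD = 0.5

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def classify(text: str) -> float:
    """Placeholder classifier: returns a pseudo-probability that the
    text belongs to a prohibited category. A production system would
    use a trained NLP model instead of keyword counting."""
    flagged_terms = {"attack_plan", "join_us_now"}  # hypothetical tokens
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in flagged_terms)
    return min(1.0, hits / len(tokens) * 5)

def moderate(text: str, known_hashes: set) -> str:
    data = text.encode("utf-8")
    # Stage 1: fast path -- exact match against previously removed content.
    if sha256_of(data) in known_hashes:
        return "remove (known content)"
    # Stage 2: classify previously unseen content; route borderline
    # cases to human review rather than removing automatically.
    if classify(text) >= BLOCK_THRESHOLD:
        return "escalate to human review"
    return "allow"
```

Note the design choice in stage 2: the sketch escalates to human review rather than auto-removing, reflecting the human-oversight safeguards that the UNICRI/UNOCT report discussed below calls for.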
However, significant challenges remain. AI struggles with context, cultural nuance, and linguistic complexity. A 2025 study by CREST (Centre for Research and Evidence on Security Threats) found that extremists actively develop evasion strategies, including:

- Platform-switching: migrating between mainstream and fringe platforms.
- Self-censorship: avoiding terms known to trigger detection.
- Censorship evasion education: banned users teaching others how to bypass restrictions.
- Coded language: dog-whistles and symbolic imagery that automated systems struggle to detect.
The UNICRI and UNOCT joint report on Countering Terrorism Online with AI emphasizes that while AI is essential, it must be paired with human oversight. Algorithmic decisions about what constitutes terrorist content carry significant implications for free expression. False positives can silence legitimate political speech, journalistic reporting, and human rights documentation. The report recommends “appropriate and effective safeguards, in particular through human oversight and verification” to ensure accuracy.
The EU’s Terrorist Content Online Regulation (TCOR), which has been in force since June 2022, requires hosting service providers to remove terrorist content within one hour of receiving a removal order from a competent authority. Meeting this deadline at scale essentially requires AI-powered detection and removal systems.
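The one-hour window translates into simple deadline arithmetic for a hosting provider. The sketch below models a hypothetical removal-order record; the actual order formats and workflows are defined by the competent authorities, not shown here.

```python
from datetime import datetime, timedelta, timezone

# Sketch of TCOR deadline tracking, assuming a hypothetical order
# record. Real removal orders follow formats set by competent
# authorities; only the one-hour arithmetic is illustrated.

REMOVAL_WINDOW = timedelta(hours=1)

class RemovalOrder:
    def __init__(self, content_id: str, received_at: datetime):
        self.content_id = content_id
        self.received_at = received_at
        # The clock starts when the order is received.
        self.deadline = received_at + REMOVAL_WINDOW

    def time_remaining(self, now: datetime) -> timedelta:
        """How long is left before the order becomes overdue."""
        return self.deadline - now

    def is_overdue(self, now: datetime) -> bool:
        return now > self.deadline
```

At the scale of a large platform, thousands of such clocks can be running at once, which is why the text above notes that meeting the deadline in practice requires automated detection and removal rather than purely manual handling.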
How Counter-Narrative Campaigns on Social Media Prevent Radicalization
Removing terrorist content is only half the equation. The other half — arguably the more important half — involves filling the void with compelling alternative messages. This is the domain of counter-narrative and alternative narrative campaigns.
The distinction between the two matters. Counter-narratives directly challenge and deconstruct extremist messaging. They confront the logic, expose the contradictions, and discredit the claims of violent ideologies. Alternative narratives, by contrast, do not engage with extremist content directly. Instead, they tell a different story — one centered on shared values, community resilience, positive identity, and constructive engagement.
The Brookings Institution has noted that despite the popularity of counter-narrative programs, evidence for their effectiveness in isolation remains limited. Many programs lack rigorous evaluation, clear theories of change, or meaningful engagement with the communities they aim to influence. A European Parliament study found that “the concept itself is rather underdeveloped and lacks a thorough grounding in empirical research.”
Yet promising examples exist. In Indonesia, the national counterterrorism agency (BNPT) manages the Duta Damai (Ambassadors of Peace) program, which recruits hundreds of young people across the country to create positive content on social media. A 2025 study published in Frontiers in Communication examined the case of Sofyan Tsauri, a reformed former terrorist convict who independently built a YouTube-based P/CVE initiative. The study found that personal narratives from credible voices — particularly those with direct experience of extremism — carry particular weight with audiences who might otherwise be skeptical of government-produced messaging.
Indonesia’s Ministry of Communication and Information Technology handled 5,731 pieces of content related to radicalism, extremism, and terrorism in the digital space between July 2023 and March 2024. Meanwhile, civil society organizations like Nahdlatul Ulama — the world’s largest Muslim organization — have adopted media-based strategies to combat radicalization at scale.
In Kenya, the YADEN #insolidarity campaign provides capacity-building, tools, and platforms for youth to develop their own messages about how violent extremism has affected their communities. This approach centers local voices rather than imposing external messaging.
The PRECOBIAS project in Europe takes a different approach. Through social media campaigns, it aims to enhance “digital resilience and critical thinking” among radicalized and at-risk young people by helping them understand their own cognitive biases — the mental shortcuts that make extremist narratives appealing.
The National Institute of Justice has rated counter-narrative practices as “Effective” for reducing risk factors for violent radicalization. Techniques that show promise include counter-stereotypical exemplars (presenting information that challenges dominant stereotypes), persuasion through identification with diverse perspectives, and inoculation — warning individuals about manipulation attempts before they encounter them. The “Bad News” game, a pre-bunking educational tool, has reportedly reduced susceptibility to disinformation among participants.
What the Christchurch Call and International Pledges Mean for Online Safety
The Christchurch Call to Action emerged from tragedy. On March 15, 2019, a gunman killed 51 people in two mosques in Christchurch, New Zealand. He live-streamed the attack on Facebook. He had posted his manifesto on Twitter and the forum 8chan minutes before opening fire.
In the wake of that horror, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the Christchurch Call — a voluntary global pledge to eliminate terrorist and violent extremist content online while supporting a free, open, and secure internet.
Since its launch, the Call has expanded significantly. In 2024 alone, four new technology firms joined the community: Anthropic, Discord, OpenAI, and Vimeo. The addition of AI companies was particularly significant, acknowledging the growing role of artificial intelligence in both generating and detecting extremist content.
The Christchurch Call has also undergone a structural transformation. New Zealand transitioned the initiative into an independent non-governmental organization, now funded by contributions from major tech companies including Meta and Microsoft. This shift reflects a strategy to preserve the Call’s multistakeholder model — bringing together governments, tech companies, and civil society — without any single government dominating.
Key achievements of the Christchurch Call community include:
- The launch of the Christchurch Call Initiative on Algorithmic Outcomes (CCIAO), which has achieved proof of concept for third-party audits of proprietary algorithms while protecting commercially sensitive information.
- An updated Crisis Response Protocol for rapid coordination during terrorist attacks or mass violence events.
- The Elevate project, which engages the Call’s community to address the risk of terrorist exploitation of new technologies, with a particular focus on AI.
- Partnership with the ROOST consortium to intensify efforts to eliminate terrorist and violent extremist content online.
The Christchurch Call represents a fundamentally different approach from regulation. It is voluntary, collaborative, and built on trust between stakeholders who sometimes have competing interests. Its effectiveness depends on sustained political will and corporate commitment — neither of which can be taken for granted.
How EU Regulations Like the Digital Services Act Combat Online Extremism
While voluntary pledges like the Christchurch Call rely on goodwill, the European Union has taken a regulatory approach. Two key pieces of legislation now govern how platforms handle extremist content in the EU.
The Regulation on Terrorist Content Online (TCOR), in force since June 2022, gives competent authorities in EU member states the power to issue binding removal orders requiring platforms to remove terrorist content within one hour. It applies to all hosting service providers offering services in the EU, regardless of where they are based. The regulation also requires platforms that are particularly exposed to terrorist content to take proactive measures — using technical tools (including AI) to detect and remove such content before it spreads.
The Digital Services Act (DSA), which became fully applicable to all platforms in February 2024, creates a broader framework for platform accountability. It requires online platforms to implement transparent content moderation policies, provide mechanisms for users to report illegal content, and conduct risk assessments related to systemic risks — including the dissemination of illegal content and the impact on fundamental rights.
For Very Large Online Platforms (VLOPs) — those with more than 45 million monthly users in the EU — the obligations are stricter. They must conduct annual risk assessments, submit to independent audits, and provide data access to researchers. Non-compliance can result in fines of up to 6% of global annual turnover.
The DSA has already been tested. Following the Hamas attack on Israel in October 2023, the European Commission sent X (formerly Twitter) a request for information regarding the alleged spread of terrorist content, hate speech, and disinformation on the platform. Formal proceedings were opened in December 2023, and in late 2025, the EU served X with the first penalty under the DSA.
The regulatory picture is not without tension. The DSA has been caught up in geopolitical disputes between the EU and the United States. Platform owners have characterized the regulation as a “censorship tool,” and U.S. officials have questioned its compatibility with American free speech traditions. These tensions highlight the fundamental challenge of governing a global internet through regional laws.
Youth-Led Social Media Movements That Counter Violent Extremism Worldwide
Young people are both the primary targets of extremist recruitment and the most powerful agents of prevention. This paradox defines the current PVE landscape.
The UNICRI (United Nations Interregional Crime and Justice Research Institute) has emphasized that empowering youth through constructive engagement gives them a sense of belonging, leadership skills, mentorship, and connections across diverse backgrounds — exactly the protective factors that make individuals resilient to extremist recruitment.
Several youth-led initiatives stand out in 2026:
Indonesia’s Duta Damai network remains one of the most extensive programs. Hundreds of young people across the country serve as peace ambassadors, creating positive social media content that offers an alternative to extremist messaging. The program is coordinated by Indonesia’s National Counter-Terrorism Agency (BNPT) but relies on the creativity and authenticity of young participants.
The STRIVE Asia program — a partnership between the European Union and the United Nations — operates across several countries in Central and South Asia. In Uzbekistan, the program organized special courses for over 150 community leaders on preventing violent extremism and responding to early signs of radicalization. It also funds local NGO initiatives and has established advisory groups to represent community interests.
In West Africa, a region that accounts for approximately 50% of global terrorism-related deaths, GIFCT partnered with the International Institute for Justice and the Rule of Law to convene a multi-stakeholder workshop in Malta in April 2025. Representatives from Benin, Ghana, Kenya, Malta, Mauritania, Nigeria, Senegal, and Togo participated. A central theme was the critical role of community — both for counter-terrorism practitioners and, crucially, for terrorists themselves, who actively use community-building through digital spaces to recruit, radicalize, and retain individuals.
The P2P: Challenging Extremism initiative, originally launched by Facebook and the U.S. State Department (now a public-private partnership), supports college students around the globe in developing social media tools and campaigns to counter violent extremist narratives. While questions remain about whether such campaigns reach their intended audiences, the program has trained thousands of young people in strategic communication.
What these initiatives share is a recognition that credibility matters more than production quality. Government-produced counter-narratives are often viewed with suspicion by the very audiences they target. Messages from peers, community leaders, and especially former extremists carry far more weight.
How Credible Voices and Former Extremists Use Social Media to Prevent Radicalization
The concept of the “credible messenger” has become central to modern P/CVE strategy. Research consistently shows that individuals who have direct experience with extremism — and who have subsequently rejected it — possess a unique ability to connect with at-risk audiences.
The Frontiers in Communication study on Sofyan Tsauri offers a detailed example. Tsauri, a reformed former terrorist convict in Indonesia, independently established a YouTube channel dedicated to P/CVE efforts. Through personal storytelling, he shares his journey from radicalization to rehabilitation. His content reaches audiences that official government channels cannot.
The study’s authors recommend that governments and civil society organizations support former extremists in building their media capacity, enhancing their technical skills, and improving their content creation and messaging strategies. The combination of authentic personal narrative and professional-quality content can achieve significant societal impact.
This approach is not without challenges. Former extremists may face social stigma, security risks, and psychological strain from repeatedly revisiting their past. Institutional support — including mental health services, legal protection, and sustainable funding — is essential for these programs to work long-term.
Indonesia’s BNPT TV YouTube channel provides another model. With over 377 videos as of 2025, the channel features interviews with former terrorists and convicts who have abandoned extremist beliefs. These first-person accounts serve a dual purpose: they humanize the process of deradicalization for the general public, and they offer a path forward for individuals who may be questioning their involvement in extremist movements.
The Growing Challenge of AI-Generated Extremist Propaganda in 2026
As we approach PVE Day 2026, one challenge looms larger than any other: the weaponization of generative AI by extremist groups.
A CREST research analysis published in October 2025 documented several concerning trends. ISIS has deployed AI-generated news anchors to deliver propaganda — a development first reported by The Washington Post. Neo-Nazi and white supremacist groups globally are using AI to promote their messages, spread misinformation, and support their cause. After the Southport attack in the UK, AI-generated content was used to spread distrust and conspiracy narratives within hours.
The DHS 2025 Homeland Threat Assessment warned that social media, encrypted messaging apps, and generative AI tools have accelerated recruitment while reducing the visibility of threat indicators, making early detection more difficult.
Extremist groups are exploiting AI in several specific ways:
- Content creation: Natural language processing generates authentic-sounding propaganda at scale.
- Targeted recruitment: AI-powered algorithms analyze online data to identify individuals sympathetic to extremist ideology.
- Evasion of detection: AI tools help extremists circumvent content moderation filters through coded language, altered imagery, and platform-specific manipulation techniques.
- Deepfakes: Synthetic video and audio content can impersonate public figures, fabricate events, or create compelling but entirely fictional propaganda.
The CyberWell 2025 Annual Report on online antisemitism illustrates the detection challenge. While platform removal rates for antisemitic content improved modestly (from 50% in 2024 to 52.53% in 2025), automated moderation systems struggled to detect implicit narratives embedded in memes, short-form videos, animations, emojis, and symbolic imagery. AI-generated antisemitic content proved particularly difficult for existing tools to identify.
On the defensive side, AI also offers powerful tools. Red-team simulations stress-test counter-terrorism guardrails. Shared-hash moderation coalitions powered by GIFCT and Meta’s HMA tool synchronize detection of extremist media across platforms. Pre-bunking educational tools combined with browser-based AI fact-checkers have shown early promise in reducing susceptibility to disinformation.
The arms race between AI-powered extremism and AI-powered defense will define the next chapter of online counter-terrorism.
How Governments and Tech Companies Collaborate to Counter Online Radicalization
Effective prevention of violent extremism online requires collaboration that cuts across traditional boundaries. No single government, company, or civil society organization can address the challenge alone.
The UN Global Counter-Terrorism Coordination Compact brings together 45 entities across the UN system, along with Interpol and the World Customs Organization. Its Working Group on Preventing and Countering Violent Extremism coordinates system-wide efforts, including the observance of International PVE Day.
At the national level, National Plans of Action to Prevent Violent Extremism (NAPs) provide frameworks for coordinated responses. The UN’s Global Programme on PCVE offers technical support to member states in developing, implementing, and evaluating these plans. To date, dozens of countries have developed such plans, though the quality of implementation varies significantly.
The Tech Against Terrorism initiative, a partnership between technology companies, governments, and the UN Counter-Terrorism Committee Executive Directorate (CTED), provides a bridge between government priorities and industry capacity. It develops guidelines, shares lessons learned, and offers technical tools for content moderation. Its Europe-focused project, funded by the EU, has developed valuable resources for smaller tech companies that lack the moderation infrastructure of major platforms.
The GAO’s recommendation that the FBI and DHS develop formal strategies and goals for information-sharing with social media and gaming companies reflects a broader recognition that government engagement with the tech sector has been insufficiently systematic. Without clear strategies, agencies “may not be fully aware of how effective their communications are with companies, or how effectively their information-sharing mechanisms serve the agencies’ overall missions.”
Multi-stakeholder workshops have emerged as a key forum for building practical collaboration. The GIFCT-IIJ workshop in Malta in April 2025, focused on West Africa, brought together government representatives, civil society members, and private sector experts from across the continent. A central finding was that community trust is essential — without it, communities are not resilient enough to withstand extremist narratives. Governments must consider the impact their own actions have on that trust.
Practical Ways to Support the Prevention of Violent Extremism on Social Media
International PVE Day is not only for governments and tech companies. Individual social media users, educators, community leaders, and organizations all have roles to play.
For individual social media users: Report terrorist and extremist content when you encounter it. Every major platform provides reporting mechanisms, and user reports remain an important complement to automated detection. Engage critically with content that promotes division, dehumanization, or violence. Share positive, inclusive stories from your own community.
For educators and parents: Digital literacy is one of the most effective upstream prevention tools. Teaching young people to recognize manipulation tactics, understand algorithmic amplification, and evaluate sources critically builds resilience against radicalization. The PRECOBIAS project and similar initiatives offer curricula and resources that can be adapted for different educational contexts.
For community and religious leaders: Provide spaces — both online and offline — where young people can discuss grievances, ask difficult questions, and explore identity without fear of judgment. Research consistently shows that a sense of belonging and community connection are among the strongest protective factors against radicalization.
For tech companies and platform developers: Invest in content moderation systems that are context-aware, multilingual, and culturally sensitive. Share tools and resources with smaller platforms through open-source initiatives. Participate in cross-industry collaboration through organizations like GIFCT and Tech Against Terrorism. Ensure transparency in algorithmic decision-making.
For policymakers: Develop and fund comprehensive national action plans that address both online and offline drivers of radicalization. Support research into what works in P/CVE — and what does not. Protect civil liberties while enabling effective prevention. Fund grassroots civil society organizations that work directly with at-risk communities.
What to Expect on International PVE Day 2026: Events and Global Activities
The fourth commemoration of International PVE Day on February 12, 2026, will feature a series of events organized by the UN Office of Counter-Terrorism, the Permanent Mission of Iraq to the United Nations, and other partners.
Based on the pattern established in previous years, activities typically include:
- High-level events at UN Headquarters in New York, bringing together member states, UN system entities, civil society, religious leaders, the private sector, and academia.
- Side events organized by individual member states, often focusing on regional challenges and national strategies.
- Social media campaigns using the hashtags #PVEDay, #12February, and #CounterTerrorism to raise awareness and share prevention resources.
- Webinars and panel discussions organized by research institutions, civil society organizations, and intergovernmental bodies.
The 2026 commemoration is expected to focus particularly on the nexus between new technologies and violent extremism — reflecting the growing concern about AI-generated propaganda, youth radicalization through online subcultures, and the need for innovative, technology-informed prevention strategies.
You can follow developments through the official UNOCT channels and contribute to the global conversation by sharing stories of resilience, peace-building, and community strength from your own context.
The Future of Preventing Violent Extremism Through Social Media and Technology
Looking ahead, several trends will shape the future of PVE in digital spaces.
First, the algorithmic accountability movement will gain momentum. The Christchurch Call Initiative on Algorithmic Outcomes has proven that third parties can audit proprietary algorithms while protecting commercial confidentiality. As this approach matures, platforms will face growing pressure to demonstrate that their recommendation systems do not amplify extremist content.
Second, cross-platform cooperation will become both more important and more difficult. As GIFCT’s 2025 forum noted, adversaries are constantly adapting. They migrate between platforms, exploit gaps in detection systems, and leverage encrypted messaging to avoid surveillance. The expansion of hash-sharing databases, incident response frameworks, and research networks is essential but must keep pace with the evolving threat.
Third, regulation will continue to spread globally, with the EU’s approach serving as a template for other jurisdictions. The “Brussels Effect” — the tendency for EU regulations to set global standards because of the size of the EU market — means that the DSA and TCOR will influence platform behavior well beyond Europe’s borders. However, regulatory fragmentation and geopolitical tensions could undermine the goal of a coordinated global response.
Fourth, youth engagement will move from the margins to the center of PVE strategy. The recognition that young people are not just vulnerable targets but also powerful agents of change is transforming how programs are designed and delivered. The most effective initiatives will be those that trust young people to lead rather than merely participate.
Fifth, credible messenger programs involving former extremists, survivors, and community leaders will receive greater institutional support and funding. The evidence base for these approaches is growing, and their authenticity gives them a reach that official government campaigns cannot match.
The challenge is immense. But so is the opportunity. Social media connects billions of people across every border, culture, and language. The same tools that extremists exploit to divide can be wielded to unite, to educate, and to heal. International PVE Day reminds us that prevention is not only possible — it is a responsibility shared by every person who participates in the digital world.
Frequently Asked Questions About International PVE Day and Social Media
When is International PVE Day 2026? International PVE Day 2026 falls on February 12. The observance is held annually on that date.
What does PVE stand for? PVE stands for Prevention of Violent Extremism. The full name of the observance is the International Day for the Prevention of Violent Extremism as and when Conducive to Terrorism.
Who established International PVE Day? The United Nations General Assembly established PVE Day through Resolution 77/243.
How does social media help prevent violent extremism? Social media serves as a platform for counter-narrative campaigns, digital literacy education, community-building, and the amplification of credible voices. AI-powered content moderation systems detect and remove terrorist content at scale. Industry coalitions like GIFCT facilitate cross-platform cooperation.
What is the GIFCT? The Global Internet Forum to Counter Terrorism is an industry-led initiative with 39 member companies that works to prevent terrorists and violent extremists from exploiting digital platforms. It maintains a hash-sharing database, an incident response framework, and an academic research network.
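To make the hash-sharing idea concrete, here is a minimal illustrative sketch, not GIFCT's actual implementation. Production systems typically rely on perceptual hashes (such as PDQ for images), which tolerate minor edits, whereas the plain SHA-256 digest used below only matches byte-identical copies; the database contents and function names here are hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint of a media file.

    Simplified: a SHA-256 digest matches only exact copies.
    Real hash-sharing consortia use perceptual hashing so that
    re-encoded or lightly edited copies still match.
    """
    return hashlib.sha256(content).hexdigest()

# Hypothetical shared database of fingerprints of known violating
# content, as a member platform might sync from a consortium.
shared_hashes = {fingerprint(b"known-violating-upload")}

def should_flag(upload: bytes) -> bool:
    """Flag an upload for human review if its fingerprint is shared."""
    return fingerprint(upload) in shared_hashes

print(should_flag(b"known-violating-upload"))  # True: exact match
print(should_flag(b"benign-holiday-video"))    # False: no match
```

The key design point is that platforms exchange only fingerprints, never the underlying media, which lets them cooperate on detection without redistributing the harmful content itself.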
What is the Christchurch Call? The Christchurch Call is a voluntary global pledge, launched in 2019 by New Zealand and France, to eliminate terrorist and violent extremist content online while supporting a free, open, and secure internet.
How can I participate in PVE Day? Share prevention resources on social media using hashtags like #PVEDay and #12February. Report extremist content on platforms. Support digital literacy programs in your community. Engage with local organizations working on peace-building and community resilience.