Frédérique Segond: “Boosted by the Internet, modern warfare involves the manipulation of information.”
Changed on 15/09/2025
We see it every day: war is changing. We now talk about “cyberwar,” hybrid war, asymmetric war, and cognitive war. The term “cognitive warfare” was introduced in 2017 by Vincent R. Stewart, director of the Defense Intelligence Agency, when he explained that modern warfare had become a cognitive battle. For him, 21st-century warfare consists of manipulating the decision-making space through a struggle for information.
What is new about cyberwarfare, and more specifically cognitive warfare, is that it fully involves civilian populations. Whereas war used to be a military affair, its informational dimension, played out in cyberspace, now includes civilians both as victims and as soldiers.
As we know, artificial intelligence is revolutionizing many fields, and cybersecurity is no exception. Thanks to AI, we can detect anomalies in real time, predict attacks, and automate responses to ever-growing threats. AI has therefore become a valuable tool for strengthening the security of IT systems, and it is the focus of numerous research projects involving Inria and its partners at the intersection of AI and cybersecurity.
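To make this concrete, here is a minimal sketch of the kind of anomaly detection involved, assuming Python and scikit-learn; the features (bytes sent, session duration, failed logins) and the simulated data are invented for illustration and do not describe any Inria system.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The features and data are illustrative assumptions, not a real pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated "normal" traffic: [bytes_sent, session_duration_s, failed_logins]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[5_000, 60, 0], scale=[1_500, 20, 0.3], size=(500, 3))

# Train on historical traffic assumed to be mostly benign.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Score new events: predict() returns -1 for "anomalous", 1 for "looks normal".
new_events = np.array([
    [5_200, 55, 0],      # typical session
    [90_000, 4, 12],     # exfiltration-like burst with many failed logins
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{event} -> {status}")
```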
The downside is that cybercriminals also use AI to automate their attacks and make them more sophisticated and harder to detect.
Furthermore, the use of artificial intelligence involves the collection and processing of large amounts of personal data, raising crucial questions about privacy protection.
That is the challenge: to make AI a lever for protection, without its use becoming a new factor of vulnerability.
While cyberattacks have long been thought of as purely technical problems, another dimension that has been underestimated until now deserves our full attention: the human dimension.
AI enables cybercriminals to design faster, more targeted attacks, and to make them more convincing by playing on human psychology, in particular by exploiting cognitive biases.
Cognitive biases are patterns of thinking that influence our judgments in predictable ways that are not consistent with logic or probability. They constantly come into play in the decisions we make every day. In the digital context, they become psychological vulnerabilities that attackers know how to exploit with formidable precision. Examples include authority bias (a message claiming to come from a superior), urgency bias (“Hurry up!”, “Your account will be blocked...”), scarcity bias (“Last chance to win...”) and familiarity bias (a personalized email that seems “close”).
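As an illustration only, the biases listed above can be turned into crude detection heuristics. The sketch below is a deliberately naive keyword filter in Python; the cue lists are invented for the example, and real defenses rely on machine learning and far richer signals.

```python
# Naive heuristic flagging the cognitive-bias cues described above.
# Cue lists are illustrative assumptions; real systems use ML models.
BIAS_CUES = {
    "authority": ["your manager", "the director", "compliance department"],
    "urgency": ["hurry", "immediately", "account will be blocked"],
    "scarcity": ["last chance", "only today", "limited offer"],
    "familiarity": ["as we discussed", "following our conversation"],
}

def flag_bias_cues(message: str) -> dict[str, list[str]]:
    """Return the bias categories whose cues appear in the message."""
    text = message.lower()
    hits = {}
    for bias, cues in BIAS_CUES.items():
        found = [cue for cue in cues if cue in text]
        if found:
            hits[bias] = found
    return hits

email = ("Your manager asked me to contact you. Hurry, "
         "your account will be blocked tonight. Last chance!")
print(flag_bias_cues(email))
# {'authority': ['your manager'],
#  'urgency': ['hurry', 'account will be blocked'],
#  'scarcity': ['last chance']}
```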
Once again, data is obviously at the heart of the matter. These manipulations are made possible by the data we leave behind everywhere: on social media, in forms, in our clicks. In other words, cyberattackers can use our identity and digital footprint to design personalized messages. AI makes these manipulations even more realistic, automatic, and widespread.
This raises major issues. Not only are our computer systems under threat, but our personal data, our attention, and even our behavior have become targets. These latter aspects are central to the manipulation of information in cyberspace.
It is no longer enough to secure networks: false content must of course be identified and its mode of dissemination detected, but people must also be empowered, given the means to recognize attempts at manipulation and to adopt critical reflexes. This means training people to spot traps in order to build resistance, in particular through training on cognitive biases (learning to recognize the mechanisms that attacks exploit), simulation or phishing games (putting users in realistic situations to develop good reflexes, as sketched below), and the analysis of real cases (confronting the reality of the threat improves vigilance).
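To give a concrete flavor of the "simulation game" idea, here is a minimal, hypothetical Python sketch of a phishing-awareness quiz; the sample messages are invented, and this is not IntelLab or any actual Inria training tool.

```python
# Minimal phishing-awareness quiz: the learner labels each message,
# then gets immediate feedback. Messages are invented for the example.
SAMPLES = [
    ("IT dept: your password expires in 1 hour, click here to keep access.", True),
    ("Minutes of Tuesday's team meeting are attached, as discussed.", False),
    ("You won a prize! Last chance to claim, confirm your bank details.", True),
]

def run_quiz() -> None:
    score = 0
    for text, is_phishing in SAMPLES:
        answer = input(f"\n{text}\nPhishing? [y/n] ").strip().lower()
        if (answer == "y") == is_phishing:
            score += 1
            print("Correct.")
        else:
            print("Wrong: this one", "is" if is_phishing else "is not", "phishing.")
    print(f"\nScore: {score}/{len(SAMPLES)}")

if __name__ == "__main__":
    run_quiz()
```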
The future of cybersecurity is not only played out in lines of code, but also in our minds. The best defense is therefore an alliance between technology, ethics, education, and human awareness. A solution requires close collaboration between researchers, businesses, governments, and citizens, as well as a clear and evolving legal framework.
First, we launched a working group on cyber influence operations with the Ministry of the Armed Forces, the Cyber Campus, and the CNRS, with the aim of strengthening ties between civilian, military, academic, and industrial actors.
The subject of information manipulation has long been addressed by numerous Inria projects with our partner universities and the CNRS, whether in terms of content analysis or generation, or in terms of propagation and impact.
It is also in order to take this "human dimension" into account that Inria has made openness to the social sciences and humanities a priority of its scientific policy in our 2024-2028 strategy, in close collaboration with our partners. We have therefore recently opened positions in the social sciences and humanities to work on this subject within our project teams. This is a new step for us, and one of its motivations, beyond the impact of AI, is to bring all the weapons at our disposal to bear on information warfare and the manipulation of information, together, of course, with our partners.
Because we are also convinced that resistance requires awareness, which serious games can help build, we have developed, in close collaboration with the Ministry of the Armed Forces, IntelLab, an immersive platform devoted to intelligence issues and cyber influence operations, with the aim of supporting national resistance.
Tackling the multifaceted issue of disinformation obviously requires working within an ecosystem. That is why, in addition to our interactions with the Ministry of the Armed Forces, we have established relationships with public media outlets. Another structuring framework for Inria is the strategic partnership we have signed with Viginum, the national service reporting to the Prime Minister whose mission is to help detect information manipulation originating from foreign countries.
It is on the basis of all these elements that Bruno Sportisse announced, at the last meeting of the partners committee of the Digital Programs Agency led by Inria, the creation of a program dedicated to combating information manipulation within the Programs Agency, under the SGPI. The launch of such a program is explicitly part of our COMP 2024-2028, and we will be careful to involve our partners and stakeholders in this area.