Agentic AI, the next turning point in artificial intelligence
Changed on 23/03/2026
Agentic AI refers to an evolution in artificial intelligence systems, which are now capable of going beyond simply executing isolated instructions. Rather than limiting themselves to responding to a query or prompt, these systems can pursue a goal autonomously.
When faced with a complex goal, they develop a strategy, plan a sequence of tasks and execute them in a coherent manner. They also have the ability to use tools and interact with their environment.
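The loop described above (devise a plan, execute the tasks, use tools along the way) can be sketched in a few lines of Python. Everything here is illustrative: the `Agent` class, the fixed plan, and the toy tools are stand-ins for what a real system would delegate to an LLM and to external services.

```python
# A minimal sketch of the agentic loop: given a goal, the agent plans
# sub-tasks, executes them with tools, and keeps a trace of its actions.
# All names (Agent, plan, the tool functions) are hypothetical, not a real API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]
    log: list[str] = field(default_factory=list)

    def plan(self, goal: str) -> list[tuple[str, str]]:
        # In a real system an LLM would produce this plan; here it is fixed.
        return [("search", goal), ("summarise", goal)]

    def run(self, goal: str) -> list[str]:
        results = []
        for tool_name, arg in self.plan(goal):
            output = self.tools[tool_name](arg)  # act on the environment
            self.log.append(f"{tool_name}({arg!r}) -> {output!r}")
            results.append(output)
        return results

# Toy tools standing in for real ones (web search, text generation, ...)
tools = {
    "search": lambda q: f"3 articles found on '{q}'",
    "summarise": lambda q: f"summary of '{q}'",
}
agent = Agent(tools)
print(agent.run("agentic AI"))
```

The point of the sketch is the control flow: the user supplies only the goal, and the agent itself decides which tools to call and in what order.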
The areas of application are numerous and varied.
While agentic AI currently relies heavily on generative models (particularly LLMs), it differs from them in that it is more general and action-oriented, going beyond the simple generation of text, images or code.
Agentic AI represents less of a fundamental scientific breakthrough than a functional and systemic breakthrough.
Indeed, agentic AI allows users to request the completion of ‘high-level’ tasks without having to specify the intermediate steps and sub-steps themselves. It therefore marks a new functional stage in AI.
For example, instead of asking ‘write an email’, the user can now say ‘manage my emails this week’.
It is this notion of autonomy in action (and not just access to information) that characterises agentic AI.
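The shift from ‘write an email’ to ‘manage my emails this week’ can be illustrated as a task-decomposition step: the agent, not the user, breaks the high-level goal into sub-tasks. The decomposition table below is a hypothetical stand-in for what an LLM planner would produce.

```python
# Illustrative sketch of goal decomposition: a high-level goal is expanded
# into sub-tasks the user no longer needs to spell out; a low-level task
# passes through unchanged. The table is hypothetical, not a real planner.
DECOMPOSITIONS = {
    "manage my emails": [
        "fetch inbox",
        "classify by urgency",
        "draft replies to urgent messages",
        "archive the rest",
    ],
}

def decompose(goal: str) -> list[str]:
    # An agentic system would derive this plan itself (e.g. via an LLM);
    # a classic assistant would require one prompt per sub-task instead.
    return DECOMPOSITIONS.get(goal, [goal])

print(decompose("manage my emails"))  # four sub-tasks, planned by the agent
print(decompose("write an email"))    # already low-level: returned as-is
```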
In the scientific world, there are already some very recent tools that aim to carry out highly complex tasks from start to finish: research tracking, programming, formatting results, and writing in conference format.
In this regard, a significant breakthrough was made last April with the AI Scientist v2 tool.
The consequences for the scientific world are therefore already apparent with the proliferation of automated publications, but still uncertain in terms of the most central aspects of research: the actual increase in scientific knowledge.
Agentic AI is transforming research ethics by introducing systems capable of acting autonomously. It raises major issues of responsibility, transparency and consent, as well as difficulties in terms of traceability and scientific reproducibility.
By incorporating implicit values and objectives, these agents can amplify bias and misuse. Agentic AI therefore calls for a rethinking of ethical governance in order to maintain human control over delegated scientific action.
The societal impact is potentially significant and probably already underway, as many professional tasks can now be performed by AI.
The decline in the recruitment of junior programmers is a prime example. However, we do not yet have enough hindsight to tell whether certain jobs will be directly replaced or whether professions will be transformed through the emergence of ‘AI-ready’ profiles.
Beyond employment, the challenges posed by agentic AI are identical to those of generative AI, but amplified due to the greater autonomy of agentic AI: privacy protection, bias and fairness, environmental impact, ethics, system security, etc.
Numerous Inria project teams are involved in research into agentic AI and its applications, as it covers a wide range of topics.
The research areas concerned include game theory, distributed systems, LLMs (either as agents or as orchestrators), databases and their links with AI agents, communication standardisation, interactions between robots and/or humans (in particular the psychological and sociological dimensions of organisations, as well as dynamic aspects), energy efficiency, and the evaluation and security of AI agents.
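One of the patterns mentioned above, an LLM acting as an orchestrator, can be sketched as a router that delegates sub-tasks to specialist agents. The specialists and the keyword-based routing rule below are toy stand-ins, not a real framework: in practice the LLM itself would decide which agent to invoke.

```python
# Sketch of the orchestrator pattern: one component routes each task
# to a specialist agent. Both specialists and the routing rule are toys;
# a real orchestrator would let an LLM make the routing decision.
from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "code": lambda task: f"[coder] patch for: {task}",
    "data": lambda task: f"[analyst] query for: {task}",
}

def orchestrate(task: str) -> str:
    # Keyword rule standing in for the LLM's routing judgement.
    kind = "code" if "bug" in task else "data"
    return SPECIALISTS[kind](task)

print(orchestrate("fix the login bug"))    # routed to the coding specialist
print(orchestrate("monthly sales report")) # routed to the data specialist
```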
The fields of application are equally varied, including autonomous vehicles, software development, healthcare, and Industry 4.0/5.0.