Artificial intelligence

Improving Machine Learning Models with Time

Changed on 24/02/2026

MALT, a new project team of the Inria Centre at Rennes University, France, designs methods that address the difficulties of training machine learning models on temporal data, as well as those posed by adapting models to the evolution of data over time. The scientists also study the trustworthiness of these temporal models, with implications for robustness, efficiency, explainability, privacy and fairness.
© Alline De Paula Reis

AI models translate, write, predict, compose music, generate images, answer questions and amaze us in a thousand other ways. Yet, in machine learning, algorithms still struggle with most contexts that involve time constraints. “Temporal data is fraught with problems,” sums up Élisa Fromont, head of the Malt project team[1]. “It is often multivariate, riddled with multiple dependencies between variables and over time.”

“One typical time dependency is seasonality. If I am to predict electricity consumption, I must understand that it is summer, not winter, that people work away from home during the day and come back in the evening, that there is a public holiday or a soccer match.” A whole tangle of intertwined events conspires to make the data highly fluctuating and hard to anticipate.

Extracting Valuable Information

Since most machine learning models do not handle temporal data very well, specialized algorithms have emerged. An alternative to such specialized models consists in preprocessing temporal data before sending it to the usual ML tools. In other words, finding a representation that extracts the valuable information in such a way that it becomes intelligible to run-of-the-mill models. In scientific parlance, this is called data embedding.
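As a minimal illustration of this idea (a toy feature set of our own choosing, not the team's method), a multivariate time series can be mapped to a fixed-length vector of summary statistics that any run-of-the-mill classifier accepts:

```python
import numpy as np

def embed_series(series):
    """Map a (timesteps, variables) array to a fixed-length feature
    vector: per-variable mean, standard deviation and linear trend.
    A toy 'embedding' -- real methods learn far richer representations."""
    t = np.arange(series.shape[0])
    means = series.mean(axis=0)
    stds = series.std(axis=0)
    # slope of a least-squares line fitted to each variable over time
    slopes = np.polyfit(t, series, deg=1)[0]
    return np.concatenate([means, stds, slopes])

# A 50-step series with 3 variables becomes a 9-dimensional vector
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))
print(embed_series(x).shape)  # (9,)
```

Once every series is a fixed-length vector, standard tools (a random forest, a logistic regression) can be trained on it directly, which is precisely what such an embedding buys.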

Portrait of Élisa Fromont (2024) - Malt - Rennes

“There is a lot at stake in this topic because, once a good data representation has been learned, every task, including data generation, becomes easy to tackle. However, the peculiarities of multivariate temporal data mentioned above make this embedding task especially challenging.”

Élisa Fromont, professor and researcher at the University of Rennes, head of the Malt team

Foundation Models Struggle With Time

To tackle this embedding problem, AI heavyweights such as Google, with TimesFM, intend to come up with a foundation model[2] that could easily handle time series. “But it doesn’t generalize that well to every type of data and every downstream task, because it is faced with too much diversity in temporal series. One ends up having to work on the specifics of one's own data.”

Data generation is one such challenging task, especially when dealing with multivariate signals. Yet there is no shortage of industrial applications. “To protect privacy, for instance, one might want to use synthetic data instead of real personal data.”

Early prediction on temporal data is also of great interest. “In this setting, we consider that data arrives continuously in a stream. The goal is to analyze this data and make a decision as soon as possible. This usually saves time and money, and possibly lives in the medical domain. Sometimes there is some uncertainty in the prediction and one would rather wait a bit. So it is a matter of trade-off between earliness and accuracy of the prediction. Evaluating this true uncertainty in the prediction is an important topic shared among many machine learning researchers.”
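A crude way to picture this earliness/accuracy trade-off (a simple confidence-threshold rule sketched by us, not the team's algorithm) is to commit to a prediction only once the model's confidence on the data seen so far clears a threshold:

```python
import numpy as np

def decide_early(prob_stream, threshold=0.9):
    """Scan incremental class-probability estimates and commit to a
    prediction as soon as confidence clears the threshold.
    Returns (time_of_decision, predicted_class). The threshold is the
    earliness/accuracy knob: lower it and decisions come sooner but
    riskier; raise it and the model waits for more evidence."""
    for t, probs in enumerate(prob_stream):
        probs = np.asarray(probs)
        if probs.max() >= threshold:
            return t, int(probs.argmax())
    # stream exhausted: fall back to the last estimate
    return len(prob_stream) - 1, int(probs.argmax())

# Confidence typically grows as more of the series is observed
stream = [[0.55, 0.45], [0.7, 0.3], [0.93, 0.07]]
print(decide_early(stream, threshold=0.9))  # (2, 0)
print(decide_early(stream, threshold=0.6))  # (1, 0)
```

Real early-classification methods replace this fixed threshold with a learned stopping rule and a calibrated estimate of the prediction uncertainty, which is exactly the hard part the quote points to.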

Learning Over Time

A second research axis of the team focuses on learning over time. “Take a factory where some AI model has been doing real-time video analysis for, say, the last three years from one particular camera sensor. All of a sudden, they change the camera. As a result, the pictures are no longer exactly the same as the ones used for the initial training. There is a change in data distribution. So we want to transform the model or the data without necessarily having to restart the training from scratch, which could be too costly, or impossible if the original data is no longer available. We can 'recalibrate' once: that is called domain adaptation. Or we can 'recalibrate' continuously through what is called continual learning. In the latter case, we want the model to keep on learning without forgetting what it has learned on past data. We may add new tasks: new recognition classes, new types of data, or a mix of both. In our team, we are working on methods for doing that.”

Another option is sequential learning. “The model doesn’t have all the data at once. It doesn’t learn everything in one go. It interacts with an environment and obtains rewards for its actions and, over time, it adapts its interaction strategy in order to maximize these rewards.” One family of algorithms for doing that is called multi-armed bandits. “Note that this topic is of interest to MALT, but it is also at the heart of Scool, an Inria project team in Lille.”
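The simplest multi-armed bandit strategy is epsilon-greedy, sketched below on a toy problem of our own (bandit research, including Scool's, studies far more refined strategies such as UCB or Thompson sampling):

```python
import numpy as np

def epsilon_greedy(true_means, steps=5000, eps=0.1, seed=0):
    """Minimal epsilon-greedy bandit: with probability eps pull a
    random arm (explore), otherwise pull the arm with the best
    empirical mean reward so far (exploit).
    Returns (index of the arm judged best, average reward per step)."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    counts = np.zeros(k)
    values = np.zeros(k)   # empirical mean reward of each arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < eps:
            arm = int(rng.integers(k))      # explore
        else:
            arm = int(values.argmax())      # exploit
        reward = rng.normal(true_means[arm], 1.0)
        counts[arm] += 1
        # incremental update of the arm's empirical mean
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return int(values.argmax()), total / steps

best, avg = epsilon_greedy([0.1, 0.5, 0.9])
print(best, avg)  # with enough steps, arm 2 emerges as the best
```

The `eps` parameter encodes the exploration/exploitation trade-off that the quote describes: the strategy keeps sampling apparently inferior arms just often enough to revise its estimates.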

The third research axis revolves around the trustworthiness of the temporal models presented above, with several questions popping up. One is explainability. “Models are often so complex that we don’t know what they rely on when they make a prediction. So we can hardly trust them. If trust is to be restored, one must be able to explain the basis for the decision.”

Fairness and Privacy

Fairness and privacy are also of concern when it comes to increasing trust, as data, and therefore models, might be biased or could leak private personal information about individuals. “Take medical imaging, for instance. Some models work better for some populations than for others because there is less data about the latter. Can we mitigate this? A fair model is trained on a dataset. If I change the data over time, will the prediction remain fair?”

In addition, one must also ascertain the algorithm’s robustness. “A model is trained on a dataset. If I change the data over time, will the prediction remain as accurate? Are there specificities to temporal models regarding robustness?”

Efficiency is “another concern to increase trust, at least on the societal level. Doing more with less: smaller models, smarter training strategies, compression, embedded systems, etc.”

A Strong Industrial Flavor

The team has a strong industrial dimension. “It runs the gamut from medical imaging to telecommunications, with partners such as Orange, Siemens or Stellantis.” Of the 12 PhD students currently in the team, 5 are funded through Cifre, a French programme through which companies get involved in academic research.

Lastly, it is worth mentioning the presence in MALT of a scientist from the French Defense Artificial Intelligence Agency (Amiad). “She is interested in methods for analyzing temporal graphs and detecting anomalies in them,” a topic with well-known ramifications in cybersecurity.


[1] MALT is a project team of Inria and Rennes University, part of the UMR Irisa. Its permanent members are Élisa Fromont, Patrick Bouthemy, Romaric Gaudel, Simon Malinowski, Romain Tavenard, Paul Viallard and Barbara Pilastre.

[2] A foundation model is a model trained on vast datasets so that it can be applied across a wide range of use cases, including those for which it wasn’t trained.