Could neuromorphic computing be a new solution to optimization problems?
Changed on 12/01/2026
Since they emerged after the Second World War, computers have undergone no fundamental changes in structure. Designed by the physicist and mathematician John von Neumann, their architecture performs calculations using certain components, such as processors (central processing units, or CPUs), while storing data in others (memory units).
Information therefore passes frequently between these two components, sometimes creating bottlenecks due to limited bandwidth and high energy consumption. In optimization problems involving multiple calculations repeated across large sets of variables, such bottlenecks become a particularly limiting factor.
With rapidly growing demand and new applications, the search for less energy-intensive paradigms in computer science has become a major focus. One promising line of research aims specifically to reduce the separation between memory and computation. Neuromorphic computing is one such paradigm. Within the Bonus project team, a joint team of the University of Lille and Inria, researchers El-Ghazali Talbi and Jorge Mario Cruz-Duarte are working on its implementation.
Neuromorphic computing arose from the observation that the human brain consumes very little energy, requiring only around 20 watts of power. By comparison, a single GPU (graphics processing unit, used in graphics cards) currently consumes about 300 watts, while the most recent supercomputers require tens of megawatts. The idea of drawing inspiration from the neurons of the human brain to develop neuromorphic computing emerged in the 1980s.
Spiking neural networks (SNNs) provide the main building blocks for this new computing paradigm.
“Unlike artificial neural networks, which are widely used for deep learning and generative AI, an SNN models the temporal dynamics of the spikes observed in real neurons.”
Researcher, Bonus project team
In the brain, when a neuron receives signals via the synapses (which connect neurons), it accumulates an electrical charge. Once a certain threshold is reached, the neuron emits an electrical spike that influences other neurons at their synapses. Rather than relying solely on binary signals, SNNs use the amplitude, shape, timing and other characteristics of these spikes to encode information. “This temporal coding is one of the reasons why neuromorphic systems can operate efficiently and selectively,” explains El-Ghazali Talbi.
The versatility of SNNs has given rise to a multitude of models, ranging from biologically plausible ones (for example, the Hodgkin-Huxley model) to those suitable for practical implementation (for example, the Izhikevich model or the Leaky Integrate-and-Fire model).
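To make this concrete, here is a minimal sketch of a Leaky Integrate-and-Fire neuron in Python: the membrane potential leaks towards its resting value, accumulates the incoming current, and emits a spike whenever it crosses a threshold. The function name and parameter values are illustrative choices, not taken from any particular library.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate an input current trace and return membrane potentials and spike times."""
    v = v_rest
    potentials, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays towards rest and
        # accumulates the incoming current.
        v += (-(v - v_rest) + i_in) * dt / tau
        if v >= v_threshold:          # threshold reached: emit a spike
            spikes.append(step * dt)
            v = v_reset               # reset after the spike
        potentials.append(v)
    return np.array(potentials), spikes

# A constant input drives the neuron to spike at regular intervals.
potentials, spike_times = simulate_lif(np.full(200, 1.5))
print(f"{len(spike_times)} spikes, the first at t = {spike_times[0]}")
```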
Neuromorphic architectures hold considerable promise. Their energy performance is enhanced by the absence of a separation between computation and memory, a feature inherent to the functioning of neurons and synapses. Furthermore, these networks operate asynchronously, neurons responding only when stimulated by signals from their neighbours, unlike artificial neural networks, where signals propagate layer by layer, activating all neurons.
This phenomenon enables parallel computation, as a neuron or group of neurons can perform tasks and communicate with its neighbours only when necessary. For optimization problems, this means that different parts of a candidate solution can be explored and updated simultaneously, without global synchronization.
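The sketch below illustrates this event-driven behaviour on an invented three-neuron toy network: updates are triggered by spike events taken from a queue, so only neurons that actually receive a signal do any work, instead of every neuron being updated at every step.

```python
from collections import deque

THRESHOLD = 1.0
# Toy network: neuron 0 projects to neurons 1 and 2, neuron 1 to neuron 2.
synapses = {0: [(1, 0.6), (2, 0.7)],
            1: [(2, 0.5)],
            2: []}
potential = {n: 0.0 for n in synapses}
updates = 0

events = deque([(0.0, 0, 1.2)])          # (time, target neuron, weight): initial stimulus
while events:
    t, neuron, weight = events.popleft()
    potential[neuron] += weight
    updates += 1
    if potential[neuron] >= THRESHOLD:    # the neuron fires and notifies its neighbours
        potential[neuron] = 0.0
        for target, w in synapses[neuron]:
            events.append((t + 1.0, target, w))

print(f"{updates} event-driven updates for {len(synapses)} neurons")
```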
Because they are so different from traditional architectures, neuromorphic systems have not yet been deployed at large scale and their design poses significant challenges. However, work has already been carried out in the fields of machine learning and neuroscience. El-Ghazali Talbi and Jorge Mario Cruz-Duarte aim to apply these systems to large-scale optimization problems, drawing in particular on the formalism of metaheuristics.
What might be the nature of a large-scale optimization problem? A good example would be the management of an entire fleet of city buses. The goal is to find solutions that minimise a given function (cost, travel time, etc.). In very complex cases, it is practically impossible to find an exact solution. One approach is therefore to proceed step by step.
In computer science, a heuristic is an operation that allows a problem to be solved without necessarily finding the optimal solution. Imagine arranging your groceries in the fridge while trying to minimise the space they occupy. A heuristic might involve, for instance, starting with all the small items. But you may need to adjust your approach as you go, moving on to larger items to improve the overall arrangement.
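The fridge example can be written in a few lines of code. The sketch below uses a simple first-fit rule with illustrative item sizes: each item goes on the first shelf with enough room, and changing the order in which items are considered (small first versus large first) changes how many shelves are needed, which is exactly the kind of adjustment described above.

```python
def first_fit(items, shelf_capacity):
    """Place each item on the first shelf with enough remaining room."""
    shelves = []                          # each shelf is the list of sizes placed on it
    for size in items:
        for shelf in shelves:
            if sum(shelf) + size <= shelf_capacity:
                shelf.append(size)
                break
        else:                             # no shelf has room: open a new one
            shelves.append([size])
    return shelves

groceries = [4, 8, 1, 4, 2, 1, 7, 3]
# Heuristic 1: start with the small items.
print(len(first_fit(sorted(groceries), shelf_capacity=10)))
# Heuristic 2: start with the large items, which here packs tighter.
print(len(first_fit(sorted(groceries, reverse=True), shelf_capacity=10)))
```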
“Metaheuristics act as higher-level strategies that decide when and how individual heuristics should be applied.”
Researcher, Bonus project team
In the case of a neuromorphic algorithm, this involves selecting a potential solution to the problem (or a set of solutions), slightly modifying it with a heuristic or an operation, choosing the new solution if it is better, adjusting the heuristic if necessary, and then repeating until the best possible solution (close to the optimum) is reached.
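Expressed as conventional (non-neuromorphic) Python, this loop might look like the following sketch. The toy objective function, step size and acceptance rule are illustrative stand-ins, not the actual NeurOptimiser procedure.

```python
import random

def objective(x):
    # Toy continuous problem: minimise the sphere function.
    return sum(xi ** 2 for xi in x)

def local_search(dimension=5, iterations=2000, step=0.5):
    current = [random.uniform(-5, 5) for _ in range(dimension)]
    best_value = objective(current)
    for _ in range(iterations):
        # Heuristic move: a small random perturbation of the candidate.
        candidate = [xi + random.gauss(0, step) for xi in current]
        value = objective(candidate)
        if value < best_value:            # keep the new solution if it is better
            current, best_value = candidate, value
        else:
            step *= 0.999                 # adjust the heuristic: slowly shrink the step
    return current, best_value

solution, value = local_search()
print(f"best value found: {value:.4f}")
```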
Currently, neuromorphic heuristic optimization tools exist mainly for specific problems, and only a few are applicable across different cases. El-Ghazali Talbi and Jorge Mario Cruz-Duarte therefore decided to develop NeurOptimiser, a general neuromorphic computing framework intended in particular for continuous optimization problems (in which variables can take real values).
This framework is built in a distributed manner around what the researchers call neuromorphic heuristic units (NHUs). Each NHU comprises several neurons and encodes, modifies and evaluates candidate solutions while communicating with other units. For example, an NHU can encode a candidate solution, apply a perturbation using its internal neurons and then send the modified version to neighbouring units.
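A hypothetical sketch of such a unit is given below. The class and method names are assumptions made for illustration and do not correspond to the NeurOptimiser code, but they show the encode, modify, evaluate and communicate roles described above, with several units exploring the same toy problem in parallel.

```python
import random

class NeuromorphicHeuristicUnit:
    """Illustrative NHU: holds a candidate solution, perturbs it and shares improvements."""

    def __init__(self, objective, dimension):
        self.objective = objective
        self.solution = [random.uniform(-5, 5) for _ in range(dimension)]  # encoded candidate
        self.value = objective(self.solution)
        self.neighbours = []

    def perturb(self):
        """Internal 'neurons' apply a small modification and keep it if it improves."""
        candidate = [x + random.gauss(0, 0.1) for x in self.solution]
        value = self.objective(candidate)
        if value < self.value:
            self.solution, self.value = candidate, value

    def communicate(self):
        """Send the current candidate to neighbouring units; they adopt it if it is better."""
        for unit in self.neighbours:
            if self.value < unit.value:
                unit.solution, unit.value = list(self.solution), self.value

# A small ring of units working on the same toy objective.
sphere = lambda x: sum(xi ** 2 for xi in x)
units = [NeuromorphicHeuristicUnit(sphere, dimension=3) for _ in range(4)]
for i, unit in enumerate(units):
    unit.neighbours = [units[(i + 1) % len(units)]]
for _ in range(500):
    for unit in units:
        unit.perturb()
        unit.communicate()
print(min(unit.value for unit in units))
```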
The NeurOptimiser architecture was designed using Intel’s open-source LAVA NC environment and is specifically intended for the company’s Loihi 2 neuromorphic chip. For this work, the researchers emulated the chip on a conventional computer.
Early experiments using this framework have already shown encouraging results. The system can handle several benchmark problems of low to medium dimension while consuming far fewer resources than traditional simulations. But the ambitions of El-Ghazali Talbi and Jorge Mario Cruz-Duarte do not stop there, as they are planning to extend applications to combinatorial and multi-objective problems, as well as to embedded computing and different neuromorphic chips.