
Oana Goga: from disinformation to transparency, science as a force for democracy

Changed on 12/06/2025

As a specialist in the risks associated with online platforms, Oana Goga was one of the first researchers to document the dangers of social networks, from targeted political advertising to disinformation and threats to privacy. She chose to work on subjects that were still considered marginal in the early 2010s. In this interview, she looks back at a bold scientific career marked by changes of direction and by her determination to use science to inform political decisions.
Portrait of Oana Goga with the title "Elles font le numérique"
© Inria / Photo B. Fourrier

How did you map out your career as a researcher? Has computer science always been the obvious choice for you?

As soon as I started secondary school, I knew that I wanted to work on the Internet. I've always been fascinated by technological innovation on the Internet and I wanted to be part of it. In my home country – Romania – research was often seen as a “default choice”, reserved for those who failed to obtain good jobs as engineers. In France, I discovered a different reality. 

I first came to France on an Erasmus-type internship, and then as a research engineer at Inria and ENS in Lyon. That's when I realised that researchers were the people carrying out the research. So I rethought my career path and decided to study for a PhD. It wasn't easy. Nobody wanted to take me on as a PhD student at first. So I did another Master's, got excellent marks and that opened doors for me. 

What did you learn from your years as a PhD student? 

In 2009, I started studying for a PhD on the measurement of Internet speed. After a few months, however, I got the impression that I hadn't chosen the right subject and that I was witnessing the end of an era: the great technical advances for the Internet had already been made. At the same time, Google, Facebook and the social networks were expanding, and I could see new risks emerging, particularly in terms of privacy.

I therefore decided, not without some trepidation, on a radical change of subject for my PhD in order to focus on the risks posed by social networks. This was not considered a “serious” subject at the time. Two of my papers were rejected 11 times; that was very hard to take. But I also met some key people, like Krishna P. Gummadi, a director at the Max Planck Institute for Software Systems in Germany, who believed in my work. He was the one who gave me the tools and the confidence to go further.

That's one of the lessons of my career: the environment and the people you meet can change everything. All it takes is for a renowned researcher to say to you: ‘You are capable,’ and everything becomes possible.

I always wanted to go far in my research: to be a very good researcher and to publish at the best conferences. Then Krishna P. Gummadi said to me: ‘What you should aim for is not to be published at the best conferences, but to have the best papers at the best conferences.’ This was a revelation. That’s the mindset I still work with today.

A brief biography

  • 2010-2014 PhD in Computer Science at Sorbonne University (formerly Pierre and Marie Curie University)
  • 2014-2017 Post-doc at Max Planck Institute for Software Systems in Germany
  • 2017-2024 CNRS research fellow in the SLIDE team at LIG (Grenoble Computer Science Laboratory), and then in the CEDAR team at LIX (École Polytechnique Computer Science Laboratory)
  • 2022 Winner of an ERC “Starting Grant” for the MOMENTOUS project
  • 2023 Winner of the Lovelace-Babbage Prize from the French Academy of Sciences and the French Computer Science Society (SIF)
  • 2023-2024 External expert for the European Commission
  • 2024 CNRS bronze medal winner
  • 2024 Inria Senior Researcher in the CEDAR team at LIX (École Polytechnique Computer Science Laboratory)

Have you run into any obstacles as a woman in research?

For a long time I considered myself lucky. During my early years, I didn't experience any particular difficulties. It was a period when work was very individual and heavily reliant on your supervisor. If you had a good supervisor, then everything would be fine and you would make progress. That was the case for me.

But strangely enough, things became more complicated once I was in post and had to interact with more people. I wasn't always taken seriously, particularly when I pointed out administrative obstacles that were preventing me from doing my research. And it wasn't always men who caused the problems, either.

It simply took me some time, as I asserted myself as a researcher, to learn how to set limits. As you gain in scientific confidence, you have fewer doubts about the legitimacy of your needs. You learn to be more direct, to assert your authority.

In France, there are many initiatives to help women scientists, for recruitment to institutes or to raise the profile of women, for example. I think that I've benefited from the work of previous generations.

You are now a recognised expert on digital risks, working with institutions such as the European Commission. What issues are you currently researching? 

I’m studying the risks associated with online platforms, including disinformation, privacy, political advertising and impacts on mental health. I try to provide concrete, rigorous data on these phenomena. These are truly central issues in today's society, but at the start of my career, there were only three of us studying social networks. 

I was one of the first people to take an interest in online political advertising, even before the Cambridge Analytica affair. I developed a collaborative tool to collect examples of targeted advertising and understand why they were aimed at certain profiles. The aim was to provide greater transparency. A lot of my work was carried out fairly early in the debates, at a time when legislators needed to make decisions quickly. I was heard by the European Commission during the drafting of the Digital Services Act. Through our publications, I called for the enshrinement of transparency obligations in law, which is now the case.

We have also shown that the very definition of “political advertising” is problematic: people’s perceptions differ in 50% of the cases, depending on the content or context. The computational implementation is therefore very complex. When it comes to advertising on issues of general interest to society, such as abortion or immigration, for example, there may be a fine line between political and humanitarian advertising. You cannot simply impose bans or restrictions without running the risk of harming civil society as well. However, we can ask for greater transparency.

Data donation: a new civic duty?

Many measurement methodologies and scientific experiments rely on individual data donations and civic participation. This is often the only way we can carry out external audits in the interest of democracy, without being dependent on platforms and what they give us. I would like to see major awareness-raising campaigns on this subject. In the meantime, I am setting up a European panel to monitor disinformation: anyone interested can join our mailing list to receive information.

What major scientific challenges are you trying to tackle today?

We have two main focuses. Firstly, detecting coordinated online influence campaigns. The idea is not to state whether an item of information is true or false, but to analyse the global dynamics: if organised groups are disseminating a political agenda through different channels, this can be detected. This is sensitive but essential work.

The second focus aims to understand and measure whether advertising algorithms exploit cognitive biases in a predictive manner. I'm working with economists to develop a rigorous measurement methodology. We recruit volunteers, identify their biases in tests, and then see whether the algorithms spot these biases and exploit them in their advertising. For example, our methods include creating competition between advertisements to see which ones are chosen by algorithms. The initial results show that, yes, algorithms can exploit these biases in a predictive manner. 

More broadly, you also have to pick your battles: you can't study everything. In the CEDAR team to which I belong, four PhD students are working with me, and I also collaborate with the researchers Ioana Manolescu and Oana Balalau. I have decided to focus on Facebook and YouTube because they are the most influential platforms, even if the data is more complex to obtain. We have also conducted studies on minors, on disinformation... But today, I think that legislators need to do some of the work. Science can alert, document, measure and propose. But at some point, the rules have to change.