Human-Computer Interaction

How should we interact with chatbots?

Changed on 28/08/2024
How much trust should we place in chatbots? And how can we optimise our interactions with them? As these tools take on an unprecedented scope of use, these questions are becoming crucial. They are the focus of a thesis by Clélie Amiot, a doctoral student under the supervision of François Charoy and Jérôme Dinet in the Coast team, an Inria project team run jointly by the Inria Centre at the University of Lorraine, the CNRS and the University of Lorraine, and attached to the Loria laboratory.
Coast: illustration of trusted collaborative applications
© Inria / Photo C. Morel

Human-computer interaction (HCI) from a multidisciplinary perspective

“When I was deciding on my thesis topic, ChatGPT and other similar tools were not yet known to the general public, and we thought we would concentrate on more traditional chatbots that rely on precise, identified knowledge, such as FAQs,” explains Clélie Amiot. “The initial aim was to see how users could be helped to extract as much information as possible. But with the rise of generative chatbots, the issue of the trust we place in the answers they provide has become central, as their sources are more difficult to identify.” 

The doctoral student and her supervisors decided to adapt: the thesis, which began at the end of 2019, did end up focusing on this question, but also on the benefits of using chatbots in very large-scale collaborations, i.e. involving several dozen or even hundreds of participants. It sheds new light through a multidisciplinary analysis of human-computer interaction. “The acceptability of chatbots has already been studied from numerous technocentric approaches and a few anthropocentric perspectives,” notes Jérôme Dinet, lecturer and researcher in ergonomic psychology at the University of Lorraine and co-supervisor of Clélie Amiot’s thesis alongside François Charoy, professor and member of the Coast team. “But here, computer science and ergonomic psychology come together for the first time.”

Overconfidence on the part of humans

The young researcher developed two experiments to implement this innovative approach. The first involved observing how volunteers react to information depending on whether it is provided by a human or a chatbot. Participants recruited from the University of Lorraine mailing list were asked questions about survival techniques in the forest, for example, and received help from either a human or a chatbot. But there was a subtle catch. What they believed to be a chatbot was in fact controlled by a human, who was therefore providing the same answers as in the first situation. The reactions, however, were very different. 

“When the assistant is human, the participants think more, tend to argue more and let themselves be influenced less,” notes Clélie Amiot. “When they think it is a chatbot, on the other hand, they are more likely to follow its advice and be less critical. They tend to take the information it provides for granted.” These results raise questions because, for the time being, the sources many of these generative AI systems draw on are difficult or impossible to access and verify.

One answer to this overconfidence is “cognitive forcing”. “For example, the user is asked to give their own opinion before the automated answer is provided, which then forces them to compare the two and think things through,” explains Clélie Amiot. “There is also a lot of work to be done on ‘onboarding’, i.e. during the very first interactions with the chatbot. It can be useful at that point to see the chatbot fail, so that we calibrate our expectations and learn to recognise its failures before we get used to relying on it too heavily.”
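
To make the idea concrete, here is a minimal sketch of cognitive forcing in a chat loop: the assistant withholds its suggestion until the user has committed to an answer of their own, then shows both side by side. The function names and the example question are illustrative assumptions, not the interface used in the thesis experiments.

```python
# Minimal sketch of "cognitive forcing" (hypothetical, not the thesis setup).
# The chatbot withholds its answer until the user commits to their own,
# then displays both so the user must compare and reflect.

def ask_with_cognitive_forcing(question: str, bot_answer: str) -> None:
    print(f"Question: {question}")
    # Step 1: force the user to state their own answer first.
    user_answer = input("Your answer, before seeing the bot's suggestion: ")
    # Step 2: only now reveal the automated suggestion.
    print(f"Chatbot suggests: {bot_answer}")
    # Step 3: prompt an explicit comparison instead of passive acceptance.
    if user_answer.strip().lower() != bot_answer.strip().lower():
        print("Your answer differs from the chatbot's. Which do you keep, and why?")

if __name__ == "__main__":
    ask_with_cognitive_forcing(
        "Which side of a tree does moss tend to grow on?",
        "Mostly the shadier side, often the north in the northern hemisphere.",
    )
```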

Smoother collaboration thanks to chatbots

The second experiment was based on a purpose-built collaborative platform on which 72 participants, divided into teams of 4, had to work together to complete 16 problem-solving modules. “This type of large-scale collaboration, which is typical of the corporate world, is extremely difficult for humans to manage: you have to memorise what each person brings to the conversation, and you have to be available all the time because the participants keep different schedules,” explains Clélie Amiot. “A chatbot, on the other hand, does not encounter these difficulties: it has no memory or schedule limitations, and it can even translate conversations if the participants speak different languages.”

In the experiment, some teams benefited from the help of a chatbot capable of relaying information already contributed by the participants. “And in these teams, the conversations were much more fluid and dynamic than in those without a chatbot,” emphasises Clélie Amiot. This argues for integrating chatbots into large-scale collaboration, since even a very simple chatbot improves the quality of exchanges. “With a more complex version, capable of accessing company documents on its own, for example, the benefits could be even greater,” says the doctoral student.
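
The kind of “relay” behaviour described above needs very little machinery, which a short sketch can show: a bot that logs every team message and, on request, surfaces earlier contributions matching a keyword. This is a hypothetical simplification for illustration, not the platform built for the experiment.

```python
# Hypothetical sketch of a simple "relay" chatbot for team collaboration:
# it remembers every message and, when asked, relays earlier contributions
# that mention a given keyword. Not the actual experimental platform.

from dataclasses import dataclass, field

@dataclass
class RelayBot:
    log: list[tuple[str, str]] = field(default_factory=list)  # (author, message)

    def record(self, author: str, message: str) -> None:
        """Store a contribution; unlike a human, the bot forgets nothing."""
        self.log.append((author, message))

    def recall(self, keyword: str) -> list[str]:
        """Relay every earlier message that mentions the keyword."""
        return [f"{author}: {msg}" for author, msg in self.log
                if keyword.lower() in msg.lower()]

bot = RelayBot()
bot.record("Alice", "Module 3 needs the red wire cut first.")
bot.record("Bob", "I finished module 5; module 3 is still open.")
print(bot.recall("module 3"))
# ['Alice: Module 3 needs the red wire cut first.',
#  'Bob: I finished module 5; module 3 is still open.']
```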

Towards chatbots tailored to users’ needs

This, however, is conditional on being able to control the source of the information the chatbot uses. “The natural language capabilities of a chatbot like ChatGPT are impressive, but the next step will be to use them in a more managed way, linking them to reliable databases,” continues Clélie Amiot. “The aim is to bring together the best of two worlds: symbolic AI, which is based on explicit declarative rules and knowledge, and deep learning, which is based on statistical knowledge.” 
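
One common way to link a language model to a reliable database is retrieval-grounded answering: look the question up in a curated knowledge base first, and let the model only rephrase what was found. The sketch below illustrates that general pattern under assumed names (`KB`, `generate`); it is not the architecture proposed in the thesis.

```python
# Generic sketch of grounding a chatbot's answers in a curated knowledge
# base instead of the model's open-ended statistical knowledge.
# `generate` stands in for any language-model call (an assumption here).

KB = {
    "moss": "Moss tends to grow on the shadier, damper side of trees.",
    "water": "Running water should still be boiled or filtered before drinking.",
}

def generate(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return f"[model output for: {prompt!r}]"

def grounded_answer(question: str) -> str:
    # Retrieve vetted facts first; answer only from what was found.
    hits = [fact for key, fact in KB.items() if key in question.lower()]
    if not hits:
        return "I don't have a reliable source for that."
    return generate(f"Rephrase for the user, using only these facts: {hits}")

print(grounded_answer("Is it safe to drink river water?"))
```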

In the meantime, Clélie Amiot’s research provides a new perspective on the use of chatbots. “Chatbots, like all technological innovations, tend to give rise to polarised opinions, even among non-users,” notes Jérôme Dinet. “Clélie’s research has provided us with real scientific arguments to contribute to the debate. The aim is to convince chatbot designers and developers to take greater account of users’ fears, attitudes and expectations from the outset, so as to offer tools that will be readily accepted.” It is a way of restoring the human element to its rightful place.

Cognitive science at the service of social video games

Since 2022, Clélie Amiot has been working alongside Nicolas Gauville and Jimmy Etienne, two computer science PhDs who founded the start-up Cats & Foxes. Supported by Inria Startup Studio, which provides them with material and human resources and enables them to forge links with various research teams, they are developing a video game called VirtualSociety. The aim is to create virtual worlds that emphasise caring and social relationships, and Clélie Amiot is contributing her expertise in user interaction. “Gaming is interesting for research because it allows us to observe relationships within virtual worlds, but it also represents an opportunity in the field of health, for example through virtual reality therapies for treating certain phobias or autistic disorders,” she points out. She plans to officially join the start-up’s scientific team once she has completed her thesis.

The expertise of the Coast team

User behaviour is constantly evolving as we become accustomed to new services and new ways of working together, and simply improving existing solutions is not enough to meet these challenges. The team’s research addresses problems arising from the evolution of contemporary technologies, as well as those that can be anticipated, exploring three directions: large-scale collaborative data management, the composition of data-centric services and, above all, a foundation for developing reliable collaborative systems.