When the chatbot becomes a witness: how a simple conversation with AI can prove costly

Abstract: The use of artificial intelligence models has become a common practice in the business environment. Few, however, understand that the history of conversations with a chatbot may be obtained during an unannounced inspection and construed as evidence of anti-competitive intent – with consequences that may include fines of up to 10% of turnover. The article addresses, in accessible terms, several practical questions: (i) Can a conversation with a chatbot become evidence before the Romanian Competition Council?; (ii) Why are such discussions not protected by attorney-client confidentiality?; (iii) What types of questions addressed to a chatbot may raise concerns for the authorities?; (iv) What concrete measures can companies take to protect themselves? The topic is particularly relevant in light of the record number of unannounced inspections carried out by the Romanian Competition Council, as reported in its 2025 activity report, and the increasingly widespread adoption of artificial intelligence tools in day-to-day professional practice.
Keywords: chatbot, inspection, Romanian Competition Council, activity report, professional secrecy, data confidentiality, internal policy
An employee who asks a chatbot, “How can we align our prices with our main competitor without being detected?” may be handing the competition authority exactly the evidence it needs. Chat histories with artificial intelligence models can be seized during a dawn raid, and their content may be interpreted as evidence of anti-competitive intent. The consequence? Fines of up to 10% of the company’s total turnover.
For example, a court in the United States recently ruled that such conversations are not protected by attorney-client privilege. The mere fact that they were subsequently forwarded to lawyers does not retroactively transform them into privileged communications.
Thus, what may seem like an efficiency tool – turning to AI to generate options in situations involving potential competition law risks – can become a major vulnerability in the context of investigations conducted by the competition authority.
Can a conversation with a chatbot cost a company up to 10% of its turnover?
The record number of dawn raids carried out by the Romanian Competition Council (RCC), as reflected in its 2025 activity report, signals an intensification of its enforcement activity. Inspection powers are not limited to traditional documents: any information stored or archived in electronic form may be reviewed and seized, regardless of the medium on which it is held. The sole exception covers documents protected by attorney-client privilege (legal professional privilege).
The use of AI models, now routine in professional activities, may form a central part of the body of evidence in an RCC investigation if the history of such interactions is collected during a dawn raid and analysed alongside e-mails or other materials, potentially resulting in fines of up to 10% of the undertaking’s total turnover.
Implications for companies in Romania
The experience of using a chatbot is inherently conversational, creating the impression of a private dialogue carried out away from the watchful eyes of the authorities.
From a legal standpoint, however, the discussion takes place through a third party, under specific contractual terms. In the absence of clear safeguards and well-defined internal policies, these interactions will generally not benefit from legal professional privilege.
The risk is not limited to major market players or company management. From multinational groups to businesses with a small market share, any company can be exposed if AI models are used without precautions and without internal guidelines. Even seemingly simple prompts such as “What are the risks if we align our prices with our main competitor?” or “How can we avoid detection of a market-sharing agreement?” may, under certain circumstances, constitute evidence in an investigation.
Such interactions may be interpreted as revealing intent, a degree of risk awareness, or even the existence of a strategy, even where the chatbot is used merely to generate documents or rephrase ideas.
What is to be done?
Essentially, the internet remains a public space. From a competition law perspective, a simple test should therefore be applied before using a chatbot: if the information is current or forward-looking, company-specific and capable of directly influencing commercial conduct (prices, margins, volumes, customer lists, strategies), it should not be shared in an informal setting. Data with potential competitive impact requires careful analysis, while historical, aggregated or public information is, in principle, less problematic. At the same time, implementing a clear internal policy on data confidentiality in interactions with AI models, supported by regular training for team members, can mitigate these risks.
Georgiana Bădescu, Partner, Schoenherr și Asociații SCA
Teodora Burduja, Attorney at Law, Schoenherr și Asociații SCA
Mara Nedelcu, Associate, Schoenherr și Asociații SCA
