On 1 December 2022, I-Com and FTI Consulting will host the roundtable “Protecting EU consumers in the digital age” to discuss the Commission’s “Digital fairness – fitness check on EU consumer law” initiative.
The fitness check will evaluate three fundamental pieces of EU consumer policy legislation: the Unfair Commercial Practices Directive (UCPD), the Consumer Rights Directive (CRD), and the Unfair Contract Terms Directive (UCTD), to determine whether horizontal consumer law instruments remain adequate for protecting consumers online. It will look at issues including dark patterns, personalisation practices, influencer marketing, the marketing of virtual items and the addictive use of digital products.
Ahead of the publication of the public consultation on the fitness check and in parallel with the 2nd Annual Digital Consumer Event, this event provides an opportunity for relevant stakeholders to discuss consumer protection in the digital environment.
This short article on AI used in Chatbots (AI Chatbots) is part of a series of publications that FTI Consulting and I-Com are developing to inform the public debate ahead of the event.
AI chatbots are a specific use case of AI as a ‘virtual assistant’ or ‘intelligent personal assistant’: a software agent that can perform tasks on a user’s behalf (see expert.ai, Aisera). They are a key advancement in the tech landscape, adopted by many businesses because they streamline communication between humans and machines.
AI chatbots are software programs trained to hold human-like conversations using Natural Language Processing (NLP) capabilities (see Drift). With NLP, an AI chatbot can interpret written human language, detect the intent behind a user’s query and provide the most appropriate response. Questions posed via a dialogue box on a website can thus be answered automatically. AI chatbots help make customer service more efficient: they learn from each conversation, adapt their responses to different situations and provide personalised answers to site visitors. They are also available 24/7 and can respond to customers immediately.
AI chatbots are an example of how automated decision-making systems (ADM systems) are used in our daily lives. According to some stakeholders, the problem is that ADM systems can be opaque and do not necessarily lead to more objective or neutral decisions; their results can be unfair and even discriminatory (see Algorithm Watch, June 2022).
From a consumer perspective, this raises a whole host of issues:
- To identify behavioural patterns that remain undetectable by humans, an AI chatbot must process large amounts of user data.
- An AI chatbot’s ability to learn and adapt to user preferences and deliver customised, personalised content means a great deal of data about site visitors is being processed, with implications under the General Data Protection Regulation (GDPR) for how that data is handled.
- Where chatbots use AI to recognise users’ emotions from the words they enter, such emotion recognition systems typically rely on biometric data.
The start of the EU’s proactive approach to AI regulation can be traced back to 25 April 2018, when the European Commission presented the Communication “Artificial Intelligence for Europe”. This was followed in 2019 by the publication of the “Ethics Guidelines for Trustworthy AI” by the High-Level Expert Group on Artificial Intelligence (AI HLEG), to provide guidance to all stakeholders and set a framework for achieving trustworthy AI. In February 2020, the Commission published “Shaping Europe’s digital future” and “A European Strategy for Data”, as well as the White Paper “Artificial Intelligence: a European Approach to Excellence and Trust”, to create an “ecosystem of excellence” and an “ecosystem of trust” for AI.
Then, in April 2021, the European Commission presented the “AI Package” made up of three documents: the Communication on Fostering a European Approach to Artificial Intelligence, the 2021 update to the Coordinated Plan with Member States and a proposal for an AI Regulation laying down harmonised rules for the EU (AI Act).
The AI Act, whose adoption procedure is still in progress, sets out differentiated obligations following a risk-based approach, distinguishing between uses of AI that create unacceptable risk, high risk, and low or minimal risk, each with different consequences. Specifically, practices considered unacceptable because they are contrary to the values of the Union and violate fundamental rights are banned outright. For instance, toys with voice assistants that incite or may incite minors to dangerous behaviour, manipulative practices targeting minors or persons with disabilities, and the use of subliminal techniques that exploit individuals’ unawareness all fall within this prohibited category. Chatbots themselves, however, are deemed to pose only limited risk, resulting in a minimum transparency requirement so that users are aware they are interacting with a machine.
Finally, on 28 September 2022, the European Commission adopted a proposal to revise the Product Liability Directive and a proposal for a Directive on Artificial Intelligence Liability, which introduce several provisions beneficial to consumers. These include, for instance, a consumer’s right to compensation when harmed by an unsafe product imported from a third country. There is also the introduction of a presumption of causation where the claimant can prove both that there is fault and that there is a causal link with the AI. For businesses, the provisions entail the disclosure of evidence that a claimant would need to prove their case in court, subject to safeguards for the protection of trade secrets.
The use of a chatbot or virtual assistant inevitably involves the processing of users’ personal data. The first issue that arises in relation to chatbots is compliance with the EU’s data protection framework, the GDPR, and in particular with Article 9 (special categories of personal data) and Article 22 (automated individual decision-making).
Machine learning gives rise to a further critical issue: the difficulty of defining all the purposes of the processing with certainty in advance. From this perspective, it is fundamental to promote consumer awareness of the information contained in the data controller’s privacy policy. Moreover, where the GDPR requires the consumer’s consent to be collected, the privacy policy must also explain how to revoke that consent and unsubscribe.
Among stakeholder views, the specific issue of chatbots and automated decision-making is not elaborated on as much as other issues, such as dark patterns or online scams. However, some stakeholders have expressed positions on related matters (e.g. personalisation, automated decision-making), and within this group the positions of consumer organisations and industry associations differ.
Consumer organisations argue that the proliferation of AI systems weakens the consumer by disrupting free choice, and as a result advocate introducing the concept of digital vulnerability into the revision of the UCPD. The private sector has been more cautious. Industry associations believe regulatory frameworks are already in place to deal with the consumer harms associated with automated decision-making, and argue that consumers tend to favour personalised offerings even though these affect their decision-making. In this regard, they call for such concerns to be tackled both online and offline in order to avoid regulatory fragmentation, and for more clarity on what counts as a ‘personalisation’ practice.