“Don’t be so emotional”
Affective Computing under the AI Act and the GDPR
1. Introduction
“When dealing with people, remember you are not dealing with creatures of logic, but with creatures of emotion.”
Dale Carnegie
The human race has long been fascinated by the role of emotions in our lives. This fascination stretches from Aristotle, through nineteenth-century thinkers such as Charles Darwin and William James, and the scientists of the 1960s who first imagined machines recognizing human faces, all the way to the emotion recognition technologies we discuss today. Over the centuries, our understanding of these “complex experiences of consciousness, bodily sensation, and behaviour” has grown together with our awareness of how they can be manipulated. And now enter artificial intelligence.
Artificial intelligence has given us the ability to both recognize emotional states and manipulate them (not necessarily malevolently) en masse. Yet, despite centuries of research on emotions and their effect on human beings, now coupled with the prospects that automation has brought to the fore, our laws often fall short of granting individuals sufficient protection.
The most recent European legal instrument, the AI Act, ambitiously attempts to rectify this. To assess whether the Act is fit for this task, we will first briefly present the field of affective computing and the questions it raises. Then we will explore the existing pitfalls in protecting individuals against these systems, especially considering the protective framework of the GDPR. Finally, we will illustrate how the AI Act aims to cover the existing gaps as well as what the potential legal implications may be.
2. Affective computing in the legal context
Affective computing was first introduced in 1995, in a paper Rosalind W. Picard self-published because the technology it described was being dismissed as ‘science fiction’. Fast-forward two decades and her once radical claim that “computers [will be able to] read emotions (i.e., infer hidden emotional states based on psychological and behavioral observations)” is not so radical anymore. Quite the contrary: it now underlies a fair number of business models, some providing tailored, emotion-based marketing and business strategies, others making our environment more personalized and intuitive, as with smart home devices and cars.
As a result, the once marginal and confusing term now has a firm standing in the scientific literature. One discipline lagging behind, however, is the law.
Namely, apart from obvious cases of manipulation, many affective computing products have much more subtle effects, which will probably not be enough to trigger consumer protection and anti-fraud regulation. What we are then left with is the GDPR, since many of these technologies rely on algorithms extracting patterns from “a variety of measurements including facial expressions, speech, gait patterns and other metrics,” all of which can be considered personal data.

However, the GDPR is no longer alone on the frontlines. Another instrument that will have a role to play is the AI Act, which explicitly regulates emotion recognition systems in general, as well as the use of “polygraphs and similar tools” in particular. It remains unclear why the Act would not simply regulate affective computing, as the term is well established and its scope is broader, and equally unclear why it regulates “polygraphs and similar tools” separately. Polygraphs work by measuring “emotional responses to a series of questions” to calculate a deception score, while the AI Act defines emotion recognition systems as “AI systems [used] for the purpose of identifying or inferring emotions or intentions of natural persons” (Article 3(34)), which clearly covers polygraphs. Furthermore, the AI Act not only confusingly separates the two, but by explicitly mentioning polygraphs it also, to an extent, lends recognition to this pseudoscientific technology. However, to avoid going too deep into how things could or should have been, we will instead observe them as they are. In the next chapter we will therefore briefly consider the legal implications of the GDPR and the AI Act respectively, and uncover some of the persisting gaps in the regulatory framework.
3. Legal assessment
a. GDPR
We have already noted that the goal of affective computing, and emotion recognition as its subset, is to measure physical responses to infer emotional states. It is important to note that although these technologies cannot accurately assess subjective experience, they can estimate emotional responses and general attitude towards external stimuli, for instance, concluding that one movie, song or ad, on average, elicits a more positive reaction than another. And this is enough for them to influence our behavior.
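To make this concrete, the kind of claim such a system actually produces can be illustrated with a minimal Python sketch. The stimuli, viewers and valence scores below are entirely hypothetical; the point is only that a system can rank stimuli by the average reaction they elicit without ever accessing anyone’s subjective experience.

```python
# Minimal, hypothetical sketch: ranking stimuli by the average emotional
# reaction they elicit. Valence scores (-1.0 very negative, +1.0 very
# positive) and stimuli are invented for illustration.
from statistics import mean

# Per-viewer valence scores as inferred by some emotion recognition model
reactions = {
    "ad_A": [0.62, 0.45, 0.71, 0.30, 0.55],
    "ad_B": [0.10, -0.05, 0.22, 0.18, 0.02],
}

# The average reaction per stimulus is all the system needs in order to
# decide which ad, song or movie to show next.
ranking = sorted(reactions, key=lambda s: mean(reactions[s]), reverse=True)

for stimulus in ranking:
    print(f"{stimulus}: mean valence {mean(reactions[stimulus]):+.2f}")
```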
Two important things to separate here are the input to the system and its output. Focusing first on the output, emotion recognition systems will usually generate a prediction of a person’s mood, general attitude or even a specific emotion. These outputs are not necessarily personal in themselves, as the labels are generic and cannot, on their own, be used to identify a person. That changes, however, if we consider them inferences made on the basis of personal data, which might make them just as personal as the original data. The number of arguments for this interpretation is steadily increasing, and the reasoning works in both directions. The CJEU, for instance, decided in Vyriausioji tarnybinės etikos komisija that when dealing with inferences that can be classified as sensitive (e.g. our neural states and reactions can uncover facts about our health, but also about our political or other beliefs), the data used to make these inferences deserves the same treatment, even if it would not be considered sensitive data otherwise. Another way of protecting inferred emotional states relies on the fact that the combination of various elements can still make it possible to identify the individual (e.g. by linking inferred emotional states to the user profile or to the data points used to make the inference), which makes the data personal. It is therefore fairly reasonable to conclude that emotions, as inferences made on the basis of sensitive, personal data, are to be classified as personal if they can in any way, and by anyone, be traced back to an individual. Whether the inferred data could also be classified as sensitive is admittedly more complex, and we will not consider it further at this stage.
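The identifiability argument can likewise be sketched in a few hypothetical lines: an inferred emotion label that looks generic on its own becomes traceable to a person the moment it can be joined with a user profile or the session that produced it. All identifiers and records below are invented.

```python
# Hypothetical sketch: a generic emotion label becomes personal data once
# it can be linked back to a profile. All data here is invented.

# Output of an emotion recognition system, keyed only by session id
inferred_states = {"session-81f3": "anxious", "session-2c9a": "content"}

# Separately held profile data for the same sessions (e.g. an ad platform)
profiles = {"session-81f3": {"name": "J. Doe", "age": 34}}

def link(session_id: str):
    """Join the 'anonymous' inference with whatever profile data exists."""
    profile = profiles.get(session_id)
    if profile is None:
        return None  # no link available, arguably not identifiable
    return {**profile, "inferred_state": inferred_states[session_id]}

print(link("session-81f3"))
# {'name': 'J. Doe', 'age': 34, 'inferred_state': 'anxious'}
```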
On the other hand, in order to make the inference, the system will have to process personal data (e.g. facial expressions, gait, speech, etc.), some of which may qualify as special category data. This qualification depends on whether, for example, facial expressions are processed through specific means allowing the unique identification of a natural person. The same qualifying criterion would then per analogiam also apply to other (potentially) biometric data, such as speech. Whether the processed data is special category data will therefore have to be assessed on a case-by-case basis, with the AI Act now strengthening the conclusion that it is, by defining emotion recognition systems as systems making inferences based on biometric data. It is important to stress here that, although certain actors would like to apply the same criterion when determining whether the processed data is personal at all, that argument cannot hold. The GDPR defines personal data as any information relating to an identified or identifiable natural person. It is therefore irrelevant whether the developer of an emotion recognition system actually identifies the person; what matters is whether identification is theoretically possible. Yet, despite this being the case, many of these technologies and their providers fail to comply with the GDPR’s obligations, especially those concerning lawfulness and transparency. This means that data subjects are exposed to these systems while often being completely unaware of the fact.
We can therefore conclude that even though the GDPR theoretically provides protection to affected persons, it fails to do so in an effective manner. But all is not yet lost; perhaps the AI Act can rectify the situation and provide the much-needed protection.
b. AI Act
The AI Act deals with the question of affective computing, regulating it in at least three specific instances: emotion recognition systems in education and at work, emotion recognition systems in all other contexts, and “polygraphs and similar tools” used in law enforcement, migration, asylum and border management. Consequently, there are certain obligations that apply to these three categories respectively.
The first category concerns emotion recognition in education and at work: these systems are in general prohibited (Article 5), unless used for medical or safety reasons. The supposed reason for this prohibition is that, “considering the imbalance of power in the context of work or education, combined with the intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain natural persons or whole groups thereof.” Unfortunately, the Act does not clarify why this power imbalance, intrusive nature and potentially discriminatory effects justify a prohibition only in these two contexts. Nor does it clarify which medical and safety reasons might still justify their use. These are all questions that will have to be answered once the Act is in force.
The second category, covering the use of “polygraphs and similar tools” in law enforcement, migration, asylum and border control, is classified as high-risk. Firstly, as already pointed out, it is entirely unclear why the Act describes these systems in this way rather than using the same “emotion recognition” terminology. Secondly, both Recitals 38 and 39 acknowledge that the use of these systems in the said contexts is associated with a significant degree of power imbalance due to the vulnerability of the people affected by the technology, and yet this does not seem to justify their prohibition. Thirdly, the Recitals acknowledge the possible effects of low-quality training data, lack of transparency and lack of accountability on affected persons, yet the use of these systems in law enforcement is excluded from transparency obligations (Article 52(2)), and systems used in both contexts must only be registered in a “secure, non-public section of the EU database” (Article 51(1c)). So not only does the “significant power imbalance” in these contexts not merit prohibition, it apparently does not even merit the basic requirements of transparency. The only straw to grasp at here is Recital 41, which states that “the fact that an AI system is classified as a high-risk AI system … should not be interpreted as indicating that the use of the system is lawful under other acts of Union law or under national law.” At least the Member States that forbid the use of such systems, such as Germany, where the use of polygraphs in court and in criminal investigations has been forbidden since 1954, can continue to protect their citizens from the pseudoscientific outputs of these tools.
The third category, covering emotion recognition systems used in all other contexts, is also classified as high-risk, without any apparent exceptions. Of course, there always remains the possibility for a developer to claim that a particular system is not high-risk because it poses no risk of significant harm to life, health or other fundamental rights (Article 6(2)). However, this will probably be a high threshold to meet considering the power of these technologies, and such a claim will still have to be reported to the authorities. Where the threshold is not met, the AI Act imposes stringent requirements that will have to be built into the system. These include designing and running risk and quality management systems, providing for the possibility of human oversight, mandatory registration and documentation obligations, as well as automated logging and record keeping. Finally, Article 52(2) explicitly mandates increased transparency towards the persons affected by these systems, as well as compliance with existing data protection rules. These obligations are by no means trivial, especially considering that many such systems, including e.g. smart billboards and emotion-tailored content recommendations, often commence processing operations as soon as an individual comes into the proximity of the system.
Finally, there is one more instance of somewhat hidden regulation of emotion recognition systems, this time in the context of access to public services. Namely, Annex III (5)(c) classifies “AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of emergency first response services” as high-risk. Evaluating and classifying emergency calls to establish priority can hardly occur without speech pattern analysis, in much the same way as today’s biggest companies deploy emotion recognition speech analysis on customer service calls to determine who should be served first. In the emergency call context, however, it is disturbingly easy to see how this can lead to unjust outcomes. For example, some criteria for call redistribution include recognized states of fear and anxiety, which may lead to calls being classified as more urgent. The result may be that a caller in a more urgent situation who sounds calm waits longer than a caller in a less urgent situation who is prone to panicking. What makes this even worse is the mental gymnastics required to reach this conclusion and to consider the implications of the provision in the first place.
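A deliberately naive sketch shows how such a prioritization rule can invert actual urgency. The weights, emotion labels and calls below are invented and do not describe any real dispatching system; the only point is that adding a boost for detected panic can let a calm caller in a graver situation wait longer.

```python
# Hypothetical sketch: emergency call triage where detected panic boosts
# priority. All weights, labels and calls are invented for illustration.

CLINICAL_URGENCY = {"cardiac_arrest": 8, "minor_injury": 4}
PANIC_BOOST = {"calm": 0, "anxious": 3, "panicking": 5}

def priority(call: dict) -> int:
    """Higher score means the call is answered and dispatched sooner."""
    return CLINICAL_URGENCY[call["incident"]] + PANIC_BOOST[call["detected_emotion"]]

calls = [
    {"caller": "A", "incident": "cardiac_arrest", "detected_emotion": "calm"},
    {"caller": "B", "incident": "minor_injury", "detected_emotion": "panicking"},
]

for call in sorted(calls, key=priority, reverse=True):
    print(call["caller"], priority(call))
# With these invented weights, caller B (score 9) outranks caller A (score 8),
# despite the objectively graver situation on A's end of the line.
```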
4. Conclusion
The AI Act is far from a magical solution to all the questions and legal loopholes that the GDPR left open in the regulation of emotion recognition systems. The Act still leaves wide gaps, especially where the deployment of these technologies by public authorities and the police is concerned. However, when it comes to commercially deployed emotion recognition systems, the Act is bound to close at least some of the existing legal gaps.
Finally, even though some of the obligations under the Act are a force to be reckoned with and will have to be considered from the very start of the development process, in the end the protection achieved will always come down to enforcement. Due to the inherently invisible character of these systems, enforcing the newly introduced obligations will be a challenge in its own right. We can only hope that the market surveillance authorities tasked with monitoring adherence to the Act’s obligations will be up to the task.