Deep Dive Into Automated Decision-Making: GDPR’s Protection Is Not Effective

TobiasMJ
12 min read · Oct 3, 2021


Andrew Grove, the former CEO of Intel, allegedly said:[1]

high tech runs three times faster than normal businesses, and the government runs three times slower than normal businesses. So, we have a nine-times gap.

One area where governmental and EU regulation will likely fail to keep up with high tech is automated decision-making.

As I addressed in my latest post, automated decision-making is a wide and rather fluid concept. On one end of the spectrum, it covers personalized pricing on websites and smart fridges that are set to automatically buy certain types of food on a given day of the month. On the other end of the spectrum, it covers complex algorithms that decide students’ grades or evaluate a criminal defendant’s risk of committing new crimes. So yes, we could potentially face a future where algorithms decide if students go to college or for how long criminals go to jail. Purportedly, GDPR puts a stop to this development with Article 22.

On the surface, it may seem like the protection offered by GDPR is effective in terms of ensuring the fundamental rights of the individual when it comes to automated decision-making (the right to privacy, freedom of expression, the right to a fair trial, etc.). As we dig deeper into the legal text, however, it seems to me that the protection offered by GDPR is too vague to effectively shield individuals against the negative consequences of automated decision-making.

Introduction: Article 22 — What Does It Say, What Does It Mean?

The General Data Protection Regulation (GDPR)[2] protects EU citizens against automated decision-making to some extent. Article 22 (1) says:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

Article 22 (2) lists three situations in which automated decision-making is allowed under GDPR. It is allowed when:

  • necessary to enter into or to perform a contract (Article 22 (2) (a))
  • authorized by Union law or Member State law (Article 22 (2) (b))
  • based on the data subject’s explicit consent (Article 22 (2) (c))

I go into a bit more depth on these exceptions here.

Whenever the automated decision-making is based on a contract or on the data subject’s explicit consent, Article 22 (3) sets out additional safeguards that the data controller has to implement. According to Article 22 (3), the controller should at least ensure the data subject’s right to express their views, to obtain human intervention, and to contest the decision.

Finally, Article 22 (4) prohibits automated decisions based on the special categories of personal data in Article 9 (1), unless the data controller obtains the data subject’s explicit consent or the processing is necessary for reasons of substantial public interest.
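
To summarize the structure of the provision, here is a toy sketch of the Article 22 logic as a Python function. The flag names are my own illustrative shorthand, not legal terms of art, and a real compliance assessment would of course also involve the “suitable measures” requirements and far more nuance:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    solely_automated: bool
    significant_effects: bool          # "legal" or "similarly significant"
    necessary_for_contract: bool       # Article 22 (2) (a)
    authorized_by_law: bool            # Article 22 (2) (b)
    explicit_consent: bool             # Article 22 (2) (c)
    uses_special_categories: bool      # Article 9 (1) data
    substantial_public_interest: bool  # Article 9 (2) (g)

def article_22_allows(d: Decision) -> bool:
    """Toy check of whether automated decision-making is permitted."""
    # Article 22 (1) only applies to solely automated decisions with
    # legal or similarly significant effects.
    if not (d.solely_automated and d.significant_effects):
        return True
    # Article 22 (4): special categories require explicit consent or
    # substantial public interest (plus suitable safeguards).
    if d.uses_special_categories and not (
        d.explicit_consent or d.substantial_public_interest
    ):
        return False
    # Article 22 (2): contract, law, or explicit consent.
    return d.necessary_for_contract or d.authorized_by_law or d.explicit_consent

# The smart fridge from the introduction: automated, but the effects
# are not "legal or similarly significant", so Article 22 does not bite.
smart_fridge = Decision(
    solely_automated=True, significant_effects=False,
    necessary_for_contract=False, authorized_by_law=False,
    explicit_consent=False, uses_special_categories=False,
    substantial_public_interest=False,
)
print(article_22_allows(smart_fridge))  # True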

Rights of the Individual

GDPR grants certain rights to the individual (see also here):

  • The right to be informed when data is processed about them (Article 13 and 14)
  • The right to access any information processed about them (Article 15)
  • The right to rectify data that is inaccurate (Article 16)
  • The right to ask for deletion of their data (Article 17)
  • The right to restrict processing, for instance when the accuracy of the personal data is contested (Article 18)
  • The right to receive data that they have provided to the controller (Article 20)
  • The right to object to the processing of their data (Article 21)
  • The right to not be subject to automated decision-making (Article 22)

Article 22 (1) can be interpreted either as a prohibition or as a right of the individual to object to automated decision-making.[3] The difference may seem insignificant, but it is fundamental to a proper understanding of Article 22.

If Article 22 (1) is interpreted as a prohibition against automated decision-making, it implies that the data controller is obligated not to engage in automated decision-making unless they can show that one of the conditions in Article 22 (2) (a-c) applies. No action is required on behalf of the data subject. Instead, a supervisory authority would be responsible for ensuring that the automated decision-making is compliant with GDPR, and for handing out a fine, if necessary.

If Article 22 (1) is interpreted as a right of the individual to object to automated decision-making, the individual carries the burden of proving that one of the conditions in Article 22 (2) (a-c) is not met. The concerned individual will have to take action, which requires both awareness of the existence of automated decision-making and a willingness to intercede.[4]

The wording of Article 22 (1) is ambiguous and could be interpreted either as a general prohibition or as a right of the individual. The Article 29 Working Party (now replaced by the European Data Protection Board (EDPB)) made it clear in its Guidelines on Automated Decision-Making from 2017 that Article 22 (1) is a general prohibition.[5] However, this interpretation is still debated among scholars for rhetorical, historical, and practical reasons.[6] A general prohibition against automated decision-making would have severely limiting consequences, for instance for innovation and for established practices in the banking and insurance sectors.

I suppose that Member States may interpret Article 22 (1) in either direction for now, until the issue is resolved by the Court of Justice of the European Union.

Linguistic Analysis of Article 22 (1)

Whether a right or a prohibition, Article 22 (1) is restricted in two ways[7]:

1) the decision has to be based solely on automated processing,

2) the decision has to produce “legal” or “similarly significant” effects on the data subject.

If these two requirements are not fulfilled, the data subject has no way of objecting to automated decision-making, at least not directly under GDPR.

“Solely” based on automated processing

An important detail in Article 22 is the word “solely”: “The data subject shall have the right not to be subject to a decision based solely on automated processing (..)”.

Whenever humans intervene in the decision-making process, the decision is no longer “solely” based on automated processing. The problem is that, under a restrictive interpretation of Article 22 (1), anyone could blindly “rubber stamp” the profiling made by the algorithm; nothing more would be needed to argue that a human was included in the decision-making process on behalf of the data controller. However, minimal human intervention without real influence on the outcome of the decision does not legitimize automated decision-making under Article 22 (1).[8]

In practice, very few automated systems that produce significant decisions about individuals function without a “human in the loop”.[9] Rather than fully autonomous systems, they are decision support systems. At this point in the evolution of AI, the interesting question is not whether a human was involved, but to what extent the human actually influenced the algorithmic output in the decision-making process, which is difficult to prove in a courtroom.
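
To make the distinction concrete, here is a minimal sketch of a decision-support flow, assuming a hypothetical credit-scoring model. The scoring formula, threshold, and reviewer logic are all invented for illustration; the point is only where the human sits and whether the review is real or a rubber stamp:

```python
def algorithmic_recommendation(applicant: dict) -> str:
    """A hypothetical credit-scoring model (weights are invented)."""
    score = 0.4 * applicant["income_score"] + 0.6 * applicant["credit_history_score"]
    return "approve" if score >= 0.5 else "reject"

def rubber_stamp(applicant: dict, recommendation: str) -> str:
    # A human is formally present but has no real influence on the
    # outcome. Arguably still "solely" automated under Article 22 (1).
    return recommendation

def meaningful_review(applicant: dict, recommendation: str) -> str:
    # The reviewer considers the file and may override the algorithm.
    if recommendation == "reject" and applicant.get("mitigating_circumstances"):
        return "approve"
    return recommendation

applicant = {
    "income_score": 0.3,
    "credit_history_score": 0.4,
    "mitigating_circumstances": True,
}
recommendation = algorithmic_recommendation(applicant)
print(rubber_stamp(applicant, recommendation))       # reject
print(meaningful_review(applicant, recommendation))  # approve
```

The two reviewers produce the same audit trail (“a human signed off”), which is exactly why proving the degree of human influence in a courtroom is so hard.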

“Legal” or “similarly significant” effects

Article 22 (1) is aimed at automatically processed decisions that have serious consequences for the individual. GDPR does not specify in more detail what is meant by legal or similarly significant effects.

In the Article 29 Working Party’s Guidelines on Automated Individual Decision-Making and Profiling, the following examples are mentioned as decisions producing legal effects[10]:

  • cancellation of a contract
  • entitlement to or denial of a particular social benefit granted by law, such as child or housing benefit
  • refusal of admission to a country or denial of citizenship.

Decision-making processes that do not affect a person’s legal rights could still be covered by the scope of Article 22 (1) if they similarly significantly affect the individual. According to Recital 71, examples could be:

  • Automatic refusal of an online credit application
  • E-recruiting practices without any human intervention.

This part of Article 22 (1) is uncontroversial with regard to protecting the rights of the individual.

Right to an explanation

Something to notice about “the 8 fundamental rights” (Articles 13–22), as they are sometimes referred to in “GDPR lingo”: they do not include a “right to an explanation”.

It follows from Recital 71 that automated decision-making based on profiling should be subject to suitable safeguards, “which should include specific information to the data subject and (..) to obtain an explanation of the decision reached after such assessment (..)”. However, recitals are not legally binding. It would be highly controversial to impose fines on data controllers without a clear, unambiguous legal basis (fines go up to €20 million or 4% of the company’s global annual revenue, whichever is higher, cf. Article 83 (5)).[11]
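
For a sense of scale, the “whichever is higher” rule in Article 83 (5) is a simple maximum. A quick sketch, with invented revenue figures:

```python
# Article 83 (5): up to EUR 20 million or 4% of total worldwide annual
# turnover, whichever is higher. Revenue figures below are invented.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

print(max_fine_eur(100_000_000))    # 20,000,000 (the fixed floor applies)
print(max_fine_eur(2_000_000_000))  # 80,000,000 (4% exceeds the floor)
```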

Data controllers always have to comply with GDPR’s fundamental principles of lawfulness, fairness, and transparency. In this context, although GDPR does not directly provide the data subject with a right to an explanation, the individual has the right to be informed about the processing (Articles 13 and 14) and the right to access information (Article 15) when automated decision-making systems are applied by the data controller.

According to Article 13 (2) (f) and 14 (2) (g), the data controller shall provide information about the existence of automated decision-making, including profiling, to the data subject. The controller should “at least in those cases, give meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” (Article 13 (2) (f) and 14 (2) (g)). The data controller has the same duty to provide information when the data subject lodges a request for confirmation as to whether or not personal data concerning them are being processed (Article 15 (1) (h)).
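
What “meaningful information about the logic involved” looks like in practice is not settled. As a purely hypothetical sketch, a controller’s disclosure for an imaginary credit-scoring system might be structured along these lines (GDPR prescribes none of these field names; they only illustrate the kinds of information the provisions call for):

```python
# A purely hypothetical Article 13 (2) (f) / 15 (1) (h) disclosure for
# an imaginary credit-scoring system.
disclosure = {
    "automated_decision_making": True,   # the "existence" of it
    "profiling": True,
    "logic_involved": (
        "Applications are scored on income stability, existing debt, "
        "and repayment history; scores below a threshold are refused."
    ),
    "significance": "The score alone determines whether credit is offered.",
    "envisaged_consequences": (
        "A refusal may lower the applicant's chances with other lenders "
        "that use similar scoring."
    ),
}
```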

Implications

The underlying algorithms of an automated decision system can be very complex. Machine learning is not wholly transparent, not even to the developers, and it may be very difficult, if not impossible, to explain why the system has made a particular decision or to lay out the logic behind it.[12] It therefore appears that decision-making systems with a certain level of autonomy interfere with data controllers’ ability to comply with their obligations under Articles 13, 14, and 15.[13] Furthermore, if the basis of the decision cannot be explained, the data subject is unable to effectively contest the decision, and the data controller is thus unable to fulfill the minimum requirements for appropriate safeguards under Article 22 (3).
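
A small example illustrates the problem. The “credit-scoring” framing below is hypothetical and the data is synthetic; the point is only that an ensemble model yields a decision, and even global feature importances, without any per-decision rationale the controller could hand to the data subject:

```python
# A synthetic illustration of the opacity problem, using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

applicant = X[:1]
decision = model.predict(applicant)[0]  # e.g. 0 = reject, 1 = approve

# The controller can report global behavior, such as the average
# importance of each input feature across the whole ensemble ...
print(model.feature_importances_)

# ... but the specific decision is a majority vote over 300 trees, each
# following its own decision path. No single human-readable rule explains
# why this particular applicant was approved or rejected.
print(decision, len(model.estimators_))
```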

Possibly, the individual’s rights to information and access only guarantee general information about algorithmic systems before a decision has been made, rather than information about how a particular automated decision was generated after the fact.[14]

Generally, the rights to information under Articles 13 and 14 are relevant before data processing, while the data subject can file an access request under Article 15 at any time. However, the phrasing of Article 15 (1) (h) seems to be future-oriented. Under Article 15 (1) (h), the data subject shall have access to information about:

“the existence of automated decision-making, (..) meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

It seems that Article 15 (1) (h) is focused on the planned scope of the decision-making itself, rather than on the circumstances of a specific decision.[15] The forward-looking phrase “envisaged consequences” specifically seems to indicate that the data controller must inform the data subject, prior to the automated decision-making, about the consequences the controller expects the processing to have.[16]

An important loophole to keep in mind comes from the fact that Article 15 provides the data subject with a right to an explanation of the system’s functionality, but not of the automated processes that merely produce evidence for decision-making.[17] As I have pointed out above, these algorithmic processes might not be transparent, not even to the data controller or the developers.

Finally, even access to information about system functionality can be limited by trade secrets and intellectual property rights. GDPR Recital 63 explicitly recognizes that the individual’s rights under GDPR should not adversely affect trade secrets, intellectual property rights, and in particular the copyright protection of software. In line with Recital 63, The Article 29 Working Party states that:[18]

The controller should find simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or disclosure of the full algorithm.

To what precise extent trade secrets and IP rights are allowed to “cover up” the processes behind automated decision-making systems is not clear from the text of GDPR.

Conclusion

In the end, the individual may stand in a situation with no way of properly challenging an algorithmically produced decision.

It is unclear whether Article 22 (1) should be considered a right of the individual or a prohibition against automated decision-making not covered by the exceptions in Article 22 (2) (a-c). If Article 22 (1) is to be read literally, as a right of the individual, the responsibility to object rests entirely on the data subject’s shoulders. If the data subject is not aware of their rights under GDPR or cannot manage to file an access request with the data controller, the automated decision remains unchallenged.

The GDPR does not directly offer a right to an explanation to the data subject. The individual’s rights to access and information are curtailed by the complexity and opacity of modern-day decision-making systems. Very likely, data controllers meet their obligations under GDPR if they provide general information about system functionality prior to the decision-making process.

Subsequent to the decision-making process, a data controller may not be able to explain the rationale behind a specific decision, simply because the controller has no insight into the algorithmic processes. Furthermore, the data subject’s right to access can be restricted by trade secrets and intellectual property rights in the software, which is a legitimate consideration according to GDPR Recital 63.

Under Article 22 (3), the data subject has the right to obtain human intervention on the part of the controller. But how can the data subject be certain that the controller will not just “rubber-stamp” the decision generated by the algorithm? Under GDPR, data controllers have no obligation to provide a detailed explanation of the rationale or circumstances behind the decision.

As the title of this post indicates, I do not believe that the protection GDPR offers individuals against automated decision-making is effective. The legislation is too vague to make any real difference to the data subject. Due to the considerable business benefits that lie in automated decision-making, these algorithmic methods of reaching decisions will likely not subside in the coming years. On the contrary, they will likely become more widespread and common. The implications of this are important to be aware of. There is indeed a risk that algorithmic decision-making systems will increasingly undermine the rule of law until new regulations or further guidelines are laid out by governments or the EU.

*************************

[1] https://www.washingtonpost.com/national/on-leadership/googles-eric-schmidt-expounds-on-his-senate-testimony/2011/09/30/gIQAPyVgCL_story.html (accessed 26–09–2021).

[2] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

[3] Wachter, S., Mittelstadt, B., & Floridi, L. (2017), Why a right to explanation of automated decision-making does not exist in the general data protection regulation, International Data Privacy Law, 7(2), 76–99.

[4] Ibid., pg. 94.

[5] Article 29 Data Protection Working Party (2017), Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, pg. 12.

[6] See Luca Tosoni (2021), The Right to Object to Automated Individual Decisions: Resolving the Ambiguity of Article 22 (1) of the General Data Protection Regulation, University of Oslo Faculty of Law Legal Studies Research Paper Series No. 2021–07.

[7] Veale and Edwards (2018), Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling, Computer Law & Security Review 34 (2018), pg. 400.

[8] Malgieri & Comandé (2017), Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation, International Data Privacy Law, 2017, Vol. 7, No. 4, pg. 252.

[9] Veale and Edwards (2018), pg. 400.

[10] Article 29 Data Protection Working Party (2017), pg. 21.

[11] Wachter, S., Mittelstadt, B., & Floridi, L. (2017), Why a right to explanation of automated decision-making does not exist in the general data protection regulation, International Data Privacy Law, 7(2), pg. 80.

[12] Stefanie Hänold (2018), Profiling and Automated Decision-Making: Legal Implications and Shortcomings, in: Corrales et al. (eds.), Robotics, AI and the Future of Law — Perspectives in Law, Business and Innovation, pg. 149.

[13] Ibid., pg. 143.

[14] Veale and Edwards (2018), pg. 399.

[15] Wachter et al. (2017), pg. 84.

[16] Ibid.

[17] Ibid., pg. 89.

[18] Article 29 Data Protection Working Party (2017), pg. 25.
