Compliance through Assessing Fundamental Rights: Insights at the Intersections of the European AI Act and the Corporate Sustainability Due Diligence Directive


Introduction

Artificial intelligence systems, including general-purpose AI systems (GenAI), have become commonplace. The rapid commodification of generative models over the last year is only one of the factors that pushed European lawmakers to accelerate the adoption of the European approach to AI regulation, primarily represented by the AI Act. Unlike other sections of the Regulation, such as those outlining prohibited uses of AI systems, which had been included since the European Commission's initial proposal, norms specific to GenAI systems were introduced by the European Parliament in the version published just before the start of the trilogue negotiations in June 2023. With the provisions governing GenAI systems under the AI Act set to take effect twelve months after the Regulation's enactment, companies have already taken proactive steps to prepare for compliance.

However, the political rush to expand the scope of fundamental rights protection in the AI Act has raised questions about the compliance process. The AI Act is oriented not only towards organisational and technical standards but also towards the protection of fundamental rights. The Fundamental Rights Impact Assessment (FRIA) is only one example of this process, requiring deployers of AI systems to adopt a different way of balancing risks in the digital age. Understanding this evolution is also relevant with respect to the broader normative framework stemming from the adoption of Directive (EU) 2024/1760 on corporate sustainability due diligence (CS3D). The Directive is meant to address the adverse impacts of corporate activities on human rights (and the environment) by ensuring that EU companies operate responsibly throughout their own operations and their entire supply chains. In particular, the Directive, which also applies to the private technological sector, imposes on in-scope companies the duty to undertake human rights due diligence in order to identify, assess and manage the risks of human rights violations. This due diligence process also encompasses a human rights impact assessment (HRIA) methodology, which companies, including those in the AI sector, have to perform in order to identify and prevent human rights infringements in their business operations. While the Directive represents a crucial step towards more sustainable business practices, compliance by corporate actors poses several challenges. Companies will need to invest in developing robust due diligence systems, which may require significant resources and expertise. These efforts will be needed to ensure effective supply chain mapping and stakeholder engagement, particularly for complex global supply chains. As a result, assessing and deciding how to protect the fundamental rights and public interest goals safeguarded by these two instruments requires careful evaluation of how to comply with this broader and evolving regulatory environment.

Identifying AI Systems

At its core, the AI Act provides a structured framework to oversee the development, deployment, and use of AI systems across various sectors. It categorises AI systems into different risk levels, with stricter requirements for those deemed high-risk as defined in Chapter III. Indeed, the AI Act adopts a pyramid approach by identifying prohibited, high-risk, and less risky uses, also covering GenAI systems. Notably, different sets of compliance measures are requested according to the area of risk in which an AI system is catalogued. As a matter of fact, some obligations apply only to ‘high-risk’ systems, meaning those listed in Art. 6 and Annex III of the AI Act.

Central to the Act is also the classification of GenAI systems, which are defined as “an AI system which is based on a general-purpose AI model, and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems” (Art. 3, No. 66). Interpreting and applying regulations specifically tailored to GenAI systems presents multifaceted challenges due to their inherently broad applicability. GenAI models, while essential components of GenAI systems, do not constitute AI systems on their own: they require additional components, such as a user interface, to become fully functioning AI systems. The rules of the AI Act apply to GenAI models only when these models are embedded within AI systems. Nonetheless, the obligations for GenAI models (with and without systemic risk) continue to apply alongside those for AI systems.

The obligations for providers of GenAI models apply once these models are placed on the market and continue to apply even when the models are integrated into an AI system. Specifically, if a provider integrates its AI model into an AI system and makes it available on the market, the model is deemed to have been placed on the market. These obligations do not apply when the model is used exclusively for internal processes that neither affect the rights of individuals nor serve to provide products or services to third parties.

By contrast, GenAI models with systemic risks are always subject to the obligations under the AI Act due to their potentially significant adverse effects (Rec. 112). This interpretation is rooted in the very definition of GenAI pursuant to Art. 3 para. 1 no. 66: “an AI system which is based on a general-purpose AI model, and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems”. Hence, first and foremost, GenAI systems are indeed AI systems, and when a general-purpose AI model is integrated into or forms part of an AI system, this integrated system should be regarded as a general-purpose AI system if it gains “the ability to serve a variety of purposes” from the integration (Rec. 100).
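
To make this reading easier to follow, the purely illustrative Python sketch below encodes the scope of the model-level obligations as described above: placement on the market, the carve-out for purely internal use, and the systemic-risk override. All names and the simplified logic are hypothetical and serve only as a reading aid, not as a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class GenAIModelSituation:
    """Hypothetical, simplified description of a GenAI model's circumstances."""
    placed_on_market: bool   # the model is made available on the market, including as part of an AI system
    internal_use_only: bool  # used exclusively for internal processes, without affecting third parties' rights
    systemic_risk: bool      # classified as a GenAI model with systemic risk

def model_obligations_apply(m: GenAIModelSituation) -> bool:
    """Illustrative reading of when the AI Act's model-level obligations are triggered."""
    if m.systemic_risk:
        # Models with systemic risk are always subject to the relevant obligations (Rec. 112).
        return True
    if m.internal_use_only:
        # Purely internal use that neither affects individuals' rights nor serves third parties is carved out.
        return False
    # Otherwise, obligations attach once the model is placed on the market; integrating the model
    # into an AI system that is made available on the market counts as placement.
    return m.placed_on_market

# Example: a model embedded in a chatbot product sold on the market
print(model_obligations_apply(GenAIModelSituation(True, False, False)))  # True
```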

AI systems falling within the category of high-risk systems, including high-risk GenAI systems, are subject to extensive obligations, among which is the FRIA. This assessment requires certain public and private actors acting as deployers to consider the impact of AI systems on fundamental rights.

The Fundamental Rights Impact Assessment

The FRIA is a prognostic evaluation requiring the deployer to consider “the impact on fundamental rights that the use of such system may produce” (Art. 27 para. 1). Put differently, the FRIA, drawing on previous experience with the Data Protection Impact Assessment (DPIA) under the GDPR, establishes a mechanism for the deployer to evaluate ex ante, before the AI system is deployed, any risks to fundamental rights. Indeed, the FRIA is an assessment conducted prior to the use of the AI system and intended to frame the risks posed by that system. This measure not only aims to protect individuals from potential harm but also seeks to build public trust in AI systems by ensuring that they are deployed ethically and responsibly.

Even though the FRIA is an important proactive measure designed to prevent potential abuses and unintended negative consequences of AI deployment, the AI Act only provides general minimum requirements. The true aim of this assessment is to create a system of accountability, as the DPIA does under the GDPR. Specifically, the deployer, namely the entity using an AI system under its authority, is required to provide documentation that contains the following (a minimal illustrative sketch of such a record follows the list):

  • Description: Detailed explanation of the AI system and its intended use.
  • Affected Individuals: Identification of individuals or groups affected by the AI system.
  • Risk Assessment: Analysis of potential risks to fundamental rights.
  • Mitigation Measures: Strategies to mitigate identified risks.
  • Human Oversight: Measures for human oversight and control of the AI system.
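
Purely as an illustration of how a deployer might keep track of these documentation items, the hypothetical Python sketch below models them as a simple record with a naive completeness check. The class and field names are invented for this example and do not correspond to the official template that the AI Office is expected to provide.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FRIARecord:
    """Hypothetical record mirroring the minimum documentation items of Art. 27 AI Act."""
    system_description: str               # the AI system and the deployer's processes in which it is used
    intended_purpose: str                 # intended purpose, period and frequency of use
    affected_groups: List[str]            # natural persons and groups likely to be affected
    identified_risks: List[str]           # specific risks of harm to fundamental rights
    mitigation_measures: List[str]        # measures to be taken if risks materialise,
                                          # including governance and complaint mechanisms
    human_oversight_measures: List[str]   # human oversight and control arrangements

    def is_complete(self) -> bool:
        """Naive check: every documentation item must be filled in before deployment."""
        return all([
            self.system_description,
            self.intended_purpose,
            self.affected_groups,
            self.identified_risks,
            self.mitigation_measures,
            self.human_oversight_measures,
        ])
```

In practice, of course, the adequacy of each item is a matter of substantive legal evaluation rather than a simple completeness check.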

As further specified by Rec. 96, the impact assessment should identify the deployer’s relevant processes in which the high-risk AI system will be used in line with its intended purpose, and should include a description of the period and frequency in which the system is intended to be used. Relying on the documentation shared by the provider, the deployer needs to identify the natural persons and groups of people that might be affected by the use of the system. Deployers should also determine the measures to be taken in the case of the materialisation of those risks. Furthermore, according to Rec. 96, the deployer should arrange a review process for handling such potential infringements of rights, providing natural persons with the safeguard of human oversight and “complaint handling and redress procedures, as they could be instrumental in mitigating risks to fundamental rights in concrete”. This process aims to actively empower individuals in the direct protection of their rights, and therefore requires careful design of this phase of compliance.

Once the deployer performs the assessment, its results should be reported to the market surveillance authority and updated if necessary (Art. 27, para. 3). By imposing these stringent requirements, the EU underscores the importance of accountability and transparency in the use of AI technologies. The obligation to report these assessments to the market surveillance authority also ensures oversight and that any identified risks are systematically addressed. This approach reflects a broader commitment to integrating ethical considerations into the development and deployment of AI, aligning with the EU’s wider agenda of promoting trustworthy AI.

Since the AI Act does not provide thorough guidance on how to conduct the assessment, as was the case for the DPIA, it will be crucial to consult the guidelines and templates elaborated by the European authorities: in the case of the FRIA, the AI Act assigns this competence to the AI Office (Art. 27, para. 5). Hence, the provision of a standardised template for these assessments by the AI Office is a strategic move to ensure consistency and comprehensiveness in the evaluations. This will likely facilitate a more streamlined and uniform approach across different sectors, making it easier for public actors and organisations to comply with the regulations. Nonetheless, as scholarly analysis has pointed out, the option of creating standardised templates should be treated with adequate care, given the over-reaching and fuzzy conceptualisation of fundamental rights, especially when such standards are created only by engineering and computer science experts, without the assistance of lawyers with expertise in the human rights field.

To improve assessments like the FRIA, it is crucial to integrate diverse backgrounds and expertise. This approach ensures that such tools are not detached from the fundamental rights legal framework and do not become mere checkboxes devoid of engagement with technical complexities. Combining legal, technical, and ethical perspectives can produce a comprehensive assessment methodology that genuinely addresses the nuanced impacts of AI systems on fundamental rights, fostering a more effective and human-centric approach. Other aspects to be taken into consideration revolve around the development of robust methodologies for conducting the FRIA, emphasising risk quantification and management, as well as complementarity with other risk assessments, such as the DPIA, and maintaining a consistent approach to risk management across the AI value chain.

Guiding the Fundamental Rights Assessment

The introduction of the FRIA in the AI Act stems from the need to balance innovation against the risks for fundamental rights. It represents a significant advancement in the regulatory landscape and sets a precedent for how AI should be managed and monitored. However, the haste with which this provision was introduced, its actual connection with the rest of the Regulation (whose core, as mentioned above, lies in product safety regulation), and the way in which designated private and public actors will have to carry it out in practice are all questions that will have to be answered within a year of the entry into force of the AI Act.

First, the AI Act does not mention the complex framework of fundamental rights that Member States need to take into account, given that all of them are also signatories to the European Convention on Human Rights (ECHR). This reflects the need to balance fundamental rights in European constitutionalism, which stems not only from the constitutional approach of the Charter but also from the Convention and the Council of Europe’s Framework Convention on AI. Additionally, the absence of any explicit mention of the criteria by which such rights should be balanced in their application may prove problematic.

The second issue is determining who should conduct the assessment. According to the AI Act, the deployer must conduct the assessment before putting the system into service. In particular, this ex-ante evaluation is the responsibility of deployers of high-risk systems in the situations listed under Article 27. Moreover, the Act also establishes that, if the deployer is required to perform a DPIA, the FRIA shall complement it, thus intertwining the two compliance measures. This is a significant potential drawback of the AI Act for at least two reasons. First, unlike a data controller, the deployer does not participate in the design and development phases, which are carried out by the provider. Second, the roles of deployer and provider might become confused with the roles defined under the General Data Protection Regulation and their respective assessment methods. As a matter of fact, the obligation to perform these assessments often falls on the deployer, especially if classified as a “data controller” under data protection law, making it responsible for conducting the DPIA as well. However, if a provider has significant control over an AI system, as with foundation models, data protection authorities might classify the provider as a data controller or potentially a joint controller with the deployer. This classification carries additional responsibilities and obligations under data protection law.

Implications for GenAI

As mentioned above, GenAI systems can be considered to fall within the ‘high-risk’ classification if they are used in the areas listed as such by Art. 6 and Annex III of the AI Act. Specifically, the conformity and mitigation assessment under the AI Act should be conducted to address any ‘systemic risk’. To understand the purpose of such an analysis, it is crucial to read Art. 55 para. 1 lett. b) together with the definition of ‘systemic risk’ provided by Art. 3. Through this lens, it becomes clear that the conformity assessment under Art. 55 must address “any actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”.

Hence, this conformity assessment, which also addresses the impact on fundamental rights, must be carried out when GenAI is used in high-risk AI systems or when GenAI models pose a systemic risk. While the provider of GenAI posing a systemic risk must complete a ‘conformity assessment’ under Article 55, the deployer will also be required to perform a FRIA if the AI is applied for a purpose classified as high-risk under Annex III. The AI Act thus mandates a conformity assessment focused primarily on the AI model itself rather than on its specific application. A double track of assessment would therefore take place, in which the provider, on the one hand, and the deployer, on the other, assess the potential impact of GenAI on fundamental rights. The actual necessity of both instruments will have to be assessed in concrete terms since, in view of the reference in Art. 3 no. 65, the two appear prima facie similar.
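
The following hypothetical snippet simply restates this double track in code form: the provider's Art. 55 duties hinge on the model's systemic-risk classification, while the deployer's FRIA hinges on a high-risk use under Art. 6 and Annex III. The function and parameter names are invented for illustration only.

```python
from typing import List

def required_assessments(model_has_systemic_risk: bool, use_is_high_risk: bool) -> List[str]:
    """Illustrative mapping of the 'double track' of assessments discussed above."""
    duties: List[str] = []
    if model_has_systemic_risk:
        # Provider side: assessment and mitigation of systemic risks (Art. 55).
        duties.append("provider: systemic-risk assessment and mitigation (Art. 55)")
    if use_is_high_risk:
        # Deployer side: fundamental rights impact assessment (Art. 27).
        duties.append("deployer: FRIA (Art. 27)")
    return duties

# Example: a systemic-risk GenAI model deployed for an Annex III (high-risk) purpose
print(required_assessments(True, True))
```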

On closer inspection, however, the FRIA provides a more precise qualification of what is to be considered a risk to fundamental rights. This assessment ensures that any use of GenAI respects fundamental rights by identifying and mitigating potential risks to those rights. The FRIA is a preventive measure intended to ensure that the deployment of GenAI does not infringe individual rights or societal norms, and it establishes, albeit in need of further specification, an open-ended list of elements to be taken into consideration as parameters.

As an ex-ante instrument, the FRIA will need to be complemented by the ‘conformity assessment’ conducted by the provider in order to address the actual risks and harms to the various categories of people affected. It would be ideal for the provider and the deployer to establish a dialogue throughout the AI value chain to evaluate and respond to the “measures to be taken in case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms,” as per Article 27, paragraph 1, letter f). Without significant collaboration between the provider and the deployer, it would be difficult to obtain a comprehensive view of the risks potentially driven by GenAI. This is crucial not only because of the inherent complexity of these models but also to ensure that risks are managed effectively at both the model and application levels.

Human Rights Due Diligence and the AI Sector

While under the AI Act the obligation to conduct a FRIA applies to deployers of high-risk AI systems in order to manage significant ethical and legal challenges, particularly concerning the protection of fundamental rights, under the CS3D EU companies, whatever their sector of operation, are the addressees of a horizontal mandatory due diligence duty to proactively investigate their own impacts, including those occurring along their supply chains.

The Directive (Art. 1) enshrines obligations for companies regarding actual and potential adverse impacts on human rights and the environment, with respect to: a) their own operations; b) the operations of their subsidiaries; and c) the operations carried out by their business partners in the chains of activities of those companies. Art. 3 defines ‘chain of activities’ as covering, on the one hand, the activities of a company’s upstream business partners related to the production of goods or the provision of services by that company, including the design, extraction, sourcing, manufacture, transport, storage and supply of raw materials, products or parts of products and the development of the product or the service, and, on the other hand, the activities of a company’s downstream business partners related to the distribution, transport and storage of a product of that company, where the business partners carry out those activities for the company or on behalf of the company. Discharging human rights due diligence obligations (Art. 5) implies that in-scope companies shall, among other things (see the illustrative sketch after the list):

  • integrate due diligence into their policies and risk management systems;
  • identify and assess actual or potential adverse impacts and, where necessary, prioritise them;
  • prevent and mitigate potential adverse impacts, bring actual adverse impacts to an end and minimise their extent;
  • provide remediation;
  • carry out meaningful engagement with stakeholders;
  • establish and maintain a notification mechanism and a complaints procedure;
  • monitor the effectiveness of their due diligence policy and measures;
  • publicly communicate on due diligence.
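
Again purely for illustration, the hypothetical sketch below represents these Art. 5 steps as an enumeration that a compliance workflow could track; the names and the structure are invented and not drawn from the Directive or any official guidance.

```python
from enum import Enum, auto
from typing import List, Set

class DueDiligenceStep(Enum):
    """Hypothetical enumeration of the core CS3D due diligence steps (Art. 5)."""
    INTEGRATE_INTO_POLICIES = auto()
    IDENTIFY_AND_ASSESS_IMPACTS = auto()
    PRIORITISE_IMPACTS = auto()
    PREVENT_AND_MITIGATE = auto()
    PROVIDE_REMEDIATION = auto()
    ENGAGE_STAKEHOLDERS = auto()
    NOTIFICATION_AND_COMPLAINTS = auto()
    MONITOR_EFFECTIVENESS = auto()
    COMMUNICATE_PUBLICLY = auto()

def outstanding_steps(completed: Set[DueDiligenceStep]) -> List[DueDiligenceStep]:
    """Return the due diligence steps not yet documented as completed."""
    return [step for step in DueDiligenceStep if step not in completed]

# Example: only policy integration and impact identification documented so far
print(outstanding_steps({DueDiligenceStep.INTEGRATE_INTO_POLICIES,
                         DueDiligenceStep.IDENTIFY_AND_ASSESS_IMPACTS}))
```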

The Directive thus incorporates the core minimum components of human rights due diligence as set out in the UN Guiding Principles on Business and Human Rights, relying on an HRIA, on the adoption of the necessary measures stemming from the assessment, on their monitoring, and on communication to the public.

Provided that they fall within the ratione personae scope of the Directive (Art. 2), companies operating in the AI sector are thus fully encompassed by the obligation to perform human rights due diligence. This means, first, that the CS3D covers situations concerning the development of AI technologies where, for example, companies must demonstrate that they have taken all appropriate measures to protect the human rights that may be impaired by the use of AI, algorithms, machine learning, and so on. It also means that the Directive applies to the deployers of high-risk AI systems. Simply put, in the absence of coordination between the two pieces of legislation, the latter risk being subject to a double impact assessment obligation: one under the AI Act and a second under the CS3D.

Another core point at the intersection of the AI Act and the CS3D is the role attributed to having in place the policies and processes through which AI companies can ‘know and show’ that they respect human rights. ‘Showing’, in particular, involves communication to the public, providing a measure of transparency and accountability to individuals or groups who may be impacted and to other relevant stakeholders, including investors. AI companies, indeed, must not only carry out human rights due diligence, but also map their chain of activities and publicly disclose in their annual statements (Art. 16) relevant information “on the matters covered by the Directive”. This includes disclosing, inter alia, names, locations, and the types of products and services supplied, and informing the public as to who their subsidiaries, suppliers and business partners are. From this perspective, requiring companies developing or deploying AI technologies to carry out due diligence on their human rights impact represents a pivotal measure that helps them increase the ‘explainability’ of their AI solutions and, accordingly, can help fill a systemic gap in current AI regulation efforts: namely, the lack of sufficient information and transparency available to the wider public as to the capabilities and impacts of AI.

Finally, non-compliance by AI companies with the due diligence obligations laid down by the CS3D may give rise to administrative liability, with the imposition of penalties (Art. 27), as well as to civil liability where the company “intentionally or negligently” failed to prevent and mitigate impacts, or to bring impacts to an end and minimise their extent, and where this led to damage (Art. 29). Again, these provisions risk overlapping with those of the AI Act, which establishes a system of penalties for infringements of the Regulation (Art. 99). On closer inspection, there is a risk of a cumulative effect, at the very least of the two penalty regimes, which may result in an infringement of the ne bis in idem principle. In this respect too, the two pieces of legislation will need to be coordinated in order to avoid excessive burdens for companies and/or violations of the rule of law in the EU legal system.

Conclusions

The AI Act and the CS3D impose stringent obligations on providers and deployers of AI systems and GenAI models, including extensive evaluations, risk assessments, and incident reporting. This legal framework, oriented towards the protection of fundamental rights, may be seen as overly burdensome, but it aims to strike a balance between the development of AI technologies and the need to take fundamental rights into account. Nonetheless, this approach requires a dynamic and interdisciplinary system of compliance, involving different kinds of expertise and continuous monitoring of processes. As the AI landscape continues to evolve, this regulatory framework will be essential in fostering a responsible and ethical deployment of AI technologies. The successful implementation of the AI Act will depend on effective collaboration between providers, deployers, experts and regulatory bodies, as well as on the continuous refinement and adaptation of compliance systems. In addition, the implementation of corporate human rights due diligence and the FRIA requires a certain amount of coordination, both at the level of national legal systems and at the level of company processes, in order to avoid overlapping and duplicated obligations, as well as to resolve possible conflicting provisions. Finally, the very methodology to be used to carry out human rights due diligence and the FRIA still needs to be clarified. Bringing them into one coherent legal framework may well help companies in the AI sector to implement effective corporate processes for discharging their human rights obligations.