
The Impact of the 2024 EU AI Act on the Financial Services Industry

August 12, 2024
6 min. read

AI is here, and it's real. It's rapidly weaving itself into the very fabric of our lives. In many ways, AI already serves as an invisible architect of our daily experiences, and the parts of society it hasn't yet touched will likely feel its influence soon.

This transformative technology promises to restructure every facet of our societies. Yet with such staggering power comes equally monumental risk. It's therefore no surprise that the European Union has stepped forward, recognising the urgent need to regulate this digital juggernaut before it outpaces our ability to govern it.

Driven by these swift advancements and potential risks, the European Commission has established a comprehensive regulatory framework. First proposed in 2021, this framework culminated in the adoption of the EU AI Act on March 13, 2024. Having entered into force on August 1, 2024, this landmark legislation aims to address the complex challenges posed by AI while safeguarding innovation and protecting fundamental rights.

This piece of legislation is a regulation, meaning that the same standards apply across all EU member states. It covers all AI systems that are used, or whose outputs are used, within any of the 27 member states of the European Union.

The EU AI Act has a broad scope that extends beyond just AI providers. Companies that use AI systems internally, even if they don't develop or sell AI products, also have a range of obligations under this legislation. 

Importantly, the Act's reach isn't limited to organisations based within the EU. Any company doing business in the EU market, regardless of its home country, must comply with these regulations if it uses or provides AI systems.

This means that organisations all over the world, not only within the EU, are potentially impacted by the EU AI Act. The global nature of modern business, combined with the EU's significant market size, ensures that this legislation will have far-reaching effects on AI governance and usage across many industries and countries.

Various industries will have to adapt to the requirements laid out in the EU AI Act. The financial services sector is no exception: it will feel the impact of both the AI revolution and the newly adopted Act, and will have to adapt its practices in order to comply.

What challenges lie ahead for compliance officers under the AI Act? Is there cause for concern? And what does this new regulation mean for European financial institutions? This article offers valuable insights for the financial services sector in Europe.

Risk-based approach

The EU AI Act employs a risk-based framework to regulate AI systems across Europe. Under this framework, AI systems are categorised according to their associated levels of risk towards the end-user, which are delineated into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Table 1 provides an overview of this risk classification scheme.

Table 1: Risk classification under the AI Act

Risk classification

How are these risk categories defined? And what falls into these categories for financial institutions? 

Unacceptable Risk

AI systems in this category pose a fundamental threat to the basic human rights and values of the European Union and are banned outright. Social scoring, which categorises people based on their social behaviour, is a clear example of a practice that is entirely prohibited under the AI Act.

High Risk

This category of AI systems requires the most stringent compliance measures under the EU AI Act. These are systems that have a large potential impact on health, safety or fundamental rights of EU citizens and businesses.

For such high-risk AI systems, organisations must perform a comprehensive risk assessment and maintain an in-depth risk mitigation plan. It is crucial to ensure transparency, accuracy and fairness within these AI systems.

The compliance requirements of high-risk AI systems include all of the following points: 

  • Keeping detailed documentation of the AI system
  • Ensuring a high quality of data input into the AI system
  • Maintaining technical documentation of the AI system
  • Logging all processes and events related to the AI system
  • Registering the system with the relevant AI body in the member state
  • Meeting transparency and information-provision obligations towards users
  • Implementing a robust mechanism for human oversight

A good example of this type of risk is the use of AI-powered systems in internal hiring processes, such as the screening of job applications.
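To make these obligations more tangible, the sketch below shows one way a deployer might structure an internal record that ties the documentation, data-quality, logging, registration and oversight points above to a concrete system. This is purely a hypothetical illustration; the Act does not prescribe any schema, and every field and function name here is our own assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HighRiskSystemRecord:
    """Hypothetical internal compliance record for a high-risk AI system."""
    system_name: str                 # e.g. "cv-screening-model-v3"
    intended_purpose: str            # documented purpose of the system
    data_quality_checks: list[str]   # evidence of input-data quality controls
    technical_docs_uri: str          # pointer to maintained technical documentation
    registered_with_authority: bool  # registration status with the national body
    human_oversight_contact: str     # person accountable for human oversight
    event_log: list[dict] = field(default_factory=list)

    def log_event(self, event: str, **details) -> None:
        """Append a timestamped entry, reflecting the logging obligation."""
        self.event_log.append(
            {"ts": datetime.now(timezone.utc).isoformat(), "event": event, **details}
        )

# Illustrative usage for the hiring example above:
record = HighRiskSystemRecord(
    system_name="cv-screening-model-v3",
    intended_purpose="Rank incoming job applications for recruiter review",
    data_quality_checks=["deduplication", "bias audit 2024-Q2"],
    technical_docs_uri="https://intranet.example/docs/cv-screening",
    registered_with_authority=True,
    human_oversight_contact="hr-ai-oversight@example.com",
)
record.log_event("inference", applicant_id="A-123", outcome="shortlisted")
```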

Limited Risk

AI systems in this category have a limited impact on end-users, and the EU AI Act imposes only a transparency requirement on them: the end-user must be informed that they are interacting with an AI system and how it is being used. An AI-powered chatbot is a typical example of a limited-risk system under the Act.
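As a minimal illustration of this transparency duty, a chatbot front end could disclose the use of AI before the first exchange. The wording and function below are our own sketch; the Act requires the disclosure itself, not any particular implementation.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI-powered assistant. "
    "Ask at any time to be transferred to a human colleague."
)

def start_chat_session(send) -> None:
    # Inform the end-user up front that they are interacting with AI,
    # which is the core limited-risk transparency requirement.
    send(AI_DISCLOSURE)

start_chat_session(print)  # demo: prints the disclosure before any chat turns
```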

Minimal Risk

AI systems in this category have little to no impact on the end-user and are left unregulated; adhering to a voluntary code of conduct is sufficient to comply with the EU AI Act. Examples include spam filters and automated emailing systems.
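Putting the four categories together, a first-pass internal triage could look like the toy mapping below, built only from the examples named in this article. Real classification requires legal analysis of each system; the mapping and the conservative fallback are illustrative assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "unregulated; code of conduct suffices"

# Illustrative mapping of the use cases named in this article.
EXAMPLE_CLASSIFICATION = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening / hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "insurance risk assessment": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up a known use case; default unknown systems to HIGH pending review."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.HIGH)

print(triage("credit scoring").value)  # -> "extensive compliance obligations"
```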

Timeline

The AI Act was formally adopted by the European Parliament on March 13, 2024, after the Parliament had agreed its negotiating position in June 2023. It officially entered into force on August 1, 2024.

The following compliance deadlines apply to companies:

  • 2 February 2025 - 6 months after entry into force:
    • Unacceptable-risk AI systems are completely prohibited.
  • 2 August 2025 - 12 months after entry into force:
    • Each member state appoints competent authorities to oversee AI systems.
    • All obligations take effect for providers of General-Purpose AI (GPAI) models.
  • 2 February 2026 - 18 months after entry into force:
    • The Commission implements post-market monitoring measures.
  • 2 August 2026 - 24 months after entry into force:
    • All obligations for high-risk AI systems apply.
    • Member states implement penalties.
    • Each member state must have at least one regulatory sandbox.

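The milestones follow a simple pattern: each application date falls one day after adding the stated number of months to the August 1, 2024 entry into force. The sketch below reproduces the dates listed above; it assumes the third-party python-dateutil package for month arithmetic.

```python
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

def milestone(months_after: int) -> date:
    # The Act's application dates land one day after the plain month offset.
    return ENTRY_INTO_FORCE + relativedelta(months=months_after) + timedelta(days=1)

for months, label in [
    (6, "prohibited practices banned"),
    (12, "authorities appointed; GPAI obligations apply"),
    (18, "post-market monitoring measures"),
    (24, "high-risk obligations, penalties, sandboxes"),
]:
    print(milestone(months), "-", label)
# 2025-02-02, 2025-08-02, 2026-02-02, 2026-08-02
```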
The complete timeline for the Act's implementation and rollout, extending until mid-2026, is detailed in Table 2. This table outlines the subsequent phases and key steps involved in the Act's deployment.

Table 2: Timeline of the EU AI Act

Challenges

The main challenge posed by the EU AI Act lies in accurately determining the risk category and level for each internal process within organisations. This entails a thorough understanding of the classification criteria and an in-depth analysis of the potential implications for compliance and operational adjustments. Organisations must rigorously evaluate their AI systems to ensure they meet the regulatory standards required by the EU AI Act.

The second challenge is the assessment of the compliance requirements laid out in the Act. The Commission has opted for a self-assessment model in several areas. While this approach offers flexibility, it also introduces a degree of ambiguity in interpreting and applying the compliance measures.

Implications for Financial Institutions

Financial institutions are expected to be significantly affected by the EU AI Act. The regulation specifically mentions two AI applications in the financial services industry that will require special attention:

  • Credit scoring: systems that analyse the creditworthiness of customers using AI algorithms
  • Risk assessment tools for the insurance industry 

Both of these applications can be considered high-risk and consequently require the extensive compliance measures laid out in the AI Act.
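For a credit-scoring deployment, the human-oversight obligation implies, among other things, that the model cannot be the final word on an application. The routing pattern below is one possible way to operationalise this; the threshold band, field names and function are illustrative assumptions, not requirements from the Act.

```python
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    model_score: float      # e.g. estimated probability of default
    decision: str           # "approve", "decline" or "manual_review"
    reviewed_by_human: bool

REVIEW_BAND = (0.15, 0.35)  # illustrative uncertainty band around the cut-off

def route_application(applicant_id: str, model_score: float) -> CreditDecision:
    """Decide clear-cut cases automatically; escalate borderline ones to a human."""
    low, high = REVIEW_BAND
    if low <= model_score <= high:
        return CreditDecision(applicant_id, model_score, "manual_review", True)
    decision = "approve" if model_score < low else "decline"
    return CreditDecision(applicant_id, model_score, decision, False)

print(route_application("A-123", 0.22).decision)  # -> "manual_review"
```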

In addition to the direct implications for financial services, financial institutions must also consider the broader use of AI applications within large organisations that may intersect with their day-to-day operations but are not unique to the financial sector. 

For instance, AI-driven tools used for HR processes, such as candidate screening, or internal systems powered by artificial intelligence, also fall under the scope of the EU AI Act. These AI applications, while not exclusive to financial services, must still adhere to the regulatory standards established by the Act, ensuring compliance across all AI-driven operations within financial institutions.

Associated fines

Non-compliance with completely prohibited AI practices may result in fines of up to €35 million or 7% of annual global turnover, whichever is higher.

Violations related to high-risk AI systems may result in fines of up to €15 million or 3% of annual global turnover, whichever is higher.

If organisations supply incomplete, incorrect or misleading information to authorities, they may be fined up to €7.5 million or 1% of annual worldwide turnover, whichever is higher.

For startups and SMEs, the same fine brackets apply, but the lower of the two possible amounts is enforced rather than the higher.
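In concrete terms, the "whichever is higher" rule works as in the short worked example below; for SMEs the comparison simply flips to "whichever is lower". The function is our own illustration of the arithmetic, not legal advice.

```python
def applicable_fine(violation: str, global_turnover_eur: float, sme: bool = False) -> float:
    """Maximum fine under the brackets above (illustrative arithmetic only)."""
    brackets = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_violation": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed_cap, pct = brackets[violation]
    turnover_based = pct * global_turnover_eur
    # Standard rule: whichever is higher; SME rule: whichever is lower.
    return min(fixed_cap, turnover_based) if sme else max(fixed_cap, turnover_based)

# A firm with €1bn global turnover using a prohibited practice:
# max(€35m, 7% of €1bn = €70m) -> €70m maximum exposure.
print(applicable_fine("prohibited_practice", 1_000_000_000))
```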

Each Member State of the EU is responsible for implementing its own sanctions policies and enforcement measures.

How to become compliant?

Becoming compliant with the EU AI Act is a challenge for organisations. Especially for organisations using or deploying AI systems that are considered by this regulation to be high-risk, the compliance requirements are extensive and complex.

At Sailpeak, we have developed our own framework through which we help institutions become fully compliant with the EU AI Act in five steps:

  1. Identifying, assessing and mapping all internal systems and processes that use artificial intelligence, and prioritising the associated risks (see the sketch after this list).
  2. Implementing risk mitigation strategies based on each system's risk category as delineated in the EU AI Act.
  3. Communicating internally on the mitigation strategies and establishing clear reporting standards.
  4. Monitoring the implementation and effectiveness of the framework and continuously improving internal AI governance.
  5. Rolling out a robust AI governance and management framework and strategy across the entire organisation.
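As a simple illustration of the first step, an organisation might begin from a machine-readable inventory of its AI-powered processes and sort it by risk tier. The structure below is a sketch under our own assumptions and is not part of the Sailpeak framework itself.

```python
# Hypothetical starting point for step 1: an inventory of AI-powered
# processes, prioritised by risk tier (0 = most urgent to address).
PRIORITY = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}

inventory = [
    {"process": "cv screening",     "risk": "high"},
    {"process": "customer chatbot", "risk": "limited"},
    {"process": "spam filtering",   "risk": "minimal"},
    {"process": "credit scoring",   "risk": "high"},
]

for item in sorted(inventory, key=lambda i: PRIORITY[i["risk"]]):
    print(f'{item["process"]:20s} -> {item["risk"]}')
```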

Moving forward

The passage of the EU AI Act represents a critical milestone for the innovation, development, and deployment of artificial intelligence within the European Union. Organisations from all over the world are impacted by this regulation. 

Financial institutions will have to implement rigorous standards and stringent oversight in order to shape the future of AI within their organisation. Compliance with the AI Act requires robust governance frameworks, transparency and human oversight.

As artificial intelligence becomes indispensable in the financial services industry, proactive compliance and a culture of accountability are crucial. By embracing these principles, financial institutions can not only meet regulatory requirements but also leverage AI's full potential, emerging as industry leaders. The time to act is now.

Need our help?

Sailpeak is here to support your organisation in achieving compliance with the EU AI Act. Our expert team will guide you through the implementation process, ensuring all necessary steps are taken to meet regulatory requirements.

Reach out to us today, and we will promptly respond to assist you.
