The EU AI Act and its Impact on Financial Services Institutions in the Benelux

Posted by: Zaheer Abbas, May 15, 2026

Dhritiman Mukherjee, Managing Partner, Financial Services Industries, DXC Technology

Dr Marc Brogle, Principal Advisor, Modernization & Transformation, DXC Technology

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024; its obligations phase in between February 2025 and August 2026, with some elements extending into 2027. It is the first binding, horizontal AI regulation globally and applies to any Financial Services Institution (FSI) that develops, buys, or uses AI systems in the EU, regardless of where the AI provider is located.

Impact of EU AI Act

The EU AI Act takes a risk-based approach, distinguishing four risk levels: unacceptable risk (prohibited AI practices), high risk, limited risk (subject to transparency obligations), and minimal risk.

AI systems posing unacceptable risk are banned outright. Under Annexes I and III of the AI Act, FSIs must pay particular attention to high-risk applications such as credit scoring, risk assessment and pricing, where flawed models could discriminate against customers. These systems face stringent requirements for transparency, accountability, and bias mitigation.
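The risk-tier logic above can be sketched as a simple lookup. The use cases and their categorisations below are illustrative assumptions, not legal determinations; an actual classification depends on the specific system and its context of use:

```python
# Illustrative mapping of common FSI AI use cases to EU AI Act risk tiers.
# These categorisations are simplified examples for discussion, not legal advice.
RISK_TIERS = {
    "credit_scoring": "high",          # Annex III: creditworthiness assessment
    "insurance_risk_pricing": "high",  # Annex III: life/health insurance pricing
    "customer_chatbot": "limited",     # transparency obligations apply
    "internal_spam_filter": "minimal", # no specific obligations
}

def obligations_for(use_case: str) -> str:
    """Summarise the rough obligation profile for an illustrative use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    summaries = {
        "high": "full Art. 9-15 requirements: risk management, data governance, "
                "documentation, logging, human oversight, robustness",
        "limited": "transparency duties, e.g. disclose AI interaction to users",
        "minimal": "no mandatory obligations; voluntary codes of conduct",
        "unclassified": "requires individual assessment",
    }
    return f"{use_case}: {tier} risk -> {summaries[tier]}"

print(obligations_for("credit_scoring"))
```

In practice this lookup would sit inside an AI inventory, with legal and compliance teams owning the tier assignments.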

Lower-risk applications must also undergo proper assessment, particularly when AI systems interact directly with end users. In such cases, it is essential to clearly inform users that they are engaging with an AI system. Even where the AI Act does not specifically mandate it, such openness is good practice for strengthening the trust of users and customers in lower-risk AI-driven services.

One effective approach is to have personalized financial advice tools provide clear and easily understandable explanations for their recommendations, helping to prevent confusion or misinterpretation. Similarly, AI systems used in investment management should rely on unbiased data and present insights in a transparent and accessible manner, enabling users to make well-informed decisions.
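One way to make recommendations understandable is to surface "reason codes": the features that contributed most to a score, stated in plain terms. The sketch below assumes a simple linear scoring model; the weights and feature names are invented for illustration:

```python
# Minimal "reason codes" sketch for a linear scoring model: rank per-feature
# contributions so the largest drivers of a recommendation can be explained.
# Model weights and applicant values are invented for illustration.

def explain_score(weights: dict, applicant: dict, top_n: int = 2) -> list:
    """Return the top_n feature contributions, largest magnitude first."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{feat}: contribution {val:+.2f}" for feat, val in ranked[:top_n]]

weights = {"income": 0.4, "debt_ratio": -0.8, "years_employed": 0.2}
applicant = {"income": 3.2, "debt_ratio": 2.5, "years_employed": 1.0}
for reason in explain_score(weights, applicant):
    print(reason)
```

Real credit models are rarely this simple, but the same principle (attributing a decision to its strongest drivers) underpins common explainability tooling.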

Benelux FSIs are particularly exposed because:

  • They are heavy users of AI in credit scoring, AML, fraud detection, and customer interaction.
  • They operate cross-border, often acting as both AI deployers and providers.
  • Supervisory scrutiny is traditionally strong (ECB, EBA, national regulators), raising enforcement expectations.

For Benelux FSIs, the EU AI Act is less about stopping AI use and more about professionalising AI governance—embedding transparency, accountability, and control into systems that already sit at the heart of decisions.

Key requirements for high-risk systems

For high-risk systems, the Act (Articles 9-15) requires, among other things:

  • A documented risk management system maintained across the AI lifecycle
  • Data governance ensuring training, validation, and testing data are relevant and representative
  • Technical documentation and automatic logging of the system's operation
  • Transparency and instructions for use for deployers
  • Effective human oversight of the system's decisions
  • Appropriate accuracy, robustness, and cybersecurity

Challenges in Complying with the EU AI Act

  • Data Quality and Bias: Ensuring that AI systems are free from biases and operate on high-quality data. Poor data quality leads to inaccurate outcomes and biases, which the EU AI Act aims to mitigate.
  • Transparency and Explainability: The act requires that AI systems be transparent and explainable, so users understand how decisions are made.
  • Continuous Monitoring: The need for ongoing monitoring and periodic reviews to ensure systems remain compliant requires dedicated resources and advanced monitoring tools.
  • Integration with Legacy Systems: FSIs often run legacy systems that were not designed with these controls in mind, so retrofitting compliance measures can be technically difficult and resource-intensive.
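The data quality and bias point above can be made concrete with a basic fairness metric such as the demographic parity difference: the gap in approval rates between two groups. The records below are synthetic; a real check would use production decisions and a legally reviewed definition of the protected attribute:

```python
# Sketch of a simple fairness check: demographic parity difference between
# the approval rates of two groups. Decision records here are synthetic.

def approval_rate(decisions, group):
    subset = [d["approved"] for d in decisions if d["group"] == group]
    return sum(subset) / len(subset)

def parity_difference(decisions, group_a, group_b):
    """Positive result means group_a is approved more often than group_b."""
    return approval_rate(decisions, group_a) - approval_rate(decisions, group_b)

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

gap = parity_difference(decisions, "A", "B")
print(f"demographic parity difference: {gap:.2f}")
```

A single metric never proves a system is unbiased, but tracking metrics like this over time is one practical ingredient of the continuous monitoring the Act expects.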

Strategic Recommendations

To successfully navigate the EU AI Act, FSIs should consider the following:

  • Conduct Comprehensive Audits: Categorize AI applications by risk levels, conduct thorough audits of AI systems to assess readiness and compliance and document these audits meticulously.
  • Develop Robust Governance Frameworks: Implement a strong governance framework that includes risk management, data governance, and compliance accountability. This framework should continuously evolve based on new regulations and risks.
  • Ensure Transparency and Explainability: Maintain detailed documentation of AI models and clearly communicate AI interactions to users. Implement tools that enhance the explainability of AI decisions.
  • Engage in Continuous Monitoring: Establish mechanisms for real-time monitoring and periodic reviews of AI systems. Develop feedback channels for users to report issues and refine AI systems accordingly.
  • Provide Training and Education: Invest in training programs that cover AI compliance, ethical practices, and technical skills. Ensure that all employees understand the EU AI Act’s requirements and their roles in maintaining compliance.
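The audit and monitoring recommendations above ultimately rest on a maintained inventory of AI systems. A minimal sketch of such a record is below; the field names and the 180-day review window are illustrative assumptions, not a prescribed schema:

```python
# Sketch of an AI system inventory entry supporting audits and periodic
# reviews. Field names and the review window are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    risk_tier: str                 # e.g. "high", "limited", "minimal"
    owner: str
    last_review: date
    open_findings: list = field(default_factory=list)

    def review_overdue(self, today: date, max_age_days: int = 180) -> bool:
        """Flag systems whose last review is older than the policy window."""
        return (today - self.last_review).days > max_age_days

inventory = [
    AISystemRecord("credit-scoring-v3", "high", "retail-risk", date(2025, 9, 1)),
    AISystemRecord("support-chatbot", "limited", "cx-team", date(2026, 4, 20)),
]

overdue = [r.name for r in inventory if r.review_overdue(date(2026, 5, 15))]
print("reviews overdue:", overdue)
```

In a real governance framework, high-risk systems would typically get a shorter review window than limited-risk ones, and overdue flags would feed the escalation and feedback channels described above.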

Conclusion

The EU AI Act presents both challenges and opportunities for FSIs in Benelux. By understanding and adhering to its requirements, FSIs can use the Act as a catalyst for innovation and ethical AI deployment. They must act now to align their AI strategies with regulatory demands: start with thorough audits of AI systems, establish stringent governance frameworks, and invest in continuous monitoring and staff training. Proactive measures today will ensure compliance and pave the way for ethical and transparent AI implementation.

DXC Technology plays a crucial role in supporting FSIs through this transition. With our expertise in AI, risk, and compliance, DXC offers comprehensive solutions to help FSIs navigate the complexities of the EU AI Act. From conducting readiness assessments and developing GRC frameworks to providing training and ongoing compliance monitoring, DXC ensures that FSIs are well-equipped to meet the Act's requirements and leverage AI's transformative potential.