Dhritiman Mukherjee, Managing Partner, Financial Services Industries, DXC
Technology

Dr Marc Brogle, Principal Advisor, Modernization & Transformation, DXC
Technology
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024; its obligations phase in between February 2025 and August 2026, with some elements extending to 2027. It is the first binding, horizontal AI regulation globally and applies to any Financial Services Institution (FSI) that develops, buys, or uses AI systems in the EU, regardless of where the AI provider is located.
Impact of the EU AI Act
The EU AI Act takes a risk-based approach, distinguishing between four risk levels: unacceptable risk (prohibited AI practices), high risk, limited risk (transparency obligations), and minimal risk.
AI systems posing unacceptable risk are banned from use by FSIs. Under Annexes I and III of the AI Act, FSIs must pay particular attention to high-risk AI applications involving credit scoring, risk assessment, and pricing, which could potentially discriminate against customers. These systems face stringent requirements to ensure transparency, accountability, and bias mitigation.
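The tiering described above can be pictured as an internal triage step: each AI use case in an FSI's inventory is mapped to a risk level that drives the depth of review. The sketch below is a hypothetical Python helper for such an inventory; the tier names follow the Act, but the specific use-case mappings and the function name `triage` are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"   # banned practices, e.g. social scoring
    HIGH = "high"                 # Annex III uses, e.g. creditworthiness assessment
    LIMITED = "limited"           # transparency duties, e.g. chatbots
    MINIMAL = "minimal"           # everything else

# Illustrative mapping of FSI use cases to assumed risk tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "insurance_risk_pricing": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "internal_spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case. Unknown systems
    default to HIGH so they receive the strictest review rather
    than slipping through unassessed."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative design choice: under-classifying a credit-scoring system is far costlier than over-reviewing a spam filter.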
Lower-risk applications must also undergo proper assessment, particularly when AI systems interact directly with end users. In such cases, it is essential to clearly inform users that they are engaging with an AI system. Even where the AI Act does not specifically mandate this, it is good practice for strengthening user and customer trust in lower-risk AI-driven services.
One effective approach is to have personalized financial advice tools provide clear and easily understandable explanations for their recommendations, helping to prevent confusion or misinterpretation. Similarly, AI systems used in investment management should rely on unbiased data and present insights in a transparent and accessible manner, enabling users to make well-informed decisions.
Benelux FSIs are particularly exposed. For them, the EU AI Act is less about stopping AI use and more about professionalising AI governance: embedding transparency, accountability, and control into systems that already sit at the heart of financial decisions.
Key requirements for high-risk systems
Challenges in Complying with the EU AI Act
Strategic Recommendations
To successfully navigate the EU AI Act, FSIs should consider the following:
Conclusion
The EU AI Act presents both challenges and opportunities for FSIs in Benelux. By understanding and adhering to its requirements, FSIs can leverage it as a catalyst for innovation and ethical AI deployment. They must act now to align their AI strategies with regulatory demands: start with thorough audits of AI systems, establish stringent governance frameworks, and invest in continuous monitoring and staff training. Proactive measures today will ensure compliance and pave the way for ethical and transparent AI implementation.
DXC Technology plays a crucial role in supporting FSIs through this transition. With our expertise in AI, risk & compliance, DXC offers comprehensive solutions to help FSIs navigate the complexities of the EU AI Act. From conducting readiness assessments and developing GRC frameworks to providing training and ongoing compliance monitoring, DXC ensures that FSIs are well-equipped to meet the Act’s requirements and leverage AI’s transformative potential.