On February 13, the CEO of Confiance IA had the pleasure of participating in the panel "Audit and Governance of Machine Learning Models" during the 6th edition of HEC Forecast, organized by the Data Science Committee of HEC Montréal.
This particularly interesting panel brought together a range of perspectives and areas of expertise:
- Marie-Pierre Habas-Gerard, CEO of the industrial consortium Confiance IA
- Lise-Estelle Brault, Senior Director, Data, Digital Transformation, and Innovation, Autorité des marchés financiers
- Alexandre Bercovy, President of the Montreal chapter of ISACA
- Moderator: Michael Albo, CEO, Data Science Institute
The panel pursued several objectives: reflecting on governance frameworks for AI, particularly machine learning; considering all the disciplines required to design these models; and examining the AI lifecycle within the financial industry. The panelists discussed the minimum governance and technology-mastery criteria that financial organizations should meet, as well as how and why the AMF aims to leverage these new technologies.
The emergence of regulatory frameworks for AI systems (the Artificial Intelligence Act in Europe, the ISO/IEC 42001 standard internationally) also adds constraints for the financial industry in terms of risk classification, transparency requirements, and traceability. So how can we remain agile in this context, given that artificial intelligence is not yet an exact science, while the financial industry, like others, wants to put AI into production?
One very interesting issue addressed during the panel was responsibility for deploying an AI model: business teams, as owners of automated processes, are expected to manage the risks of these models, while compliance teams may lack knowledge of these subjects, so responsibility is often transferred to data scientists and IT teams. How, then, can we improve this much-needed continuum in a daily routine often confined to silos and poor communication between teams? And what characterizes an effective control system within organizations?
It is now crucial for organizations to qualify, and even quantify, their level of transparency toward consumers and the public by promoting the auditability and interpretability of recommendations produced by AI-based solutions. Likewise, the security and cybersecurity of AI solutions must be strengthened systematically and in a controlled manner from the design phase onward. Cybersecurity is inseparable from AI engineering, just as it should be from any software engineering cycle.
Feel free to contact us to learn more about Confiance IA’s pillars in robust, secure, sustainable, responsible, and ethical AI. We remain open to your ideas or projects in Artificial Intelligence.
Thanks to HEC Montréal for the invitation, and thanks to the panelists and Michael Albo for the richness and quality of the discussions.