Bias, transparency and explainability: the main challenges for ‘artificial intelligence’ in financial services

Artificial intelligence – or, to be more precise, automation based on natural language processing, deep learning and machine learning – is one of the most beneficial and promising technologies for almost every part of the economy. Advanced analytics, effective and precise prediction, robotics and fraud prevention are important areas that may (and will) be game changers for institutions that are open to innovation. We all know that deploying new solutions requires budget, infrastructure and other resources for effective implementation. In addition – and we cannot forget this – other challenges, such as legal and regulatory issues, will emerge during the process.

Widely understood artificial intelligence is not yet fully regulated; however, at the European Union level there is a debate as to whether (and how) to ‘adjust’ legislation and regulation to this emerging technology. The debate is quite interesting, and its outcome is promising. Both the European Commission and the Council (with the European Parliament) are proposing certain actions (soft law, high-risk AI regulation or EP resolutions) to find a proper solution. For now, however, we are in a state of uncertainty for AI systems, as we do not even know how to apply existing regulation (liability, general requirements).

Financial services, and banking activity in particular, are highly ‘sensitive’, involving personal data, money and questions of systemic importance. A more tailored approach is therefore not only desirable but inevitable. The European Commission has acknowledged this in its Digital Finance Strategy:

‘As a result, the Commission will invite the ESAs and the ECB to explore the possibility of developing regulatory and supervisory guidance on the use of AI applications in finance. This guidance should follow the upcoming proposal for a new regulatory framework for AI planned in 2021’

Challenges are now, not tomorrow

Wider application of AI within the financial services sector will be a big challenge on many levels, not only for financial institutions but also for supervisors (regulators). If we look at the proposal for the digital operational resilience regulation, we will see that many of its requirements will also ‘touch’ AI systems. Adding NIS2 and cybersecurity threats and challenges makes it even more interesting. Nor can we forget about product liability, which, according to the European Commission and the European Parliament, should ‘fit’ AI issues as well.

This is not the end. More issues emerge once we add the regulatory component. Starting with the European Banking Authority and its two documents:

1.    report on Big Data and Advanced Analytics and

2.    draft guidelines on loan origination and monitoring,

we will get a picture of the potential challenges that financial institutions will have to face on their road to digital transformation. This is, however, just one slice of a much bigger cake. We also have Regulation 2017/589, which imposes additional organizational requirements on investment firms involved in algorithmic trading. Some supervisory authorities (including the Polish Financial Supervision Authority) have decided to ‘add more’ for robo-advisory providers.

More and more services are using sophisticated algorithms to get better results (take a look at this paper by the Bank for International Settlements on the correlation between machine learning and better credit scoring) while not always ensuring a sufficient level of protection (of customers and of the institutions themselves). Why? Because not everything is clear.

Are we done yet?

Not yet. In the case of automation and (especially) profiling, the most important things are transparency, explainability and non-discrimination (avoiding algorithmic bias). Many levels of EU regulation require entities to apply these rules (requirements?) irrespective of the technology used and the service provided.

When it comes to AI, the challenge is even greater, as soft law from the European Commission and other bodies and institutions clearly recommends applying only TRUSTWORTHY AI. I do not want to elaborate on the many documents ‘produced’ by the EU (ENISA, data protection bodies, the EP and so on) and the Council of Europe, but one thing can be said – algorithms used for commercial and non-commercial purposes should enable:

1.    Revision of the decision process – explainability.

2.    Identification of the data used for a particular decision and for training – traceability.

If you are looking for more details, please click here for the assessment list for trustworthy AI.
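To make traceability a bit more concrete, here is a minimal sketch of what a decision log could look like. This is purely illustrative – the field names, the function and the hashing step are my own assumptions, not requirements taken from any regulation or guideline:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version, features, decision, log):
    """Record everything needed to revisit an automated decision later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the decision
        "features": features,            # the exact inputs used
        "decision": decision,            # the outcome, e.g. approve / reject
        # a hash of the inputs lets an auditor check the record was not altered
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
    }
    log.append(record)
    return record

audit_log = []
entry = log_decision("credit-model-1.2",
                     {"income": 52000, "tenure_years": 4},
                     "approved", audit_log)
```

The point of the sketch is simply that explainability and traceability are design decisions: if the model version and the inputs behind each decision are not captured at the moment the decision is made, no later review can reconstruct them.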

In addition, an algorithm should not create a risk of bias or discrimination. Recently published guidelines by the Council of the European Union (among others) highlight the need for non-discriminatory operation of AI systems. This can be achieved – partially – by applying the above-mentioned elements of transparent AI together with constant monitoring, but it will not always work, as advanced algorithms may learn quickly and in surprising ways.
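One way to make such monitoring concrete – purely as an illustration, not a method endorsed by any of the guidelines cited above – is to track a simple fairness metric, such as the gap in positive-decision rates between two groups (often called demographic parity difference):

```python
def demographic_parity_difference(outcomes, groups):
    """Gap in positive-outcome rates between the two groups in `groups`.

    outcomes: list of 1 (positive decision) or 0 (negative decision)
    groups:   list of group labels, one per outcome
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles exactly two groups"
    rates = []
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# 75% approvals for group A vs 50% for group B -> a gap of 0.25
gap = demographic_parity_difference(
    [1, 1, 1, 0, 1, 0, 1, 0],
    ["A", "A", "A", "A", "B", "B", "B", "B"])
```

A metric like this only flags a disparity; it does not explain it, and – as noted above – a model that looks fair today may drift as it keeps learning, which is why the monitoring has to be constant rather than one-off.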

We could also add other elements (challenges) such as cybersecurity and data protection, but these will be part of the next article.

What should institutions do? And what should authorities do?

Ethics by default and by design. All principles of trustworthy AI should be part of each (and every early) step of implementing AI systems. Robust organizational and technical safeguards should also be included. Ongoing monitoring will be inevitable.

The authorities’ perspective is even more interesting. I bet that in the future the SREP (Supervisory Review and Evaluation Process) will also include audits of AI systems. This will require authorities to provide and secure resources – not only budget but also people and expertise in new technologies, including legal and regulatory issues. This will be a big challenge for everyone but – without a doubt – one with benefits for institutions, supervisors and customers.

All opinions expressed herein are solely mine.
