
This article was first published 2nd Feb 17 on https://www.btrm.org

Treasury Tech. Part 1: Is Your Treasury Data Science Ready?

Welcome to Part 1 of a three-part series of Treasury Notes focused on Financial Services Technology (FinTech) and how developing trends will present opportunities and challenges for Treasury to consider. Our monthly series will cover Data Science, AI, financial services Start-Ups, Blockchain and how these interlink with regulation to require Treasury to up its tech game.

First up is data science.

We live in an information age, and we work in an information industry. At its core, banking has always been about having an edge in the use of technology and data to manage business risk, income, the balance sheet, clients and so on. Until a decade ago banks were at the leading edge, but as the pace of technology development has increased exponentially and our business has become more complex and silo-focused (by acquisition or organisation), we no longer lead the pack.

Regulatory developments since the global financial crisis further demand that banks source, manage and report data more efficiently, and in some instances (clearing, PSD2) will require banks' data to be opened up to other market participants. These issues in particular could present challenges for liquidity management and client retention, and in some instances might prompt a strategic decision around certain business areas.

The changes in our industry simply mean that Treasury is now closer and more directly linked to client activity at a granular level, so data analytics play an increasing role and add increasing value.

Treasury thrives on data. As practitioners we not only use it to measure our liquidity, capital and regulatory position (e.g. types of liabilities, stickiness, HQLA requirements), but with advances in data science we can now apply advanced techniques to identify changes in behaviour as early warning indicators and use predictive analytics to better manage risk.

To achieve this we need to Access, Clean, Expose and Analyse the data available inside our organisation – the trouble begins when we realise that our technology architecture has grown up in a way that makes this extremely difficult. Data may be held in siloed systems, it may be aggregated by current processes and thereby lose its valuable granular detail, or it might be duplicated, stale or incomplete – or a department may simply refuse to share its data bank.

What’s needed to address these issues is a “data community” to help harness the skills and power inside an organisation. The community may guide practices around the capture, quality, storage and use of the wealth of insight and information for the good of the whole firm. This would include setting standards for systems procurement, data privacy and data science approaches and, vitally, creating an environment for experimentation.

Treasury Wins – The Whole Team Wins

In Treasury the use of dashboards to track movements across the balance sheet has increasingly become BAU. Feeds from core banking systems can identify – although often not without human / Excel intervention – behaviours and flows on an aggregate basis for each client and liability type. These dashboards are used by Treasury not only to manage daily flows but also to budget the balance sheet activity required in the coming days, weeks or cyclical period. This is an information-based business management tool – great – but large unexpected or uninformed moves can still happen, or a change in a client, segment, product or country can creep up unnoticed until an issue already exists.
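
As a minimal sketch of the aggregation that sits behind such a dashboard feed – assuming a hypothetical core banking extract with illustrative file and column names (value_date, client_id, liability_type, balance_change) – a few lines of Python can produce the aggregate daily view without Excel intervention:

```python
import pandas as pd

# Hypothetical daily extract from a core banking system; the file and
# column names are illustrative only.
flows = pd.read_csv("daily_flows.csv", parse_dates=["value_date"])

# Aggregate movements by date and liability type – the raw material for a
# Treasury balance sheet dashboard.
daily_view = (flows
              .groupby(["value_date", "liability_type"], as_index=False)["balance_change"]
              .sum()
              .pivot(index="value_date", columns="liability_type", values="balance_change"))

print(daily_view.tail())  # the most recent days of aggregate flows per liability type
```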

Applying consistent standards across an organisation, driven by your “data community”, can help advance these dashboards by applying deep learning and big data techniques to spot individual client behaviours that differ from “usual” and “learn” to look out for these being repeated in a pattern that could alert Treasury to changes in quality, price or stickiness, and even suggest the root cause (e.g. client industry segment, location, change in ratings outlook, interest rates etc.).
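
Even a far simpler technique than deep learning illustrates the idea. The sketch below – again assuming a hypothetical per-client balance history with illustrative file and column names – flags days on which a client’s balance change departs sharply from its own recent norm, the kind of signal an early warning dashboard could surface:

```python
import pandas as pd

# Hypothetical per-client daily balances; file and column names are assumptions.
balances = pd.read_csv("client_balances.csv", parse_dates=["value_date"])

def flag_unusual(client, window=30, threshold=3.0):
    """Flag days where a client's balance change departs from its own recent norm."""
    change = client["balance"].diff()
    z = (change - change.rolling(window).mean()) / change.rolling(window).std()
    return client.assign(z_score=z, unusual=z.abs() > threshold)

alerts = (balances
          .sort_values("value_date")
          .groupby("client_id", group_keys=False)
          .apply(flag_unusual))

# Candidate alerts for Treasury to investigate.
print(alerts.loc[alerts["unusual"], ["client_id", "value_date", "z_score"]])
```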

Imagine the benefits to a funding plan and liquidity management approach of observing this activity in real time. It enables improved preparedness by being hugely better informed, without the need for many Excel spreadsheets and reliance on human data mapping.

Becoming more effective at cost and risk management across capital and liquidity is possible, and increasingly better-informed strategic plans may then be formulated.

Where to start?

We use the term “data community” to encourage the crowd-sharing of best practice and ideas across your organisation. Treasury should be a key partner in this community, providing a guide on all aspects of balance sheet and planning considerations. You may find that the community reveals that advanced data science and analytics techniques already exist within your company – but have to date only been applied to a very specific task. Your community should encourage these resources to contribute to solving problems, but you shouldn’t expect them to be able to solve everything.

Take a look at the Trading businesses, which have developed rapidly over the last decade in their use of algorithmic models and systems (think FX and rates businesses) applied to risk and pricing techniques internally and for clients. Delve into your client businesses in corporate and retail banking: their data might include deep analytics which spot changes in client behaviour or circumstances to trigger marketing calls or risk indicators (think payroll changes, mortgage offers, deposit balance changes etc.).

You may find that small teams of data scientists reside in these business lines who, with the right support and encouragement, can share experience and know-how to truly transform how your organisation Accesses, Cleans, Exposes and Analyses data.

The data community should include members who are responsible for compliance and regulatory implementation. These officers can ensure that you are up to speed on the mountain of requirements, and you may discover that the approach they require to meet regulatory standards surfaces data of a quality that is of use to the whole community; once exposed, data scientists can very rapidly develop models which may fit multiple business needs.

What do our four steps mean in practice?

Access

Identify where your data is held, in what format, and how accessible it is. You will likely find that in some instances there is a completeness or duplication question to which a solution may be applied. The data community should establish standards for future platform architecture, data formats etc. to ensure accessibility improves over time (within the bounds of data privacy requirements).
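
A first pass at this can be very lightweight. The sketch below – assuming a hypothetical extracted data set with an illustrative file name – profiles a single source for row counts, duplicates and missing values:

```python
import pandas as pd

# Hypothetical extract from one source system; the file name is an assumption.
deposits = pd.read_csv("deposit_extract.csv")

# A simple profile answers the first Access questions: how much data is there,
# how much of it is duplicated, and what is missing?
profile = {
    "rows": len(deposits),
    "duplicate_rows": int(deposits.duplicated().sum()),
    "missing_by_column": deposits.isna().sum().to_dict(),
}
print(profile)
```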

Clean

Where necessary, adopt techniques that clean, organise or complete data sets without requiring an expensive (and human-resource-heavy) data warehouse approach. In this step, identify processes, data sets and EUCs that can be replaced in future.
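
As an illustration – again assuming hypothetical file and column names – a short, repeatable script can do much of this without a warehouse build:

```python
import pandas as pd

# Hypothetical raw extract; file and column names are assumptions.
deposits = pd.read_csv("deposit_extract.csv", parse_dates=["value_date"])

cleaned = (deposits
           .drop_duplicates()                                        # remove exact duplicates
           .assign(currency=lambda d: d["currency"].str.strip().str.upper())
           .dropna(subset=["client_id", "balance"])                  # drop rows missing key fields
           .sort_values(["client_id", "value_date"]))

# A clean, versionable file the data community can be pointed at – and a
# candidate replacement for the EUC that used to produce it.
cleaned.to_csv("deposit_extract_clean.csv", index=False)
```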

Expose

Expose your first, experimental data set to your community (again with data privacy boundaries considered). This enables the community to learn what is now accessible and how it might be used across the organisation. New models can then be applied.

Analyse

Draw upon data science experts (internal or external), working with Treasury and the business, to establish models that not only seek to increase the efficiency of existing business processes but also surface new, informative dashboard outputs that may not previously have reached the risk owner.
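
To make the step concrete, here is one hedged sketch of an “Analyse” model: an unsupervised outlier detector (scikit-learn’s IsolationForest) run over hypothetical per-client features derived from the cleaned data. The feature and file names are assumptions for illustration, not a prescribed method:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical per-client features built from the cleaned extract; column
# names (e.g. avg_balance, balance_volatility, outflow_ratio) are illustrative.
features = pd.read_csv("client_features.csv", index_col="client_id")

# An unsupervised model flags clients whose recent behaviour looks atypical
# relative to the rest of the book – candidates for the risk owner's dashboard.
model = IsolationForest(contamination=0.02, random_state=0)
features["atypical"] = model.fit_predict(features) == -1

print(features.loc[features["atypical"]].head())
```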

Don’t try to change the world in a day. Multi-year, multi-track projects of the last decade or so tell us that we will likely be disappointed by the results, as the needs of the organisation or the available technologies may have moved on before longer-term projects are implemented. Encourage the community to experiment: find a data set that is accessible and clean enough today to expose to an analytics model, and learn from this. The whole community will benefit, and experimentation will surface good and bad approaches quickly.

A larger community, drawing on external partners to test and apply data science techniques, can be a very rewarding approach. This reduces overhead, scales up computational power, slices through potentially conflicting internal priorities (after all, we all have lots on!) and could also inform your in-house community of the very latest techniques in data science applications. There are some great service providers that work on bespoke projects, and others that run AI hubs into which you may throw test cases to prove your models. All are worthy of consideration.

Another benefit of collaborating with vendors or FinTech start-ups which offer specific solutions is that it enables access to skill sets that may not currently be available inside your business unit or organisation. In future, banks, and Treasury, will increasingly add these engineering resources to the team – but the power brought by collaboration with the external community, as well as the internal one, should be high on the agenda.

Treasury can benefit massively from applying data science and machine learning techniques. They can help provide insights faster and earlier than ever, and reduce manual work by applying coded models in place of EUCs. Becoming future-proof, by enabling very rapid development of new techniques to adapt to a changing environment, will become BAU.

Is your Treasury data science ready?

About the author

David Castle is the Managing Partner at Waltham Partners Ltd

[email protected]


© www.btrm.org
