Note: This article is a transcript of a presentation given at the London STAC Summit in May 2015. It reflects the vision we had at that time of the potential impact of Big Data on Capital Markets. The focus of our R&D is to identify innovative technologies that create a competitive advantage for financial actors.
Today’s FO Operational Challenge
For years, banks have invested heavily in technology to keep pace with the acceleration of Capital Markets: the exponential development of products, growing volumes, lower latency, and an ever-wider diversity of data sources.
To stay competitive, banks must now introduce new services based on a global view of the markets, federating all their structured and unstructured datasets.
In this new environment, Front-Office executives should be able to have their global position valued in real time, so they can efficiently manage balance sheet and collateral in compliance with risk policy. Intraday profitability indicators and customer interests will join credit lines, LVA and CVA in the sales arsenal.
With such a system, enriched with data from strategists and social media, trading, sales and risk teams will be able to act proactively on market events and trends such as the Swiss franc/euro "Black Thursday" or the Ukrainian and Greek crises.
Given the incredible complexity of their information systems, banks face a tough decision: rebuild their expensive front-office architecture or fall behind the competition.
Our Big Data platform is based on open-source technology and offers a cost-effective alternative for banks' digital transformation.
Our platform is fully integrated with HBase, the NoSQL wide-column data store of the Hadoop ecosystem. It gives development and business teams a flexible, agile way to deliver Big Data projects.
This extreme flexibility comes from an OLAP cube and a dynamic data schema that allow live reports with a 360° view of positions to be built directly on the production environment, over very large datasets.
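As a rough illustration of the cube idea, the sketch below aggregates a handful of hypothetical trade records along arbitrary dimensions. The field names and figures are invented; the platform described here runs this kind of roll-up distributed over HBase, not in a Python dictionary.

```python
from collections import defaultdict

# Hypothetical trade facts; in the platform these would live in HBase.
trades = [
    {"desk": "FX", "book": "EUR", "instrument": "EURUSD", "notional": 5.0, "pnl": 0.12},
    {"desk": "FX", "book": "CHF", "instrument": "EURCHF", "notional": 3.0, "pnl": -0.40},
    {"desk": "Rates", "book": "EUR", "instrument": "Bund", "notional": 10.0, "pnl": 0.05},
]

def rollup(rows, dims, measures):
    """Aggregate the measures along the requested dimensions (one cube 'view')."""
    cells = defaultdict(lambda: defaultdict(float))
    for row in rows:
        key = tuple(row[d] for d in dims)
        for m in measures:
            cells[key][m] += row[m]
    return {k: dict(v) for k, v in cells.items()}

# The same facts, sliced on demand by any dimension: the "360° view".
by_desk = rollup(trades, ["desk"], ["notional", "pnl"])
by_book = rollup(trades, ["desk", "book"], ["notional"])
```

Because the dimensions are just a list of field names, a new attribute added by the dynamic schema becomes sliceable without any migration.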
The solution leverages the in-memory distribution framework of HBase for storage and computation.
A low-latency bus based on Apache Kafka delivers positions and analytics extremely fast.
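The bus itself is Apache Kafka; the toy class below only illustrates the topic-based publish/subscribe pattern Kafka implements, in-process and without any of Kafka's partitioning, persistence or client API. All names here are invented for the illustration.

```python
import queue

class MiniBus:
    """Toy topic-based bus: each subscriber gets its own queue and every
    message published on a topic is fanned out to all of that topic's
    subscribers. This is NOT the Kafka API, just the pattern."""
    def __init__(self):
        self._topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self._topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        for q in self._topics.get(topic, []):
            q.put(message)

bus = MiniBus()
positions = bus.subscribe("positions")   # e.g. a trading-desk consumer
bus.publish("positions", {"book": "FX", "delta": 1.5})
update = positions.get_nowait()
```

In the real architecture the queue is a partitioned, replicated Kafka topic, which is what makes the same pattern scale across a cluster.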
The solution is fully scalable: no locks have been introduced into the system. A read-isolation mechanism ensures the external consistency of distributed transactions.
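A minimal way to picture lock-free read isolation is a multi-version store in which each reader pins a snapshot timestamp: writers append new versions without blocking, and a reader never sees a commit made after its snapshot. The sketch below is a single-process simplification of that idea, not the platform's actual mechanism.

```python
class VersionedStore:
    """Multi-version store sketch: reads against a pinned snapshot are
    isolated from later writes, so readers and writers never block each other."""
    def __init__(self):
        self._data = {}   # key -> list of (commit_ts, value), append-only
        self._ts = 0

    def write(self, key, value):
        self._ts += 1
        self._data.setdefault(key, []).append((self._ts, value))

    def snapshot(self):
        return self._ts

    def read(self, key, snapshot_ts):
        # Latest version committed at or before the snapshot.
        for ts, value in reversed(self._data.get(key, [])):
            if ts <= snapshot_ts:
                return value
        return None

store = VersionedStore()
store.write("position:FX", 100)
snap = store.snapshot()            # a running report pins this snapshot
store.write("position:FX", 250)    # later write, invisible to that report
frozen = store.read("position:FX", snap)
latest = store.read("position:FX", store.snapshot())
```

HBase's own cell timestamps and MVCC follow the same append-only principle, which is what makes this style of isolation cheap in a wide-column store.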
The technical architecture for FO on-demand analysis puts into action many core components of the Hadoop ecosystem:
- Connectors such as Sqoop or Kafka Connect that trigger the search and indexation engine, based on Lucene, for automatic classification and correlation;
- A dynamic data schema with schema-on-read and schema-on-write strategies for fast storage and instant access to data. Full coherency of the system is ensured at all times by the read-isolation mechanism, which protects processes from one another;
- A low-latency bus and an OLAP cube for real-time, distributed position processing inside your Hadoop cluster;
- A push architecture and REST services that stream data up to consumers such as trading desks, microservices or Excel.
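The two schema strategies in the list above can be contrasted in a few lines: schema-on-write types and validates a record once at ingestion, while schema-on-read stores the raw payload untouched and projects a schema only at query time. The record layout and field names below are hypothetical.

```python
import json

RAW = '{"trade_id": "T1", "notional": "5.0", "currency": "EUR"}'

def ingest_with_schema(raw):
    """Schema-on-write: coerce and validate once, at ingestion time.
    Fast, typed reads afterwards, but changing the schema means migrating data."""
    rec = json.loads(raw)
    return {
        "trade_id": str(rec["trade_id"]),
        "notional": float(rec["notional"]),
        "currency": str(rec["currency"]),
    }

def read_with_schema(raw, fields):
    """Schema-on-read: store bytes as-is and project a schema per query,
    so new fields never require a migration of stored data."""
    rec = json.loads(raw)
    return {f: rec.get(f) for f in fields}

typed = ingest_with_schema(RAW)                       # strongly typed record
view = read_with_schema(RAW, ["trade_id", "notional"])  # ad-hoc projection
```

Combining both, as the platform does, buys fast access paths for known hot fields while keeping the agility to query anything that landed in the raw store.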
We offer a wide range of components that integrate seamlessly into your Hadoop cluster. These components aim to close the gap between open-source frameworks and the expectations of the financial industry.
Beyond FO use cases, As-Of-Date queries, full audit, the dynamic schema, index and search, and machine learning are a perfect fit for compliance and anti-fraud requirements.