June 22, 2017
Forget Big Data; You Need Fast Data
You need to be able to apply big data analytics in near-real or even real time while engaging with a customer or another computer.
In 1989, Queen released a very successful single called “I Want It All.” The opening repeats the song title twice, then changes subtly to “and I want it now!” This could be a battle cry for today’s fast-moving society.
We’ve all come to expect a rapid response to our requests for service, and we’ve become impatient with those who can’t deliver. We even watch kettles heat up and wonder why they take so long to boil, and we stand and complain about queue lengths.
Whereas consumers might take some comfort (or the opposite) in knowing that most companies they deal with hold vast amounts of data about them, all of this data is historic and, actually, very little is used productively. Yet we are increasingly engaged in real-time conversations with companies, via a mobile app, our PCs or the good old-fashioned telephone, providing real-time data about a need or a problem. So why aren’t companies, by and large, capturing and acting on that data in real time while they are interacting with their customers? The simple explanation is that acting on data captured in real time is beyond the means of most of the systems built by these companies, and it’s not a trivial matter to change, given that this inevitably means tinkering with legacy systems.
See also: Producing Data’s Motion Pictures
But there is a solution in sight, and it’s called “fast data.”
Fast data is the application of big data analytics to smaller data sets in near-real or real time to solve a problem or create business value while engaging with a customer or another computer. Fast data is not a new idea, but embracing it is becoming increasingly important.
A Fast Data Architecture
What high-level requirements must a fast data architecture satisfy? They form a triad:
- Reliable data ingestion.
- Flexible storage and query options.
- Sophisticated analytics tools.
The components that meet these requirements must also be reactive, meaning they:
- Scale up and down with demand.
- Are resilient against the failures that are inevitable in large distributed systems (we don’t want any failures in autonomous cars!).
- Always respond to service requests, even if failures limit the ability to deliver services.
- Are driven by messages or events from the world around them.
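The triad above can be sketched in a few lines of code. This is a minimal, illustrative sketch only: an in-memory queue stands in for a real ingestion layer (such as Kafka), a dictionary stands in for the storage/query layer, and a pluggable function stands in for the analytics tooling. All class, field and function names here are invented for illustration.

```python
from collections import deque

class FastDataPipeline:
    def __init__(self, analytics_fn):
        self.stream = deque()             # reliable ingestion: buffered events
        self.store = {}                   # flexible storage: queryable by customer
        self.analytics_fn = analytics_fn  # sophisticated analytics: pluggable

    def ingest(self, event):
        """Message-driven: events from the outside world enter here."""
        self.stream.append(event)

    def process(self):
        """Drain buffered events, store each one, and run analytics on it."""
        results = []
        while self.stream:
            event = self.stream.popleft()
            self.store.setdefault(event["customer"], []).append(event)
            results.append(self.analytics_fn(event, self.store))
        return results

# Example analytics: flag any transaction over a threshold.
def flag_large(event, store):
    return {"customer": event["customer"], "alert": event["amount"] > 1000}

pipeline = FastDataPipeline(flag_large)
pipeline.ingest({"customer": "alice", "amount": 1500})
pipeline.ingest({"customer": "bob", "amount": 20})
results = pipeline.process()
print(results)
# → [{'customer': 'alice', 'alert': True}, {'customer': 'bob', 'alert': False}]
```

The point of the sketch is the shape, not the parts: each component could be swapped for a production-grade equivalent without changing how the three responsibilities fit together.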
The chart below shows an emerging architecture that can meet these requirements.
The good news is that you can graft such an architecture on top of legacy systems, which is exactly what ING has been doing.
Unlocking valuable intelligence
Back in the halcyon days, banks were very close to their customers. They knew customers intimately and treated them personally. With the proliferation of customers, products and channels, though, this intimacy has been lost. ING wanted to recapture the “golden era” with a global strategy to make the bank more customer focused, “mobile first” and altogether more helpful.
A typical bank these days captures and processes billions of customer requests, instructions and transactions. In doing so, it captures and stores vast amounts of customer data – but, and here’s the startling truth, few (if any) of the major banks use this data effectively for the benefit of their customers.
ING appointed a manager of fast data, Bas Geerdink, to address this problem. His broad international remit is to create a truly customer-friendly, omni-channel experience. To kick-start this process, he turned his attention to ING’s vast but disparate data stores, as he was convinced they could unlock valuable intelligence. Historical data can often reveal customer behaviors and trends that are crucial to predictive analytics. For example, past data can be used to plot future pressure points on personal finances – e.g., key payment events can be anticipated and mitigated with predictive analytics.
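The "pressure point" idea can be made concrete with a small, hypothetical example: given the dates and amounts of a recurring payment, predict when the next one falls due and flag a shortfall against the current balance. The field names and the simple averaging heuristic are assumptions for illustration, not ING's actual method.

```python
from datetime import date, timedelta

def predict_next_payment(history):
    """history: list of (date, amount) pairs for one recurring payment."""
    dates = sorted(d for d, _ in history)
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    avg_gap = sum(gaps) // len(gaps)                    # typical days between payments
    avg_amount = sum(a for _, a in history) / len(history)
    return dates[-1] + timedelta(days=avg_gap), avg_amount

def shortfall_alert(history, balance):
    """Anticipate the next payment and warn if the balance won't cover it."""
    due, amount = predict_next_payment(history)
    return {"due": due, "amount": amount, "alert": balance < amount}

# Three months of rent payments, and a balance that won't cover the next one.
rent = [(date(2017, 3, 1), 900.0), (date(2017, 4, 1), 900.0), (date(2017, 5, 1), 900.0)]
print(shortfall_alert(rent, balance=400.0))
```

A real predictive model would weigh many more signals, but even this toy version shows how purely historical data yields a forward-looking, actionable alert.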
However, mining this data presents major challenges. Most banks are hampered by disparate and disconnected legacy applications that cannot operate in real time. Confronted with this dysfunctional problem, ING made some fundamental decisions:
- Create a single, secure data lake.
- Employ a variety of open source technologies (along the lines of those shown in the chart above). These technologies were used to build the over-arching notifications platform to enable data to be captured and acted on in real time.
- Work with the legacy application teams to ensure that critical events (during a customer’s “moment of truth”) are notified to this fast data platform.
- Trigger two vital platform responses:
  a. Instantly contact the customer to establish whether help is urgently needed (for example, to complete a rapid loan application).
  b. Run predictive analytics to decide whether the customer needs to be alerted.
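The dispatch between those two responses can be sketched as a simple event handler. This is a hedged illustration only: the event types, field names and handler functions are invented, and the "predictive check" is a stand-in for a real model.

```python
def contact_customer(event):
    """Response (a): reach out immediately during a moment of truth."""
    return f"contact {event['customer']}: offer help with {event['context']}"

def run_predictive_check(event):
    """Response (b): a stand-in model – alert only if a shortfall is predicted."""
    if event["predicted_balance"] < 0:
        return f"alert {event['customer']}"
    return None  # no intervention needed

def handle_event(event):
    """Route a critical event from a legacy system to the right response."""
    if event["type"] == "urgent":        # e.g. mid-way through a loan application
        return contact_customer(event)
    return run_predictive_check(event)   # otherwise, decide whether to alert

print(handle_event({"type": "urgent", "customer": "alice",
                    "context": "loan application"}))
print(handle_event({"type": "routine", "customer": "bob",
                    "predicted_balance": -50}))
```

Note the second path can decide to do nothing at all – which matters, given the bank's concern (discussed below) about when an intervention is welcome and when it is intrusive.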
The future role of banks
Partly in response to the Open Banking directive, the bank is now opening up its data to third parties who have been authorized by customers to process certain transactions on their behalf (e.g. paying bills). This is a fascinating development with potentially far-reaching implications. It raises a question about the future role of banks. For example, would the rise of nimble, tech-driven third parties reduce banks to mere processing utilities?
ING is determined not to be marginalized, which is why it has invested in this fast data platform and is building real-time predictive apps – both on its own and with third parties (such as Yolt). It is a bold and very radical strategy – and, not surprisingly, it raises some searching questions.
Hearing this story made me wonder what types of customer would most welcome this kind of service, and whether there was any risk of alienating less technology-literate customers.
The bank doesn’t yet have definitive answers to these questions. However, ING is adamant that all technology-driven initiatives must have universal appeal, and that is why ING is introducing change on a very gradual, phased basis.
In the first instance, ING is testing these services on employees of the bank and then on beta test groups of (external) customers. To date, feedback has been extremely positive, and this has encouraged the bank to keep investing. However, Bas emphasizes the need to appreciate customer sensitivities and preferences. For example, there is a fine line between providing a valued service and becoming intrusive – that is why the bank specifically considers factors such as the best, most receptive time of day to make interventions (if at all).
Fraud detection is another intriguing development where fast data is having a significant impact. At the moment, traditional fraud detection systems often lack finesse: when a customer’s card transaction triggers a fraud alert, that alert can be a false positive 90% of the time (or even more). This is inconvenient both for the bank and especially for the customer (although a false positive is not always perceived negatively – it shows the bank is checking money flows). ING is hopeful that its fast data platform will radically reduce the level of false positives as well as the level of fraud.
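One illustrative reason fast data can cut false positives: a single crude rule ("foreign transaction → block") fires constantly, while a score combining several real-time signals only fires when evidence accumulates. The features, weights and threshold below are invented for the sketch and bear no relation to any real scoring model.

```python
def fraud_score(txn):
    """Combine several weak signals into one score (all weights assumed)."""
    score = 0.0
    if txn["amount"] > 5 * txn["avg_amount"]:   # unusually large purchase
        score += 0.5
    if txn["country"] != txn["home_country"]:   # unfamiliar location
        score += 0.3
    if txn["minutes_since_last"] < 2:           # rapid-fire transactions
        score += 0.4
    return score

def should_block(txn, threshold=0.7):
    """Block only when multiple signals accumulate past the threshold."""
    return fraud_score(txn) >= threshold

# A lone signal (holiday abroad) stays below the threshold...
holiday_purchase = {"amount": 60, "avg_amount": 50, "country": "FR",
                    "home_country": "NL", "minutes_since_last": 180}
# ...while several signals together cross it.
card_testing = {"amount": 400, "avg_amount": 50, "country": "US",
                "home_country": "NL", "minutes_since_last": 1}
print(should_block(holiday_purchase))
print(should_block(card_testing))
```

A country-only rule would have blocked the holiday purchase too; scoring several signals together is what spares the legitimate customer while still catching the suspicious pattern.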
Other applications of fast data
I’m aware that Capital One has deployed a fast data service and is now able to authorize a car loan in seconds – instant on-the-line confirmation that massively improves the customer experience.
Yet I’ve also heard of instances where data is anything but fast!
Take the Lloyd’s insurance market. Currently, some full risk assessments for specialist insurance are completed two weeks after prices have been quoted – quite clearly, this is a risk too far!
We can also see applications in places like the police and military, who often have to capture and act upon a variety of data sources, in real time, in often hazardous and difficult circumstances. Fast data analytics could be used, for example, to predict when supplies of ammunition will run out and to trigger immediate air drops to front-line troops.
The opportunities to change lives with fast data are enormous. Luckily, it’s becoming easier and easier to achieve. The time to start is now.