
Growing Import of ‘Edge Computing’

I’ve often thought the most valuable interactions happen with the people at the edge of our networks. The people we meet serendipitously, through our more distant contacts. It’s here, on the edge, where the sparks of creativity really fly.

Recently, I’ve been putting this theory to the test by taking the time to meet face-to-face with people I might more usually only connect with by email or LinkedIn.

It’s a refreshing experience. One of the many benefits is frequent exposure to new ideas and new ways of thinking from people who view the world through a different lens. There are other benefits, such as getting to see the new musical “Bat Out of Hell” in the company of an American lawyer. But I digress …

It’s not just people who benefit from networking at the edge. Computers do, too, and there’s an interesting analogy to be drawn here with the emerging importance of edge computing. This is where processing and data are placed at the edge of our networks to have the maximum effect.

Let me explain.

Over the last decade or so, we’ve seen processing and data increasingly centralized in the cloud. This has been driven partly by the cost-effectiveness and scalability of cloud computing and partly by the growth of big data.

Amazon Alexa is an excellent example of how this works. Voice commands are picked up by an Amazon Echo device, converted from speech to text and fired off to the cloud where natural language processing (NLP) and clever software are used to interpret and fulfill your requests. The results are served back to your Echo in less than half a second. Very little processing takes place on the Echo, and very little data is stored there; all the heavy lifting is done centrally in the cloud.
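To make the division of labor concrete, here is a minimal sketch of that thin-edge, heavy-cloud pattern in Python. Everything in it is hypothetical – the function names and the canned cloud reply – it simply shows the shape of the round trip, not Amazon's actual API.

    def transcribe(audio: bytes) -> str:
        # Stand-in for the lightweight speech-to-text step on the device.
        return audio.decode()

    def cloud_service(text: str) -> str:
        # Stand-in for the cloud-side NLP and fulfilment "heavy lifting".
        if "play" in text.lower():
            return "Playing your request now."
        return "Sorry, I didn't catch that."

    def edge_device(utterance_audio: bytes) -> str:
        # The device does the bare minimum: capture, transcribe, forward, play back.
        text = transcribe(utterance_audio)
        return cloud_service(text)   # the sub-second budget is spent here

    print(edge_device(b"Play the next Ed Sheeran song"))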

See also: The Big Lesson From Amazon-Whole Foods  

This model works well if the edge device (the Echo) is always connected to the cloud via the internet, the arrival rate of new data (your voice commands) is relatively low and the response time is not critical (we’re happy to wait half a second for Ed Sheeran to start his next song).

But it doesn’t work so well if the edge device is not always connected, if the volume of real-time data streaming into the device is huge or if an instant response is needed.

Imagine you’re being driven to the theater by your AI-controlled smart car equipped with hundreds of sensors gathering real-time data from every direction. If the sensors detect a child running out in front of the car, there’s no point firing that data off to the cloud for processing. It has to be processed and acted on instantly and locally by the car itself.

There are many, many more examples where the edge devices (cars, traffic lights, fitness bracelets, microwaves, safety critical sensors on assembly lines… in fact, very many of the billions of devices that will be connected to the internet of things over the coming years) will need the ability to process their own real-time data.

These edge computing devices will still connect to the cloud, but the location of the processing and the data will vary according to need — in the cloud for asynchronous machine learning insights and improvements; at the edge for real-time processing of real-time data streams to determine real-time actions.
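To show what that split can look like in practice, here is a minimal sketch in Python. The event names and queue are invented for illustration: anything latency-critical is acted on locally, while everything else is batched for asynchronous upload to the cloud.

    import queue
    import time

    # Hypothetical set of events the device must never wait on the network for.
    LATENCY_CRITICAL_EVENTS = {"obstacle_detected", "brake_failure"}

    cloud_upload_queue = queue.Queue()   # drained asynchronously when connectivity allows

    def act_locally(event_type: str, payload: dict) -> None:
        # Placeholder for the real-time control loop running on the device itself.
        print(f"EDGE ACTION: {event_type} -> {payload}")

    def handle_sensor_event(event_type: str, payload: dict) -> None:
        # Act locally if the event is time-critical; otherwise batch it for
        # later cloud-side machine learning and analytics.
        if event_type in LATENCY_CRITICAL_EVENTS:
            act_locally(event_type, payload)
        else:
            cloud_upload_queue.put((time.time(), event_type, payload))

    handle_sensor_event("obstacle_detected", {"distance_m": 2.1})   # handled on the car
    handle_sensor_event("tyre_pressure", {"psi": 31.5})             # sent to the cloud later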

Developing the hardware and software for these devices will require new ways of thinking. It’s not about big data; it’s about small, fast data. And I’m sure we’re going to see dramatic improvements in battery efficiency, data storage and processing capability of these intelligent edge computing devices.

The Internet of Things is actually going to become the Internet of Small Powerful Intelligent Things (although I doubt that acronym will catch on).

See also: Major Opportunities in Microinsurance  

Most interesting of all, though, from a cultural and business perspective, is the innovation that edge computing will enable. For example:

  • The insurance industry will be able to offer better deals and new types of policies driven by the intelligence embedded in the insured assets.
  • The health industry will be able to provide preventative care supported by intelligent wearables monitoring everything from activity to blood sugar levels.
  • The entertainment industry will be able to deliver interactive content without those annoying buffers and whirling circles.

And, who knows, maybe edge computing will also help us communicate more effectively with each other. Because spending time at the edge of our networks, as I have been discovering, is where the sparks of creativity really fly. Like the musical “Bat Out of Hell,” it’s one experience I can thoroughly recommend!

Forget Big Data; You Need Fast Data

In 1989, Queen released a very successful single called “I Want It All.” The opening repeats the song title twice, then changes subtly to “and I want it now!” This could be a battle cry for today’s fast-moving society.

We’ve all come to expect a rapid response to our requests for service, and we’ve become impatient with those who can’t deliver. We even watch kettles heat up and wonder why they take so long to boil, and we stand and complain about queue lengths.

Whereas consumers might take some comfort (or the opposite) in knowing that most companies they deal with hold vast amounts of data about them, all of this data is historic and, actually, very little is used productively. Yet we are increasingly engaged in real-time conversations with companies either via a mobile app, our PCs or the good old-fashioned telephone, providing real-time data about a need or a problem. So why aren’t companies, by and large, capturing and acting on that data in real-time while they are interacting with their customers? The simple explanation is that acting on data captured in real time is beyond the means of most of the systems built by these companies, and it’s not a trivial matter to change, given that this inevitably means tinkering with legacy systems.

See also: Producing Data’s Motion Pictures  

But there is a solution in sight, and it’s called “fast data.”

Fast data is the application of big data analytics to smaller data sets in near-real or real time to solve a problem or create business value while engaging with a customer or another computer. Fast data is not a new idea, but embracing it is about to become very important.

A Fast Data Architecture

What high-level requirements must a fast data architecture satisfy? They form a triad:

  1. Reliable data ingestion.
  2. Flexible storage and query options.
  3. Sophisticated analytics tools.

The components that meet these requirements must also be reactive, meaning they scale up and down with demand, are resilient against the failures that are inevitable in large distributed systems (we don’t want any failures on autonomous cars!), always respond to service requests even if failures limit the ability to deliver services and are driven by messages or events from the world around them.
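To give a feel for that triad, here is a deliberately tiny, self-contained sketch in Python. A real system would use dedicated components for each layer (a durable log for ingestion, a proper data store, a streaming analytics engine); the names and the rule below are invented purely for illustration.

    import queue
    from collections import defaultdict

    # 1. Reliable data ingestion: a durable log (e.g. Kafka) in production;
    #    a simple in-process queue stands in for it here.
    ingest_queue = queue.Queue()

    # 2. Flexible storage and query: an in-memory store indexed by customer.
    events_by_customer = defaultdict(list)

    # 3. Sophisticated analytics: reduced here to a trivial rule that flags
    #    a burst of activity for a single customer.
    def analyse(customer_id: str) -> None:
        if len(events_by_customer[customer_id]) >= 3:
            print(f"ALERT: burst of activity for {customer_id}")

    def drain_once() -> None:
        # Pull everything currently in the queue, store it, then analyse it.
        while not ingest_queue.empty():
            event = ingest_queue.get()
            events_by_customer[event["customer_id"]].append(event)
            analyse(event["customer_id"])

    for i in range(4):
        ingest_queue.put({"customer_id": "C42", "type": "card_payment", "seq": i})
    drain_once()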

The chart below shows an emerging architecture that can meet these requirements.

The good news is that you can graft such an architecture on top of legacy systems, which is exactly what ING has been doing.

Unlocking valuable intelligence

Back in the halcyon days, banks were very close to their customers. They knew customers intimately and treated them personally. With the proliferation of customers, products and channels, though, this intimacy has been lost. ING wanted to recapture the “golden era” with a global strategy to make the bank more customer focused, “mobile first” and altogether more helpful.

A typical bank these days captures and processes billions of customer requests, instructions and transactions. In doing so, it captures and stores vast amounts of customer data – but, and here’s the startling truth, few (if any) of the major banks use this data effectively for the benefit of their customers.

ING appointed a manager of fast data, Bas Geerdink, to address this problem. His broad international remit is to create a truly customer-friendly, omni-channel experience. To kick-start this process, he turned his attention to ING’s vast but disparate data stores, convinced they could unlock valuable intelligence. Historical data can often reveal customer behaviors and trends that are crucial to predictive analytics – past data can be used, for example, to plot future pressure points on personal finances, so that key payment events can be anticipated and mitigated.
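As a rough illustration of that idea (with made-up numbers and field names, not ING's actual model), a projected balance on the date of a known recurring payment is enough to warn a customer before the payment fails:

    from datetime import date

    # Hypothetical account data: current balance, expected income, known payments.
    balance = 320.0
    expected_income = [(date(2018, 6, 28), 1800.0)]      # (date, amount) - salary
    recurring_payments = [(date(2018, 6, 25), 650.0),    # rent
                          (date(2018, 6, 26), 120.0)]    # utilities

    def projected_balance(on_date: date) -> float:
        # Project the balance on a given date from known inflows and outflows.
        projected = balance
        projected += sum(amount for d, amount in expected_income if d <= on_date)
        projected -= sum(amount for d, amount in recurring_payments if d <= on_date)
        return projected

    # Warn ahead of any payment that would push the account into the red.
    for pay_date, amount in recurring_payments:
        if projected_balance(pay_date) < 0:
            print(f"Heads-up: the {amount:.2f} payment due on {pay_date} may not clear.")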

However, mining this data presents major challenges. Most banks are hampered by disparate and disconnected legacy applications that cannot operate in real time. Confronted with this dysfunctional problem, ING made some fundamental decisions:

  1. Create a single, secure data lake.
  2. Employ a variety of open source technologies (along the lines of those shown in the chart above). These technologies were used to build the over-arching notifications platform to enable data to be captured and acted on in real time.
  3. Work with the legacy application teams to ensure that critical events (during a customer’s “moment of truth”) are notified to this fast data platform.
  4. Trigger two vital platform responses (sketched below):
     a. Instantly contact the customer to establish whether help is urgently needed (for example, to complete a rapid loan application);
     b. Run predictive analytics to decide whether the customer needs to be alerted.
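Here is a minimal sketch of how such a “moment of truth” notification might be handled. The event names, the threshold and the helper functions are all hypothetical, not ING's implementation; the point is simply that the platform either reaches out immediately or quietly runs a predictive check to decide whether an alert is worth sending.

    # Hypothetical events that warrant immediate human follow-up.
    URGENT_EVENTS = {"loan_application_abandoned", "card_blocked_abroad"}

    def contact_customer(customer_id: str, reason: str) -> None:
        print(f"Calling {customer_id} about {reason}")

    def send_alert(customer_id: str, event_type: str) -> None:
        print(f"Push notification to {customer_id}: {event_type}")

    def predict_needs_alert(event: dict) -> bool:
        # Stand-in for a real predictive model scoring the event in context.
        return event.get("risk_score", 0.0) > 0.7

    def handle_moment_of_truth(event: dict) -> None:
        # Respond to a critical customer event notified by a legacy system.
        if event["type"] in URGENT_EVENTS:
            contact_customer(event["customer_id"], reason=event["type"])
        elif predict_needs_alert(event):
            send_alert(event["customer_id"], event["type"])

    handle_moment_of_truth({"type": "loan_application_abandoned", "customer_id": "C7"})
    handle_moment_of_truth({"type": "salary_not_received", "customer_id": "C9", "risk_score": 0.85})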

The future role of banks

Partly in response to the Open Banking directive, the bank is now opening up its data to third parties who have been authorized by customers to process certain transactions on their behalf (e.g. paying bills). This is a fascinating development with potentially far-reaching implications. It raises a question about the future role of banks. For example, would the rise of nimble, tech-driven third parties reduce banks to mere processing utilities?

ING is determined not to be marginalized, which is why it has invested in this fast data platform and is building real-time predictive apps – both on its own and with third parties (such as Yolt). It is a bold and very radical strategy – and, not surprisingly, it raises some searching questions.

Hearing this story made me wonder what types of customer would most welcome this kind of service and whether there was any risk of alienating less technology-literate customers.

The bank doesn’t yet have definitive answers to these questions. However, ING is adamant that all technology-driven initiatives must have universal appeal, and that is why ING is introducing change on a very gradual, phased basis.

See also: When Big Data Can Define Pricing (Part 2)  

In the first instance, ING is testing these services on employees of the bank and then on beta test groups of (external) customers. To date, feedback has been extremely positive, and this has encouraged the bank to keep investing. However, Bas emphasizes the need to appreciate customer sensitivities and preferences. For example, there is a fine line between providing a valued service and becoming intrusive – that is why the bank specifically considers factors such as the best, most receptive time of day to make interventions (if at all).

Fraud detection is another intriguing development where fast data is having a significant impact. At the moment, traditional fraud detection systems often lack finesse: when a customer’s card transaction is flagged, it turns out to be a false positive 90% of the time (or even more). This is inconvenient for the bank and especially for the customer (although a false positive is not always perceived negatively – it at least shows the bank is checking money flows). ING is hopeful that its fast data platform will radically reduce the level of false positives as well as the level of fraud.
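To illustrate the difference in finesse (a toy example with invented features and thresholds, not ING's system): a single blunt rule flags almost anything unusual, whereas combining a few real-time contextual signals lets far more legitimate transactions through.

    def blunt_rule(txn: dict) -> bool:
        # Traditional check: flag any transaction over a fixed amount.
        return txn["amount"] > 200

    def contextual_rule(txn: dict, profile: dict) -> bool:
        # Fast-data check: flag only when several contextual signals agree.
        unusual_amount = txn["amount"] > 3 * profile["avg_amount"]
        unusual_country = txn["country"] not in profile["usual_countries"]
        unusual_hour = txn["hour"] not in range(7, 24)
        return sum([unusual_amount, unusual_country, unusual_hour]) >= 2

    profile = {"avg_amount": 80.0, "usual_countries": {"GB", "FR"}}
    txn = {"amount": 250.0, "country": "GB", "hour": 14}   # an ordinary large purchase

    print(blunt_rule(txn))                # True  -> a false positive
    print(contextual_rule(txn, profile))  # False -> the payment goes through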

Other applications of fast data

I’m aware that Capital One has deployed a fast data service and is now able to authorize a car loan in seconds – instant on-the-line confirmation that massively improves the customer experience.

Yet I’ve also heard of instances where data is anything but fast!

Take the Lloyd’s insurance market. Currently, some full risk assessments for specialist insurance are completed two weeks after prices have been quoted – quite clearly, this is a risk too far!

We can also see applications in places like the police and military, who often have to capture and act upon a variety of data sources, in real time, in often hazardous and difficult circumstances. Fast data analytics could be used, for example, to predict when supplies of ammunition will run out and to trigger immediate air drops to front-line troops.

The opportunities to change lives with fast data are enormous. Luckily, it’s becoming easier and easier to achieve. The time to start is now.

What Is the Right Innovation Process?

I need an innovation process like I need a…

I was at an event the other day when two things really spoiled my day. The first twinge came in the plenary session. Our host announced that we were about to be shown some videos of front-line staff describing their work frustrations and, in particular, the obstacles that prevented them from doing a better job.

Now that sounded like a great initiative, so my curiosity was aroused.

The camera focused on a broad group of internal stakeholders, and, very soon, a number of consistent themes and issues began to emerge. My attention was now well and truly grabbed. I waited with interest – and high expectations – for the host to explain how these concerns would be addressed and resolved. And I waited… and I waited… and slowly it dawned on me that I was going to be seriously short-changed.

“These videos illustrate how important it is to listen to your front-line staff,” he explained – completely ignoring the fundamental truth that front-line feedback is only valuable when it is turned into actionable, positive outcomes. We had witnessed loyal staff displaying real courage and passion in their filmed interviews – but the all-important reactions and resolutions, together with the two-way communication from an appreciative management, were never even mentioned.

Then something else happened that stunned me – and many of my fellow delegates.

See also: How to Create a Culture of Innovation  

I attended a breakout session on the topic of innovation. One of the key speakers was an independent consultant who specializes in helping major enterprises establish a culture of innovation. I was happily nodding along to his presentation when he dropped this bombshell: “First and foremost, you must have a common approach to innovation. Control is critical. You can’t permit people to innovate in completely different directions.”

There was a metaphorical thud as several jaws hit the carpeted floor. By definition, innovation is free-thinking and radical. It shatters the mold by breaking conventions and rules. Thought control is anathema to innovation… or is it?

That got me thinking…

I have spent most of my life working on highly innovative projects. And I have to concede that most of these projects needed a process. However, this process is certainly not designed to inhibit freedom of thought. It is simply there to focus minds on real-world issues and solutions. Innovation projects have to be anchored in genuine need. If you lose sight of this reality, you will end up with a bunch of brilliant, tangential ideas that have absolutely zero commercial merit… and that is the quickest way to lose management support and funding for innovation.

So how do you create a process that focuses innovation without inhibiting it? Well, I hope my diagram will help to concentrate minds…

Every innovation effort should start with – and be driven by – the Voice of the (Internal and External) Customers. These key stakeholders can tell you what you need to fix. With a little encouragement, they will also give you rapid feedback on ideas – objective guidance that can stop you wandering up costly and unproductive blind alleys. And one of the most instant and engaging ways to capture these stakeholder insights is via mobile video. One of our Clustre innovation firms has perfected software for capturing these sentiments. It is a remarkable tool that offers deep analytics and sentiment assessment to surface really valuable insights. Used on a regular basis, it is a unique asset to any innovation project.

To kick-start the innovation process, you need to understand the frustrations that customers experience in their dealings with you – and, just as importantly, with your competitors. The same goes for internal customers and stakeholders. Their opinions will reveal the honest (often sobering) truth about the health of your culture and organization.

Uber’s success was born out of such research. Consumer frustrations with finding reliable private transport and the inconvenience of having to pay in cash for minicabs shaped the whole Uber service concept. It has transformed the global minicab industry by understanding consumer frustrations, addressing real areas of need and resolving them with a bespoke, infinitely scalable service. The message is clear: focus on addressing real Needs and Frustrations and you will deliver relevant, highly commercial solutions.

The trick is to:

First: Capture and focus on the things that frustrate the majority of people.

Second: Concentrate on issues that have the biggest impact on your business and on satisfaction levels.

Third: Avoid being side-tracked by the vocal minority who shout and complain the loudest.

One of the firms we represent has recently invented a tool that automatically seeks out those core areas of frustration. It reveals to companies not only where consumers are frustrated but also the nature, size and extent of that frustration. This tool has now been used on multiple occasions to help consumer companies conceive entirely new products. Indeed, some of these products have rapidly morphed into the most profitable lines for these companies… the ultimate bottom-line justification for innovation.

See also: Innovator’s Edge enhanced to make direct insurance innovation connections

If the starting point for innovation is to understand needs and frustrations, then the next step in the process is to Conceive Solutions. Some people think of this as a scientific process – a technology-fueled trip into the outer realms of possibility. I disagree. Essentially, innovation comes from inspired team-building – blending talents and personalities to create the right human chemistry.

Here are a few guidelines:

Balance. From my experience, you need to strike a balance between radical, out-of-the-box thinkers and down-to-earth, practical engineers. Tether blue-sky thinking to grounded reality.

Objectivity. Again, experience has proved the value of involving the people who will ultimately be the customers for your solutions.

Cross-pollination. Look outside your specific company and industry for inspired solutions in parallel universes.

But, of course, the true merit of any pie is in the eating. Most people struggle with concepts – they can only judge an idea when they see the reality of a product. That’s why I always urge clients to move swiftly from conception to creation:

  • Prototyping. This is probably not the forum for diving deep into the prototype process. However, I would like to share the fascinating experience and advice of a client of mine – the head of innovation for a global loyalty card operator…
  • 30-day limit. His hard and fast rule is to set aside a maximum of 30 days to build a prototype – and not a day longer. It’s a rigid discipline that contains costs and delivers rapid results.
  • Outsource. He also tends to outsource the development of the prototype (frequently to firms recommended by Clustre). This allows him to reduce his own headcount, capitalize on the best external services and run several projects at the same time.
  • £10k budget. He also sets a hard and fast budget of £10,000 for each prototype. Now, to many people, this will seem a laughable – perhaps even derisory – budget. However, my client always has the last laugh because he has unfailingly delivered a fully functioning (albeit with a limited scope) prototype for this price.

Once you have developed a prototype, it should be shown to the customer representatives and major stakeholders for their verdict. This is the acid test. The Drop or Build decision will be taken there and then. And we can assume that a fair proportion will be buried at this point. However, don’t despair – this is an essential part of the “fail fast, learn quickly” culture that breeds innovation success.

If a decision is taken to develop the prototype further, then this should be done in a very measured but agile way. Using a multi-functional team, the goal must be to create a minimum viable product (MVP) quickly and cheaply for business viability assessment.

Tools similar to the one used to surface frustrations can then be used to gauge the level of excitement generated by a new product. This can be invaluable – accurately measuring consumer appeal before any serious commitment is made to invest further in the MVP. What’s more, there are some really innovative marketing techniques that can measure, with forensic accuracy, this consumer interest.

See also: 4 Hot Spots for Innovation in Insurance  

So that’s it – my process for innovation. Offer people unlimited freedom to question precepts, push boundaries and break rules. However, within that free-thinking environment, there has to be a healthy mix of blue-sky thinkers and grounded engineers… sensible and very essential budgetary controls… an immutable date for prototype delivery… and, most importantly, a very sharp focus on real-world customer frustrations. Those are the essential tenets for innovation success.

Forget ‘Intel Inside’; It’s Now AI Inside

I am sure we are all familiar with the Intel slogan, “Intel inside.” This has been a very powerful tagline and one that has helped Intel become the dominant PC chip supplier. (I know I was very influenced by the slogan and very rarely bought a non-Intel PC as a consequence.)

But I believe that this slogan will be rapidly replaced by “AI inside” because I believe we are almost at the point when ALL future apps will include elements of AI. I also believe there is a very good chance that Amazon’s Alexa might become the de facto automatic speech recognition platform that will sit in front of (outside) every single app in the future.  (My rationale is here.)

Why do I say this?

First, you need to recognize that AI is not one singular, all-embracing technology. Rather, it is a set of technologies that hope to emulate the way a human interprets and acts upon information — albeit at the speed of light and (we hope) without error, on a 24/7 basis. As such, AI includes technologies such as natural language (voice) processing (NLP), semantic analysis and cognitive processing.

Second, these technologies have become pervasive. The CEO of IBM recently announced at Davos that Watson (a supercomputer and a collection of AI APIs) is now having a (positive) impact on the lives of about a billion people (roughly a seventh of the world’s population). I don’t know how many Echo and Dot units have been sold by Amazon (it must be tens of millions, at least), but each unit gives you access to Alexa, which uses both voice recognition and processing.

See also: 10 Questions That Reveal AI’s Limits  

Third — and most important — you don’t need to have a degree in AI (any more) to deploy AI.

AI was notionally conceived by Alan Turing in 1936 (but, in one sense, you can trace the origins of AI all the way back to Archimedes!). I was taught elements of AI at university in the early 1970s, but I didn’t have a chance to develop an AI app until the mid-1990s when I was a consultant for the Nationwide Building Society. We had just finished a ground-breaking piece of work that involved the development and deployment of the world’s first touch-screen-driven, customer self-service system. This system was a huge success on all measures, so the client I was working for at the time asked me:

“How far can you take this idea? Could you, for example, develop a system that’s as good as — or even better than — our best sales person?”

Without knowing it at the time, he was asking me to develop our first AI system. Fortunately for me (because I am certainly not an AI expert), Accenture had just hired an authority on the subject. He was swiftly assigned to my project team, and we stepped once more into the unknown world of innovation.

We started by gathering a team of the client’s top sales people. We then sat them down with our AI expert, who had been carefully briefed on the rules governing the sale of regulated products. We also called in the services of a user experience designer to obtain a better understanding of people’s risk appetites and option requirements. Last, but certainly not least, we asked a group of customers to help us develop the system that would be designed for their stand-alone use.

The result blew everyone away.

It even won the support of the U.K. Financial Services Authority (FSA), which agreed to assess the system for compliance. The FSA tested and analyzed every aspect of the new application — and then signed off. It was the first time the FSA had ever approved a sales platform that removed the need for a sales person.

Remember, this happened in the early ’90s — long before Java, Windows 95 and the first PlayStation were launched. Our system is a tribute to a client who not only had the vision to see the possibilities but also had the courage to take on the challenge — as well as the very real risk of failure.

However, there is a sad but rather revealing postscript to this story.

What happened to this ground-breaking system? Well, it was lauded, feted and widely acclaimed — and then quietly shelved. The building society decided to focus on building its Systems of Record (SoR) rather than its Systems of Engagement (SoE). And, sad to say, that was not an uncommon fate back then. Real innovation is often too radical for most risk-averse management to stomach. Sometimes it takes time to build an appetite for the truly ground-breaking. And maybe — just maybe — 20 years later, that time has come.

There was another problem: I only had one AI programmer at my disposal, and there weren’t that many more in the U.K. at the time. Given this, it would have taken a considerable amount of time to build an industrial-strength application that could have been put into the hands of any customer. But now we don’t have that problem.

One of the firms we at Clustre represent is an AI consultancy that is AI-technology-agnostic. It conceives, designs and builds AI-driven customer and employee apps that use a variety of AI technologies — as appropriate. It was recently asked by a loyalty card operator to show how AI could be used to allow a card holder to get an answer to a query without talking to a human or having to scour through FAQs (which I think are generally pretty useless). The firm created a web-based chat bot that used Watson to help recognize and understand the question and used another product to drive the Q&A process and, ultimately, answer the question.

So clever is the bot that it can easily handle misspellings and questions phrased in a variety of ways and still respond correctly. I would hazard a guess that this tool could handle at least 50% of all customer queries (the rest would be handed off to a human to resolve). That’s a lot fewer calls that need to be routed through to a human.
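To give a rough idea of how a bot copes with misspellings and rephrasings, here is a minimal sketch using nothing more than fuzzy matching from Python's standard library. It is emphatically not the consultancy's Watson-based stack; the intents and answers are invented for a hypothetical loyalty card scheme.

    import difflib

    # Hypothetical intents and canned answers for a loyalty card scheme.
    INTENTS = {
        "how do i check my points balance": "Your balance is shown in the app under 'My points'.",
        "my card is lost or stolen": "We've frozen your card and a replacement is on its way.",
        "how do i redeem my points": "Choose a reward in the app and tap 'Redeem'.",
    }

    def answer(question: str, cutoff: float = 0.5) -> str:
        # Return the best-matching canned answer, or hand off to a human.
        match = difflib.get_close_matches(question.lower(), list(INTENTS), n=1, cutoff=cutoff)
        return INTENTS[match[0]] if match else "Let me put you through to a colleague."

    # Misspelt and rephrased questions still land on the right intent.
    print(answer("how do i chek my points balence"))
    print(answer("I want to redeem points how?"))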

See also: Why 2017 Is the Year of the Bot  

So, you may ask, how many days did it take our AI consultancy to design and build this AI-driven chat bot? Just five.

Five days to design and build a tool that could potentially reduce call center volumes by around 50%!!!

AI has truly arrived, and everyone should be looking at how they are going to deploy it NOW!

Innovation Happens at the Edge

I’m currently working with the IT services team of a large government department. The goal is to help them become more innovative and agile so they can deliver faster, greater value to their internal and external customers.

At Clustre, we fundamentally believe that innovation has to be focused on solving real issues.  Without that compass, innovation is a pointless vanity. Only by recognizing and understanding a problem can you begin to solve issues in a clever way. Innovation has to be driven from the edge – by front line staff actually immersing themselves in real problems – rather than in a remote innovation lab dealing in theoretical issues. This may sound painfully obvious but it’s a basic precept that is often forgotten in the headlong rush to embrace new thinking.

My client agrees. This project was firmly rooted in a very real problem – here’s the nub of it…

This government department serves an agency that holds frequent strategy meetings of significant national importance. Up to twenty senior people attend these events and, historically, they would each bring a note-taker to record every key discussion point for the benefit of absent colleagues.

See also: How to Master the ABCs of Innovation  

It was a system that worked effectively – if somewhat cost-inefficiently – until austerity became an absolute government priority. Suddenly, this surfeit of scribes could no longer be afforded. Senior officials were left to take, summarize and circulate their own notes. It was too much of a multi-task. Notes became fragmentary and reporting became ever more sporadic. Clearly, the process was collapsing. So Clustre was asked to suggest ways of bridging the widening communication gap.

Fortunately, my client had done a pretty thorough job of analyzing and understanding the problem. By embedding a member of his innovation team within the meeting group, he had quickly identified the core issues. Undoubtedly – and perhaps understandably – the imposition of this rather menial ‘note-taking’ role aroused some deep resentment. But that was secondary to the main stumbling blocks: acute time starvation and an irreconcilable conflict of roles. To be both a thought-leading contributor to strategic meetings and a shorthand reporter took role-play too far. It simply wouldn’t work. But that left us with an even bigger question: what would?

Serendipity is defined as ‘fortunate happenstance’ – a surprise collision of possibilities. Literally a few weeks earlier, we had welcomed a fascinating new company to our Clustre innovation community. This team has developed a technology that enables people to video capture and share meeting discussions and outcomes. What’s more, a powerful search agent also allows people to search for and instantly access pre-recorded material. It was our serendipity moment.

I arranged to take the technology provider to meet the client and, to cut a long and very animated presentation short, he loved it. He instantly saw the potential applications and is now arranging to demo this clever piece of technology to his clients. I have high hopes.

However, win or lose, this story is an object lesson. Increasingly we find that innovation is happening at the edge. Really clever thinking comes from reaching out to connect with real customers and resolve very specific human needs. In my experience, centralized innovation labs are often isolated from this reality.

To close, let me leave you with this last thought…

At some considerable expense, a government agency recently issued all front-line staff with smartphones. As part of a bold initiative to promote the adoption of technology, these dedicated devices came fully loaded with some very secure apps.

The agency’s end goal was to persuade people to use these devices exclusively. But many staff refused to play ball. They flipped between work and private devices until some simple behavioral research revealed the core issue…

See also: 2017 Priorities for Innovation, Automation  

When the agency gave permission for staff to upload their personal music, the problem was instantly eliminated. It just goes to prove that intelligence and lateral thinking will deliver solutions that money alone can’t buy.