
A New Framework for Your Analysts

As we focus on Analytics & Data Science, I’ve been reminded of how a Competency Framework can help.

Both my work with clients and my own experience in creating and leading analytics teams have taught me that such a tool can help in a number of ways. In this post I’ll explain what I mean by a competency framework and the different ways it can help analytics, data science or customer insight leaders.

I wonder if you’ve used such a tool in the past?

Across generalist roles and larger departments, the use of competencies has been the norm for many years, as HR professionals will attest. However, these definitions and descriptions can feel too generic to be helpful to those leading more specialist or technical teams.

But, before I get into overcoming that limitation, let me be clear on definitions.

A dictionary definition of competency explains it as “the ability to do something successfully or efficiently”. In business practice, this usually means identifying the combination of learnt skills (or sometimes aptitude) & knowledge that evidences someone’s ability (normally to do elements of their job successfully). HR leaders have valued the fact that these can be separated from experience in a particular role, thus enabling transferable competencies to be identified (i.e. spotting an individual who could succeed at a quite different role).

Defining a competency framework

Building on this idea of competencies as building blocks, of the abilities needed to succeed in a role, comes the use of the term ‘competency framework’.

The often useful MindTools site defines a competency framework as:

A competency framework defines the knowledge, skills, and attributes needed for people within an organisation. Each individual role will have its own set of competencies needed to perform the job effectively. To develop this framework, you need to have an in-depth understanding of the roles within your business.

Given many Analytics leaders have come ‘up through the ranks’ of analyst roles, or are still designing & growing their functions, most have such an in-depth understanding of the roles within their teams. Perhaps because HR departments are keen to benefit from the efficiencies of standardised competencies across a large business, there appears to have been less work done on defining bespoke competencies for analytics teams.

See also: The Challenges of ‘Data Wrangling’  

Having done just that, both as a leader within a FTSE 100 bank and for clients of Laughlin Consultancy, I want to highlight what a missed opportunity this is. A competency framework designed to capture the diversity of competencies needed within Analytics teams has several benefits, as we will come to later. It also helps clarify the breadth and complexity involved, as we touched upon for Data Science teams in an earlier post.

The contents of an Analytics competency framework

Different leaders will create different flavours of competency framework, depending on their emphasis & how they articulate different needs. However, those I have compared share more in common than divides them. So, in this section, I will share elements of the competency framework developed by Laughlin Consultancy to help our clients. Hopefully that usefully translates to your situation.

First, the structure of such a framework is normally a table. Often the columns represent different levels of maturity for each competency. For example, our columns include these levels of competency:

  • None (no evidence of such a competency, or never tried)
  • Basic (the level expected of a novice, e.g. graduate recruited to junior role)
  • Developing (improving in this competency, making progress from basic ‘up the learning curve’)
  • Advanced (reached a sufficient competency to be able to achieve all that is currently needed)
  • Mastery (recognized as an expert in this competency, or ‘what good looks like’ & able to teach others)

Your maturity levels or ratings for each competency may differ, but most settle on a five-point scale from none to expert.

Second, the rows of such a table identify the different competencies needed for a particular role, team or business. For our purposes, I will focus on the competencies identified within an Analytics team. Here again, language may vary, but the competency framework we use at Laughlin Consultancy identifies the need for the following broad competencies:

  • Data Manipulation (including competencies for coding skills, ETL, data quality management, metadata knowledge & data projects)
  • Analytics (including competencies for Exploratory Data Analysis, descriptive, behavioural, predictive analytics & other statistics)
  • Consultancy (including competencies for Presentation, Data Visualization, Storytelling, Stakeholder Management, influence & action)
  • Customer-Focus (including competencies for customer immersion, domain knowledge (past insights), engagement with needs)
  • Risk-Focus (including competencies for data protection, industry regulation, GDPR, operational risk management)
  • Commercial-Focus (including competencies for market insights, profit levers, financial performance, business strategy & SWOT)
  • Applications (including competencies for strategy, CX, insight generation, proposition development, comms testing, marketing ROI)

Variations on those are needed for Data Science teams, Customer Insight teams & the different roles required by different organisational contexts. Additional technical (including research) skills competencies may need to be included. However, many are broadly similar and we find it helpful to draw upon a resource of common ‘holistic customer insight’ competencies to populate whichever framework is required.
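To make that table structure concrete, here is a minimal sketch (in Python, purely for illustration) of a framework represented as maturity levels plus competency groups, with one analyst’s scores held against it. The level names and competency groups follow the lists above; the individual scores are invented.

```python
# Minimal sketch of a competency framework: a maturity scale, a set of competency
# groups, and one person's scores held against them. Scores are invented examples.

LEVELS = ["None", "Basic", "Developing", "Advanced", "Mastery"]  # 0..4 scale

COMPETENCIES = [
    "Data Manipulation", "Analytics", "Consultancy", "Customer-Focus",
    "Risk-Focus", "Commercial-Focus", "Applications",
]

# Illustrative self-assessment for one analyst (0 = None ... 4 = Mastery).
analyst_scores = {
    "Data Manipulation": 3, "Analytics": 2, "Consultancy": 1, "Customer-Focus": 2,
    "Risk-Focus": 1, "Commercial-Focus": 2, "Applications": 1,
}

def describe(scores: dict) -> None:
    """Print each competency with its maturity label."""
    for competency in COMPETENCIES:
        level = scores.get(competency, 0)
        print(f"{competency:<18} {LEVELS[level]}")

describe(analyst_scores)
```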

If all that sounds very subjective, it is. However, more rigour can be brought to the process by the tool you use to assess individuals or roles against that table of possible scores for each competency. We find it helpful to deploy two tools for this. The first is a questionnaire that can be completed by individuals and other stakeholders (especially their line manager). As the questions are answered, the spreadsheet generates a score against each competency (based on our experience across multiple teams).
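The questionnaire itself is proprietary, but the underlying idea can be sketched simply: each answer is mapped to a competency and the answers are combined (here by a plain average) into a score on the same 0-4 scale. The questions, answers and use of an unweighted average below are all hypothetical; a real tool would apply weightings based on experience.

```python
from statistics import mean

# Hypothetical questionnaire responses: (competency, answer on a 0-4 agreement scale).
responses = [
    ("Analytics", 3), ("Analytics", 2), ("Analytics", 4),
    ("Consultancy", 1), ("Consultancy", 2),
    ("Data Manipulation", 3), ("Data Manipulation", 3),
]

def score_by_competency(responses):
    """Average the answers mapped to each competency into a single 0-4 score."""
    grouped = {}
    for competency, answer in responses:
        grouped.setdefault(competency, []).append(answer)
    return {competency: round(mean(answers), 1) for competency, answers in grouped.items()}

print(score_by_competency(responses))
# {'Analytics': 3.0, 'Consultancy': 1.5, 'Data Manipulation': 3.0}
```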

Another useful tool, especially for organizations new to this process, is for an experienced professional to conduct a combination of stakeholder interviews and a review of current outputs. Laughlin Consultancy has conducted such consultancy work for a number of large organizations, and it almost always reveals ‘blind spots’: apparent competencies or gaps that leaders may have missed.

However you design your scoring method, your goal should be a competency framework table & a consistent, auditable scoring process. So, finally, let us turn to why you would bother. What are some of the benefits of developing such a tool?

Benefit 1: Assessing individual analysts’ performance

All managers learn that there is no perfect performance management system. Most are, as Marshall Goldsmith once described them, stuff you have to put up with. However, within the subjectivity & bureaucracy that can surround such a process, it can really help both an analyst & their line manager to have a consistent tool with which to assess & track development.

I have found a competency framework can help in two ways during ongoing management & development of analysts:

  • Periodic (at least once a year) joint scoring of each analyst against the whole competency framework, followed by a discussion about different perspectives and where they want to improve. In this process remember also the greater benefit of playing to strengths rather than mainly focussing on weaknesses.
  • Tracking of development progress and impact of L&D interventions. After agreeing on priorities to focus on for personal development (and maybe training courses), an agreed competency framework provides a way of both having clearer learning goals & tracking benefits (did competency improve afterwards).

Benefit 2: Designing roles and career paths

Analytics & Data Science leaders largely agree that a mix of complementary roles is needed to achieve an effective team. However, it can be challenging to be clear, when communicating with your teams & sponsors, how these roles both differ & work together.

Here again a consistent competency framework can help. Scoring each role against the competency maturities needed enables a manager to see how the whole team scores and where gaps remain. It can also help in more objectively assessing candidates’ suitability for different roles within a team (e.g. are they stronger at the competencies for ‘back office’ modeller or ‘front of house’ consultant type roles?).
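As an illustration of that kind of comparison, the sketch below checks one hypothetical candidate against two hypothetical role profiles and reports where they fall short. The role names echo the ‘back office’/‘front of house’ distinction above; all the numbers are invented.

```python
# Hypothetical role profiles and candidate scores on the 0-4 maturity scale.
role_profiles = {
    "modeller":   {"Data Manipulation": 3, "Analytics": 4, "Consultancy": 2},
    "consultant": {"Data Manipulation": 2, "Analytics": 2, "Consultancy": 4},
}
candidate = {"Data Manipulation": 3, "Analytics": 3, "Consultancy": 2}

def gaps_against(role: dict, person: dict) -> dict:
    """Return competencies where the person falls short of the role, and by how much."""
    return {c: required - person.get(c, 0)
            for c, required in role.items() if person.get(c, 0) < required}

for role_name, profile in role_profiles.items():
    print(role_name, gaps_against(profile, candidate))
# modeller {'Analytics': 1}
# consultant {'Consultancy': 2}
```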

See also: Insurtech: How to Keep Insurance Relevant  

If that benefit provides more consistency when considering peer-level opportunities, this tool can also help guide promotion opportunities. It can help you define the different competency maturities needed, for example, by junior analyst versus analyst versus senior analyst versus analytics manager. Such clarity enables more transparent conversations between analysts & their managers (especially when one can compare & contrast an individual’s competency scores with those needed by different roles).

Seeing how those competency profiles compare at different levels of seniority for different technical roles can also enable a manager to see options for career development. That is, there are often options for junior members of the team (rather than a simple climb up the functional ‘greasy pole’). Examples might be: developing statistical skills to pursue a career path in modelling roles; developing data manipulation skills to pursue a career path towards Data Engineer; developing questioning & presentation skills to aim for a business partner role, etc.

Benefit 3: Identifying your team L&D priorities and where to invest

Used together, all the elements mentioned above can help an Analytics leader identify where the greatest development needs lie (both in terms of severity of gap & number of people impacted).

Comparing the competency profiles for the roles needed in a team with the current capabilities of role holders can identify common gaps. Sometimes it is worth investing in the most common gaps (where enough people share a gap, external training is still worth considering).

Then you can also compare the potential career paths & development potential that managers have identified from conversations. Are there competency gaps that matter more because closing them makes key individuals ready for new roles & thus expands the capability or maturity of the overall team?

Much of this will be subjective, because we are talking about human beings. But having a common language, through the competency framework tool, can help leaders better understand & compare what they need to consider.

Do you use an Analytics Competency framework?

If you are an Analytics or Data Science or Customer Insight leader, do you currently use a competency framework? Have you seen how it can help you better understand the capabilities of individuals, requirements of roles & how both best fit together in an effective team?

Do you have the means to have meaningful career path conversations with your analysts? Being able to do so can be key to improving your analyst retention, satisfaction & engagement with your business.

I’m sure there is a lot more wisdom on this topic from other leaders out there. So, please share what you have found helpful.

Are You Reinventing the Wheel on Analytics?

Once your analysts have a clear business question to answer, do they start new analysis each time, potentially reinventing the wheel?

After creating or leading data and analytics teams for many years, I began to notice this pattern of behavior. What we seemed to lack was a consistent knowledge management solution or corporate memory that could easily spot what should be remembered.

Funnily enough, as I became convinced of the need for holistic customer insight, I found a partial answer among researchers.

Avoiding reinvention is such an important issue for analytics and insight teams that I’ll use this post to share my own experience.

The lack of a secondary research approach for analytics

Researchers do a somewhat better job than insight teams because of their understanding of the need for secondary research. Experienced research analysts/managers will be familiar with considering the potential for desk research, or searching through past research, to answer the question posed. Perhaps because of the more obvious cost of commissioning new primary research (often via paying an agency), researchers make more effort to first consider if they already have access to information to answer this new question.

But, even here, there does not appear to be any ideal or market-leading knowledge management solution. Most of the teams I have worked with use an in-house development in Excel, interactive PowerPoint slides with hyperlinks to file structures, or intranet-based research libraries. Whichever end-user computing or groupware solution is used, it more or less equates to an easier-to-navigate/search library of all past research. Normally, a user can search by keywords or tags, as well as through a prescribed structure of research for specific products/channels/segments, etc.

See also: Why Customer Experience Is Key  

Some research teams use this very effectively and also recall those visualizations/graphics/VoxPops that worked well at conveying key insights about customers. It is worth investing in this area, as remembering and reusing what has already been learned can save a significant amount of research budget.

However, while also leading data or analytics teams (increasingly within one insight department), it became obvious that such an approach did not exist for analytics. At best, analysts used code libraries or templates to make coding quicker/standardized and to present results with a consistent professional look. Methodologies certainly existed for analysis at a high-level or for specific technical tasks like building predictive models, but there was no consistent approach to recording what had been learned from past analysis.

I’ve seen similar problems at a number of my clients. Why is this? Perhaps a combination of less visible additional costs (as analysts are already employed) and the tendency of many analysts to prefer to crack on with the technical work conspires to undermine any practice of secondary analytics.

The many potential benefits of customer insight knowledge management

Once you focus on this problem, it becomes obvious that there are many potential benefits to improving your practice in this area.

Many analytics or BI leaders will be able to tell you their own horror stories of trying to implement self-serve analytics. These war stories are normally a combination of the classic problems/delays with data and IT projects, plus an unwillingness from business stakeholders to actually interrogate the new system themselves. All too often, after the initial enthusiasm for shiny new technology, business leaders prefer to ask an analyst rather than produce the report they need themselves.

So, one potential advantage of a well-managed and easily navigable secondary analytics store is a chance for business users to easily find past answers to the same question or better understand the context.

But the items stored in such an ideal knowledge management solution can be wider than just final outputs (often in the form of PowerPoint presentations or single dashboards).

I have seen teams benefit from developing solutions to store and share across the team:

  • Stakeholder maps and contact details
  • Project histories and documentation
  • Past code (from SQL scripts to R/Python packages or code snippets)
  • Metadata (we’ve shared more about the importance of that previously; here I mean what’s been learned about data items during an analysis)
  • Past data visualisations or graphics that have proved effective (sometimes converted into templates)
  • Past results and recommendations for additional analysis or future tracking
  • Interim data, to be used to revisit or test hypotheses (suitably anonymized)
  • Output presentations (both short, executive style and long full documentation versions)
  • Recommendations for future action (to track acting on insights, as recommended previously)
  • Key insights, summarized into a few short sentences, accumulated over time for a specific segment, channel or product

Given this diversity and the range of different workflows or methodologies used by analysts, it is perhaps not surprising that the technical solutions tried vary as well.

Where is the technology analytics teams need for this remembering?

As well as being surprised that analytics teams lack the culture of secondary analytics, compared with the established practice of secondary research, I’m also surprised by a technology gap. What I mean is the lack of any one ideal, killer-app-type technology solution to this need from insight teams.

Although I have led and guided teams in implementing different workarounds, I’ve yet to see a complete solution that meets all requirements.

See also: Why to Refocus on Data and Analytics  

An insight, data or analytics leader looking to focus on this improvement should consider a few requirements. First off, the solution needs to cope with storing information in a wide variety of formats (from programming code to PowerPoint decks, customer videos to structured data sets, as well as the need to recognize project or “job bag” structures). Next, it has to be quick and easy to store these kinds of outputs in a way that can later be retrieved. Any solution that requires detailed indexing, accurate filing in the right sub-folder or extensive tagging just won’t get used in practice (or at least won’t be maintained). Finally, it also has to be quick and easy to access everything relevant from only partial information/memories.
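A full solution meeting all three requirements is beyond any single script, but the core idea (a minimal metadata record per stored output, searchable by partial keywords) can be sketched as below. The record fields, file paths and tags are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class OutputRecord:
    """Minimal metadata for one stored analytics output, whatever its file format."""
    title: str
    path: str                # where the deck, script or dataset actually lives
    tags: list = field(default_factory=list)
    summary: str = ""        # a few sentences capturing the key insight

# Hypothetical index entries, invented for illustration.
index = [
    OutputRecord("Churn drivers Q2", "s:/analytics/churn_q2.pptx",
                 ["churn", "retention", "segment-b"], "Price rises drove most Q2 churn."),
    OutputRecord("Campaign uplift model", "s:/analytics/uplift_model.py",
                 ["marketing", "uplift", "python"], "Uplift model code for mailing selection."),
]

def search(index, query: str):
    """Return records whose title, tags or summary contain the query (case-insensitive)."""
    q = query.lower()
    return [r for r in index
            if q in r.title.lower() or q in r.summary.lower()
            or any(q in tag for tag in r.tags)]

for record in search(index, "churn"):
    print(record.title, "->", record.path)
```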

Imperfect solutions that I have seen perform some parts of this well are:

  • Bespoke Excel or PowerPoint front-ends with hyperlinks to simple folder structures
  • Evernote app, with use of tags and notebooks
  • SharePoint/OneNote and other intranet-based solutions for saving Office documents
  • Databases/data lakes capable of storing unstructured or structured data in a range of file formats
  • Google search algorithms used to perform natural language searches on databases or folders

These can all fulfill part of the potential, but the ideal should surely be as simple as asking Alexa or Siri and having all completed work automatically tagged and stored appropriately. I’m sure it’s not beyond the capabilities of some of the data and machine learning technologies available today to deliver such a solution. I encourage analytics vendors to focus more on this knowledge management space and less on just new coding and visualisations.

Do you see this need? How do you avoid reinventing the wheel?

I hope this plea has resonated with you. Do you see this need in your team?

Please let us know if you’ve come across an ideal solution. Even if it is far from perfect, it would be great to know what you are using.

Share your experience in the comments box below, and I may design a short survey to find out how widely different approaches are used.

Until then, all the best with your insight work and remembering what you know already.

The Challenges of ‘Data Wrangling’

A couple of conversations with data leaders have reminded me of the data wrangling challenges that a number of you are still facing.

Despite the amount of media coverage for deep learning and other more advanced techniques, most data science teams are still struggling with more basic data problems.

Even well-established analytics teams can still lack the single customer view, easily accessible data lake or analytical playpen that they need for their work.

Insight leaders also regularly express frustration that they and their teams are still bogged down in ‘data firefighting’, rather than getting to analytical work that could be transformative.

Part of the problem may be lack of focus. Data and data management are often still considered the least sexy part of customer insight or data science. All too often, leaders lack clear data plans, models or strategy to develop the data ecosystem (including infrastructure) that will enable all other work by the team.

Back in 2015, we conducted a poll of leaders, asking about use of data models and metadata. Shockingly, none of those surveyed had conceptual data models in place, and half also lacked logical data models. Exacerbating this lack of a clear, technology-independent understanding of your data, all leaders surveyed cited a lack of effective metadata. Without these tools in place, data management is in danger of considerable rework and feeling like a DIY, best-endeavors frustration.

See also: Next Step: Merging Big Data and AI  

So, what are the common data problems I hear, when meeting data leaders across the country? Here is one that crops up most often:

Too much time taken up on data prep

I was reminded of this often-cited challenge by a post on LinkedIn from Martin Squires, experienced leader of Boots’ insight team. Sharing a post originally published in Forbes magazine, Martin reflected on how little has changed in 20 years. The survey he shared shows that, just as Martin and I found 20 years ago, more than 60% of data scientists’ time is taken up with cleaning and organizing data.

The problem might now have new names, like data wrangling or data munging, but the problem remains the same. From my own experience of leading teams, this problem will not be resolved by just waiting for the next generation of tools. Instead, insight leaders need to face the problem and resolve such a waste of highly skilled analyst time.

Here are some common reasons that the problem has proved intractable:

  • Underinvestment in technology whose benefit is not seen outside of analytics teams (data lakes/ETL software)
  • Lack of transparency to internal customers as to amount of time taken up in data prep (inadequate briefing process)
  • Lack of consequences for IT or internal customers if situation is allowed to continue (share the pain)

On that last point, I want to reiterate advice given to coaching clients. Ask yourself honestly, are you your own worst enemy by keeping the show on the road despite these data barriers? Have you ever considered letting a piece of work or regular job fail, to highlight technology problems that your team are currently masking by manual workarounds? It’s worth considering as a tactic.

Beyond that more radical approach, what can data leaders do to overcome these problems and achieve delivery of successful data projects to reduce the data wrangling workload? Here are three tips that I hope help set you on the right path.

Create a playpen to enable ‘play’ that prioritizes the data needed

Here, once again, language can confuse or divide. Whether one talks about data lakes or, less impressively, playpens or sandpits within a server or data warehouse, common benefits can be realized.

More than a decade working across IT roles, followed by leading data projects from the business side, taught me that one of the biggest causes of delay and mistakes was data mapping work. The arduous task of accurately mapping all the data required by a business, from source systems through any required ETL (extract, transform and load) layers, on to the analytics database solution is fraught with problems.

All too often this is the biggest cost and cause of delays or rework for data projects. Frustratingly, for those who do audit usage afterward, one can find that not all the data loaded is actually used. So, after frustration for both IT and insight teams, only a subset of the data really added value.

This is where a free-format data lake or playpen can really add value. It should enable IT to dump data there with minimal effort, or insight teams to access potential data sources for one-off extracts to the playpen. Here, analysts or data scientists have the opportunity to play with the data. However, this capability is far more valuable than that sounds. Better language is perhaps “data lab.” Here, the business experts have the opportunity to try different potential data feeds and the variables within them, and to learn which are actually useful/predictive/used for the analysis or modeling that will add value.

The great benefit of this approach is to enable a lower cost and more flexible way of de-scoping the data variables and data feeds actually required in live systems. Reducing those can radically increase the speed of delivery for new data warehouses or releases of changes/upgrades.
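As a simple illustration of how that de-scoping decision might be informed, the sketch below checks fill rates for candidate variables in a playpen extract. The column names and the 80% threshold are hypothetical, and in practice you would also look at predictive power and actual usage, not just completeness.

```python
import pandas as pd

# Hypothetical playpen extract: candidate variables pulled with minimal effort.
extract = pd.DataFrame({
    "customer_id":   [1, 2, 3, 4, 5],
    "tenure_months": [12, 48, 3, None, 60],
    "branch_visits": [None, None, None, 1, None],   # barely populated
    "email_opens":   [5, 0, 2, 7, 1],
})

def keep_or_drop(df: pd.DataFrame, min_fill: float = 0.8) -> pd.Series:
    """Flag variables whose fill rate suggests they are worth loading into live systems."""
    fill_rates = df.notna().mean()
    return fill_rates.apply(lambda rate: "keep" if rate >= min_fill else "review/drop")

print(keep_or_drop(extract))
# customer_id      keep
# tenure_months    keep
# branch_visits    review/drop
# email_opens      keep
```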

Recruit and develop data specialist roles outside of IT

The approach proposed above, together with innumerable change projects across today’s businesses, needs to be informed by someone who knows what each data item means. That may sound obvious, but too few businesses have clear knowledge management or career development strategies to meet that need.

Decades ago, small IT teams contained long-serving experts who had built all the systems in use and were actively involved in fixing any data issues that arose. If they were also sufficiently knowledgeable about the business and how each data item was used by different teams, they could potentially provide the data expertise I propose. However, those days have long gone.

Most corporate IT teams are now closer to the proverbial baked bean factory. They may have the experience and skills needed to deliver the data infrastructure. But they lack any depth of understanding of the data items (the blood) that flow through those arteries. If the data needs of analysts or data scientists are to be met, they need to be able to talk with experts in data models, data quality and metadata: people who can discuss what analysts are seeking to understand or model in the real world of a customer and translate that into the most accurate and accessible proxy within the data variables available.

So, I recommend insight leaders seriously consider the benefit of in-house data management teams, with real specialization in understanding data and curating it to meet team needs. We’ve previously posted some hints for getting the best out of these teams.

Grow incrementally, delivering value each time, to justify investment

I’m sure all change leaders and most insight leaders have heard the advice on how to eat an elephant or deliver major change. That rubric, to deliver one bite at a time, is as true as ever.

Although it can help for an insight leader to take time out, step back and consider all the data needs/gaps – leaders also need to be pragmatic about the best approach to deliver on those needs. Using the data lake approach and data specialists mentioned above, time should be taken to prioritize data requirements.

See also: Why to Refocus on Data and Analytics  

Investigating data requirements so that each can be scored against both potential business value and ease of implementation (classic Boston Consulting grid style) can help with scoping decisions. But I’d also counsel against simply cherry-picking the most promising and easiest-to-access variables.

Instead, think in terms of use cases. Most successful insight teams have grown incrementally, by proving the value they can add to a business one application at a time. So, dimensions like the urgency and importance of business problems come into play as well.
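Here is a minimal sketch of that kind of prioritization, assuming invented candidate feeds and 1-5 scores for value, ease and urgency; the simple product used to rank them is just one of many reasonable scoring rules.

```python
# Hypothetical candidate data feeds, each scored 1-5 on three dimensions by the insight leader.
candidates = [
    {"feed": "web clickstream",   "value": 4, "ease": 2, "urgency": 3},
    {"feed": "complaints log",    "value": 4, "ease": 4, "urgency": 5},
    {"feed": "call centre notes", "value": 3, "ease": 1, "urgency": 2},
]

def priority(candidate: dict) -> int:
    """One simple scoring rule: multiply value, ease and urgency to favour quick, important wins."""
    return candidate["value"] * candidate["ease"] * candidate["urgency"]

for c in sorted(candidates, key=priority, reverse=True):
    print(f'{c["feed"]:<17} priority={priority(c)}')
# complaints log    priority=80
# web clickstream   priority=24
# call centre notes priority=6
```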

For your first iteration of a project to invest in extra data, then prove value to the business to secure budget for the next wave, look for the following characteristics:

  • Analysis using data lake/playpen has shown potential
  • Relatively easy to access data and not too many variables (in the quick win category for IT team)
  • Important business problem that is widely seen as a current priority to fix (with rapid impact able to be measured)
  • Good stakeholder relationship with business leader in application area (current or potential advocate)

How is your data wrangling going?

Do your analysts spend too much time hunting down the right data and then corralling it into the form needed for required analysis? Have you overcome the time burned by data prep? If so, what has worked for you and your team?

We would love to hear of leadership approaches/decisions, software or processes that you have found helpful. Why not share them here, so other insight leaders can also improve practice in this area?

Let’s not wait another 20 years to stop the data wrangling drain. There is too much potentially valuable insight or data science work to be done.

What Blockchain Means (Part 2)

Our first post covered the morning sessions on blockchain at the #CityChain17 event organized by MBN Solutions and held at IBM’s spacious SouthBank offices. Our next speakers focused more on applying the technology in your business.

So, here are some more reflections from listening to those speakers, together with blockchain resources that I hope you’ll find useful.

How to get from concept to implementation

First up was Peter Bidewell (CMO of Applied Blockchain). Complementing the earlier technology detail, he unashamedly emphasized engaging the wider business, especially senior leaders (a popular topic for this blog).

He emphasized that his firm was finding real business uses for the technology and that it specialized in the “smart contracts” capability of blockchain.

The benefits of blockchain that he is seeing as more relevant for business clients are:

  • Tamper-proof actions/events
  • Peer-to-peer (avoiding cost of intermediaries)
  • Innately secure (built-in encryption and consensus)
  • Pre-reconciled data (automatically synchronized)
  • Smart contracts

But to apply this technology in business, he has found, the company needed to develop a number of other augmentations/supporting capabilities. These include a blockchain “mantle” with:

  • Platform-agnostic implementation of blockchain
  • Data-privacy “capsule” used within the chain
  • Identity management service
  • System performance improvements

See also: What Blockchain Means for Insurance  

In addition to that “enhanced blockchain” capability, real world business applications have required a “full stack” of technologies:

  1. Blockchain (of choice)
  2. Mantle (the above enhancements)
  3. Integration with other key business systems
  4. Front-end (user experience, or UX)

Bidewell explained that a smart contract has nothing to do with replacing lawyers. Rather, it is a container of data and code (a block that can be placed on the chain/shared ledger/network). It can contain the following (illustrated in the sketch after this list):

  1. Data
  2. Permissions
  3. Workflow logic
  4. Token (if simulating passing of funds)
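Purely as an illustration of that container idea (not Applied Blockchain’s actual implementation, nor any particular platform’s API), those four ingredients could be sketched as a simple data structure:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class SmartContractBlock:
    """Illustrative only: the four ingredients described above, bundled into one 'block'."""
    data: dict                          # the shared facts the parties agree on
    permissions: dict                   # who may read or update which fields
    workflow: Callable[[dict], bool]    # logic that decides whether the block is valid
    token: Optional[float] = None       # optional value transfer being simulated

# Hypothetical example: a delivery milestone that releases a payment token when signed off.
milestone = SmartContractBlock(
    data={"order_id": "A123", "delivered": True, "signed_off_by": "buyer"},
    permissions={"supplier": ["read"], "buyer": ["read", "update"]},
    workflow=lambda d: d["delivered"] and d["signed_off_by"] == "buyer",
    token=1000.0,
)

print("release payment?", milestone.workflow(milestone.data))  # release payment? True
```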

He finished by sharing some interesting applications. His company is working with Bank of America, Appii, Nuggets and SITA.

The first of those is perhaps the most relevant for readers. BABB is to be the first blockchain-based bank, “an app store for banking.”

The Appii pilot is also interesting, as it enables a sort of verified LinkedIn or CV (with qualifications/experience validated by providers). But the example that sticks in the memory best is real-time drone regulation for SITA: the world’s first blockchain-based registry of all drones.

What’s the path to mainstream adoption?

Acknowledging the emerging reality at this event (that commercial blockchain case studies are still in pilot stage), our next speaker shared his experience and thoughts on making greater progress.

Brian McNulty is a founder of the R3 Consortium (mentioned in part one). This is the world’s largest blockchain alliance, with more than 70 major financial services firms and more than 200 software firms and regulators already members.


What does R3 do? Well, apparently it collaborates on commercial pilots. It also provides labs and a research center to support organizations during their innovation. R3 has its own technology (R3 Corda implementation) and own “path to production” methodology. So, perhaps some resources worth checking out.

Akin to what we have learned for customer insight and data science pilots, McNulty confirmed that the path to mainstream adoption will be a “burning platform.” What story will make the case for such an unacceptable status quo that organizations must make the leap to blockchain (to avoid the flames)?

He suggests a few pointers:

  • Collaboration is increasing, adding complexity;
  • The appetite of regulators is increasing, as they grasp the benefits of pushing for distributed ledgers as market solutions;
  • More work is needed on standards (but the dust is settling, and competition is reducing)
  • Will we get to cash on the blockchain? (probably more a move to digital assets on ledger being counted as monetary assets)
  • The real burning platform will probably be increased operating costs (currently $2.6 trillion annually, with blockchain promising 20% savings)

Despite all that, McNulty confirmed that most businesses are still only at pilot stage. But, apparently, some FS firms are having IT developers trained en masse (so that blockchain can be considered as just another technology option to meet business requirements).

Bursting the blockchain hype bubble

Next was a man who should seriously consider a second career in stand-up comedy. Dave Birch is innovation director for Consult Hyperion. He gave a hilarious comedy session on the hype around blockchain.

Using just genuine newspaper headlines, he revealed how blockchain is apparently the answer for every industry, transforming everything from banking to burgers and healthcare and ending global poverty. As an aside, he shared the amusing story of how Amex was conned during the “Great Salad Oil swindle” of 1963.

He used that as an analogy for the crucial issue of how not to get swindled by hyped blockchain claims. The key, it appears, is to always ask: What’s in the blocks?

Birch also shared his four-layered model of a shared ledger:

  1. Contract (smart contract built upon)
  2. Consensus
  3. Content
  4. Communications (robust)

He described the lower three as a “consensus computer.” He also introduced a taxonomy of blockchain implementations. This was divided into a simple binary tree built on two layers of questions:

  • Is it a public or private ledger?
  • Is it permissioned or double-permissioned?

If you think about it, a shared ledger is really a practical example of the much talked about RegTech. Dave pointed out that a shared-ledger solution would have uncovered the Great Salad Oil Swindle, because the macro production numbers would have been unbelievable. A lot of the hype is misguided, because blockchain can’t fix individual problems, but it can spot systemic errors.

An interesting analogy he shared was an old idea about the best way to avoid bank branch robberies. At a time when many architects were suggesting military-like protections for staff and vaults, one radical turn-of-the-century designer suggested the opposite: a bank built mainly of glass. If everyone can see what is going on, the bank robber has nowhere to hide.

That is the principle of blockchain, the power of radical transparency. So, businesses may get more value thinking how to radically redesign, rather than just reengineer, existing database solutions into a blockchain app.

See also: Blockchain: What Role in Insurance?  

Getting back to the customer benefit of blockchain

Our final speaker brought us back to that emphasis from the panel session: What is in it for the customer? (A topic that is preaching to the choir on this blog.)

Peter Ferry, commercial director at Wallet Services, suggested that blockchain is gradually becoming an invisible technology option. The focus will return to customer needs and business requirements, with IT departments worrying about when blockchain is the right technology solution for needs.

But when would it be relevant? How can blockchain make our lives simpler?

As Ferry rightly pointed out, the development of the internet and today’s digital applications should be a warning. Mostly, digital technology has not made our lives simpler; if anything, our lives are now more complex and demanding. The internet has developed differently than was originally dreamed (a distributed and robust network for military purposes).

Blockchain can potentially do a lot for customers, including: security by default, sovereignty of their own data and no single point of failure. Customer-focused design principles have to be applied to this enabling technology to deliver real value.

So, there is a strong case for customer insight teams to partner with blockchain development teams to help enable this.

For its part, Wallet Services used this event to launch its enabling technology. SICCAR can be thought of as Blockchain as a Service, including APIs, services and pre-fabricated business use cases. It might be worth checking out.

How will you approach the potential of blockchain for your business?

I hope this post was also useful, giving you food for thought and some helpful resources/contacts.

Where are you on this journey? Are you still learning about blockchain?

Do you have plans to partner with a blockchain development team? Are you already using customer insight to guide blockchain pilots?

If so, please let us know what’s working for you or any pitfalls to avoid (using the comments section below).

What Blockchain Means for Analytics

I recently had the pleasure of attending #CityChain17 (blockchain conference) at IBM’s SouthBank offices.

Chaired by Paul Forrest (chairman of MBN Solutions), the conference was an opportunity to learn about blockchain and how it is being applied.

In the past, I viewed the hype about blockchain (following the excitement about Bitcoin, its most famous application) as just another fad that might pass.

However, as more businesses have got involved in piloting potential applications, it’s become obvious that there really is something in this – even if its manifestations are now much more commercial than the hacking by Bitcoin fans.

CityChain17 brought together a number of suppliers and those helping shape the industry. It was a great opportunity to hear voices, at times contradictory, and see what progress has been made toward mainstream adoption. There was so much useful content that I made copious notes and will share a series of two blog posts on this topic.

So, without further ado, as a new topic for our blog, here is part 1 of my recollections from this blockchain conference.

Introducing blockchain and why it matters

The first speaker was John McLean from IBM. He reviewed the need that businesses have for a solution to the problem of increasingly complex business and market networks, with the need to securely exchange assets, payments or approvals between multiple parties. He explained that, at core, blockchain is just a distributed ledger across such a network.

In such a scenario, all participants have a regulated local copy of the ledger, with bespoke permissions to approve blocks of information.

However, he also highlighted that today’s commercial applications of blockchain differ from the famous Bitcoin implementation:

  • Such applications can be internal or external.
  • Business blockchain has identity rather than anonymity, selective endorsement rather than proof of work, and a wider range of assets rather than a single cryptocurrency.
  • Blockchain for businesses is interesting because of the existing problems it solves. Broader participation in a shared ledger reduces cost and reconciliation workload. Smart contracts offer embedded business rules alongside the data blocks on the ledger. Privacy improves because transactions are secure, authenticated and verifiable. So does trust, because all parties are able to rely on a single shared ledger into which all have bought.
  • Several sectors are currently testing blockchain implementations, including financial services, retail, insurance, manufacturing and the public sector.

Finally, John went on to outline how IBM is currently enabling this use of blockchain technology (including through its participation in the Hyperledger consortium and its Fabric Composer tool).

See also: 5 Main Areas for Blockchain Impact  

Comparing blockchain to databases, anything new?

As someone who was involved in the early days of data warehouses and data mining, I was delighted to hear the next speaker (Dr. Gideon Greenspan from Coin Sciences) talk about databases. Acknowledging that a number of the so-called unique benefits of blockchain can already be delivered by databases, Gideon began by suggesting there had been three phases of solutions to the business challenges of exchanging and coordinating data:

  1. Peer-to-peer messaging
  2. Central shared database
  3. Peer-to-peer databases

He had some great examples of how the “unique benefits” of blockchain could be achieved with databases already:

  • Ensuring consensus in data (B-trees in relational databases)
  • Smart contracts (the logic in these is equivalent to stored procedures)
  • Append-only inserts (database that only allows inserts)
  • Safe asset exchanges (the ACID model of database transactions)
  • Robustness (distributed and massively parallel databases)

Even more entertaining, in a room that was mainly full of blockchain advocates, developers or consultants, Gideon went on to list what was worse about blockchain vs. databases:

  • Transaction immediacy (ACID approach is durable, but blockchains need to wait for consensus)
  • Scalability (because of checks, blockchain nodes need to work harder)
  • Confidentiality (blockchains share more data)

After such honesty and frankly geeky database technology knowledge, Gideon was well-placed to be an honest adviser on sensible use of blockchain. He pointed out the need to consider the trade-offs between blockchain and database solutions. For instance, what is more important for your business application:

  • Disintermediation or confidentiality?
  • Multiparty robustness or performance?

Moving to more encouraging examples, he shared a few that have promising blockchain pilots underway:

  1. An instant payment network (using tokens to represent money, it’s faster, with real-time reconciliation and regulatory transparency)
  2. Shared metadata solution (as all data added to the blockchain is signed, time-stamped and immutable – interesting for GDPR requirements, even if the “right to be forgotten” sounds challenging)
  3. Multi-jurisdiction processes (regulators are interested)
  4. Lightweight financial systems (e.g. loyalty schemes)
  5. Internal clearing and settlements (e.g. multinationals)

But a final warning from Gideon was to be on the watch for what he termed “half-baked blockchains.” He pointed out the foolishness of:

  • Blockchains with one central validator
  • Shared state blockchains (same trust model as a distributed database)
  • Centrally hosted blockchain (why not a centralized database?)

Gideon referenced his work providing the MultiChain open platform as another source for advice and resources.

Blockchain is more complex, hence the need for technical expertise

A useful complement (or contradictory voice, depending on your perspective) was offered next. Simon Taylor (founder of 11:FS and ex-Barclays innovation leader) shared more on the diversity of technology solutions.

Simon is also a founder of yet another influential and useful group working on developing/promoting blockchain, the R3 Consortium. He credits much of what he has learned to a blogger called Richard Brown, who offers plenty of advice and resources on his blog.

One idea from Richard that Simon shared is that different technology implementations of blockchain, or platforms for developing them, are best understood as sitting on a continuum, with more centralized applications for FS (like Hyperledger and Corda) at one end and the radically decentralized Wild West (Bitcoin, z-Cash and Ethereum) at the other. He suggests the interesting opportunities lie in the middle ground between these poles (currently occupied by approaches like Stellar and Ripple).

Simon went on to suggest a number of principles that are important to understand:

  • The shared ledger concept offers better automated reconciliation across markets.
  • But, as a result, confidentiality is a challenge (apparently Corda et al. are solving this, but at the expense of more centralization).
  • No one vendor (or code-base/platform) has yet won.
  • It is more complicated than the advertising suggests, so look past the proof of concept work to see what has been delivered (he suggests looking at interesting work in Tel Aviv and at what Northern Trust is doing).

To close, Simon echoed a few suggestions that will sound familiar to data science leaders. There continues to be an education and skills gap. C-Suite executives recognize there is a lot of hype in this area and so are seeking people they can trust as advisers. Pilot a few options and see what approach works best for your organization.

He also mentioned the recruitment challenge and suggested not overlooking hidden gems in your own organization. Who is coding in their spare time anyway?

In the Q&A, GDPR also got a mention, with a suggestion that auditors will value blockchain implementations as reference points with clear provenance.

See also: Why Blockchain Matters to Insurers  

Time for a blockchain panel

After three talks, we had the opportunity to enjoy a panel debate. Paul Forrest facilitated, and we heard answers on a number of topics from experts across the industry. Those I agreed with (and thus remembered) were Tomasz Mloduchowski, Isabel Cooke and Parrish Pryor-Williams.

I took the opportunity to ask about the opportunity for more cooperation between the data science and blockchain communities, citing that both technology innovations needed to prove their worth to the C-suite and had some overlapping data needs. All speakers agreed that more cooperation between these communities would be helpful.

Isabel’s team at Barclays apparently benefits from being co-located with the data science team, and Parrish reinforced the need to focus on customer insights to guide application of both technologies. What panelists appear to be missing is that, in most large organizations, blockchain is being tested within IT or digital teams, with data science left to marketing or finance/actuarial teams. This could mean a continued risk of siloed thinking rather than the cooperation needed.

An entertaining question concerned what to do with all the fakes now rapidly adding blockchain as a buzzword to their CVs and LinkedIn profiles. Surprisingly, panelists were largely positive about this development. They viewed it as an encouraging tipping point of demand and accepted that some will need to fake it ’til they make it. There was also an encouragement to use meetups to get up to speed more quickly (for candidates and those asking the questions).

The panel also agreed that there was still a lack of agreement on terms and language, which sometimes got in the way. Like the earlier days of internet and data science, there are still blockchain purists railing against the more commercial variants. But the consensus was that standards would emerge and that most businesses were remaining agnostic on technologies while they learned through pilots.

The future for blockchain was seen as being achieved via collaborations, like R3 and Hyperledger. A couple of panelists also saw fintech startups as the ideal contenders to innovate in this space, having the owner/innovator mindset as well as the financial requirements.

It will be interesting to see which predictions turn out to be right.

What next for blockchain and you?

How do you think blockchain develops, and do you care? Will it matter for your business? Have you piloted to test that theory?

I hope my reflections act as a useful contact list of those with expertise to share in this area. Let us know if this topic is something you would like covered more on the Customer Insight Leader blog.

That’s it for now. More diverse voices on blockchain in Part 2….