Tag Archives: CI

Chatbots Aren’t Dead, but I Wish…

Around two years ago, the term “chatbot” shot into our vocabulary and onto the agendas of CIOs and CMOs everywhere. The idea that a customer could simply “chat” with a robot any time, anywhere made so much sense — or did it? Any technology solution or product implemented without a clear problem in mind is just wasteful. And it is this lack of planning that put chatbots on a fast path to nowhere in many companies.

Two and a half years ago, there were only a handful of chatbot providers. A year ago, there were thousands. Any remotely adept coder could whip a bot together in a few hours and, surprise, surprise, VCs went in hot pursuit of companies to fund.

Fast forward to today, and we’re constantly hearing the phrases “our chatbot proof of concept was not what we hoped,” or “we tried chatbots, and they didn’t work” rolling off the tongues of those same CIOs and CMOs.

But we’re not surprised. In fact, we welcome the demise of pointless technology. When we last checked, Facebook Messenger had more than 100,000 chatbots. Many of them are failing to impress, leaving users underwhelmed and frustrated.

See also: Chatbots and the Future of Interaction  

Automation needs a purpose

So, is this the end of chatbots?

It certainly is the end of companies creating chatbots for the sake of having a chatbot. But it is the beginning of a major technology shift, a quasi-revolution called AI-based automation, and chatbots certainly have an important role to play.

Companies waste resources when they implement new technologies without first establishing an actual problem to solve. The same theory applies to automation, AI and chatbots.

For chatbots to survive, they have to solve a business problem. Period. Executives must clearly define this problem and distill it into real use cases that have true ROI or Net Promoter Score implications — meaningful implications.

As soon as a team clearly maps out the use cases, the case for automation comes next. Can the company solve this problem by removing the human element in the back end? If so, there will undoubtedly be a cost benefit to the company. A smart design here will allow for escalation to a human agent in the (let’s hope) shrinking contact center.

Once the higher-ups give automation the green light, the company must spin up myriad other technologies to create an effective system that solves the problem in the long term. As an example, if the business problem were around customer service and the use case were automating bill pay, then payment gateways, an asynchronous messaging channel, an authentication system, encryption and privacy layer, feedback loop, API bridge into the billing system and others would need to work in unison to provide a complete solution.
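
The orchestration described above can be sketched in a few lines. This is a minimal, hypothetical sketch: the component names (AuthService, BillingAPI, PaymentGateway) are illustrative stand-ins for the real systems, not any actual vendor API.

```python
class AuthService:
    """Stand-in for the authentication system."""
    def verify(self, user_id, token):
        # A real implementation would validate against an identity provider.
        return token == f"token-{user_id}"

class BillingAPI:
    """Stand-in for the API bridge into the billing system."""
    def outstanding_balance(self, account):
        return 42.50  # a real call would query the billing back end

class PaymentGateway:
    """Stand-in for the payment gateway."""
    def charge(self, account, amount):
        return {"status": "ok", "account": account, "amount": amount}

def pay_bill(user_id, token, account):
    """One conversational turn: authenticate, look up the bill, take payment."""
    if not AuthService().verify(user_id, token):
        return {"status": "auth_failed"}
    amount = BillingAPI().outstanding_balance(account)
    receipt = PaymentGateway().charge(account, amount)
    return {"status": "paid", "receipt": receipt}
```

The point of the sketch is that the chat front end is the thinnest layer; the value lies in these components working in unison behind it.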

Rethinking the word ‘chatbot’

You’re now probably wondering where the chatbot comes in. Well, therein lies the point of this article: A chatbot only has a role to play if it delivers utility to the customer. In the case of bill pay, the visual experience the bot presents to a consumer is in the form of a chat. Developers program this conversation inside the chatbot using either decision trees or natural language understanding.
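
The decision-tree approach can be illustrated with a toy example. The tree, prompts and replies below are invented for this sketch; a real deployment would be far larger and might layer natural language understanding on top to map free text onto these branches.

```python
# Each node maps a recognized user reply to the next node in the conversation.
TREE = {
    "start": {"prompt": "Would you like to pay your bill?", "yes": "confirm", "no": "end"},
    "confirm": {"prompt": "Pay the outstanding balance from your saved card?", "yes": "done", "no": "end"},
    "done": {"prompt": "Payment complete. Anything else?"},
    "end": {"prompt": "Okay, goodbye."},
}

def step(state, user_reply):
    """Advance the conversation; an unrecognized reply re-prompts the same node."""
    node = TREE[state]
    return node.get(user_reply.strip().lower(), state)
```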

See also: How Chatbots Change Open Enrollment  

If I had one wish for this industry, it would be that we get rid of the term “chatbot” and instead call this user interface built around conversations a CI, or conversational interface.

CIs done properly, with a true business problem in mind, will reach deep into the back end through a persistent and secure messaging channel, allowing the customer to do business — any time, anywhere and, most importantly, happily.

Missed Opportunity for Customer Insight

Customer insight (CI) teams can take different forms in different businesses (often rightly so, reflecting the needs of each business). One such variation is the reporting line. Some CI teams report into operations, sales, IT or even finance. However, by far the most common reporting line is into marketing.

See also: 3 Skills Needed for Customer Insight  

That makes sense to me, as over the years I have seen more and more applications for customer insight across the marketing lifecycle. Increasingly, marketing teams are realizing that use of data, analytics, research and database marketing techniques is part of their role. Sadly, these technical teams are, too often, still separated. But at least there are signs of collaboration.

Marketing Automation:

Companies and leaders also recognize different applications of insight to marketing. Some focus on early-stage roles in strategic decisions, some on proposition development and some on campaign execution or marketing measurement. Very few appear to use customer insight in all they do.

Meanwhile, one of the trends of recent years has been the adoption of marketing automation systems. In some cases, the term has almost been used to replace the infamous customer relationship management (CRM) system. But, for many businesses, it is more about bringing a structured workflow, resource management and quality controls to the work of marketing teams. Talking with consultants who specialize in helping businesses implement marketing automation systems (none appear to work straight out of the box) reveals a sadly lacking focus on customer insight.

This is such a missed opportunity. The marketing workflow needed by today’s business requires input, validation, targeting or measurement at almost every stage. But it seems that marketing automation designs are not routinely embedding customer insight deliverables into marketing processes.

Regulation:

It is perhaps surprising that more focus has not been put on automating routine use of insight in marketing, given the regulatory environment.

Whether you consider certain vertical markets (like the role of the Financial Conduct Authority), or the higher hurdles coming to all data uses (with the adoption of general data protection regulation, or GDPR, principles), marketers will need more evidence. Those data marketers keeping up-to-date with their professional responsibilities will realize they need to evidence suitability of their offerings, targeting of their communications and appropriate use of data.

Where’s the gap?

So, in what parts of the marketing lifecycle are marketers neglecting to use customer insight? Where are the most important gaps?

Based on my consultancy work, often helping companies design their customer insight strategy, I would identify the following common gaps:

Participation decisions:

  • Either not having a clear understanding of market segments, or not making participation decisions (about product categories or distribution channels) based on segment fit or size of opportunity.

Communication design:

  • The use of insight generation has grown for product design (as per our recent series), but too few marketing teams also use that same insight generation to design their communication.

Communication testing:

  • Quite often this is left to ad hoc qualitative research, with insufficient use of techniques like eye-tracking or quantitative experimentation at concept stage.

Event triggers:

  • Identified as important to targeting in two recent research reports, from the DMA and MyCustomer/DataIQ, event triggers deserve to be more widely used in targeting marketing campaigns. For further thoughts on why propensity models alone are not enough, see previous posts on both events and propensity models.

Holistic marketing measurement:

  • As more and more marketing directors are expected to report on their return on investment (ROI) or return on marketing expenditure (ROME), once again insight can help. This means not just the traditional database marketing practice of reporting incremental return against control groups but also, increasingly, the design of holistic measurement programs (converging evidence from brand tracking, econometrics and other data sources). This previous post shares some more detail on that.

Will you be insightful or ignored?

In closing, I’d encourage all customer insight leaders to get closer to those leading marketing in their businesses. Marketing will become increasingly challenging over the next 12 months. CI leaders have the potential to become trusted advisers who can support marketing directors in navigating those choppy waters.

See also: The 4 Requirements for Customer Insight  

To return to the theme of regulation: I once more advise readers not to underestimate the potential impact of the EU’s general data protection regulation (GDPR) on their businesses. Despite Brexit, every commentator seems to agree that this regulation will affect U.K. businesses. The most eye-catching element may be the scale of potential fines (as much as 4% of global annual revenue), but the changes to consent may affect marketers more. The new hurdle will be proving positive, unambiguous consent. Many businesses may conclude they need to move to opt-in for all marketing content.

So, going forward, the biggest threat to marketers (those not embedding insight into their processes) may not just be losing customers. It may be losing the right to talk to them!

The Dangers of Public Segmentations

Recently, it seems that developing public segmentations of your customers or citizens and then sharing them for all to see is becoming fashionable.

In part, this is to be applauded and welcomed.

The trend highlights a key tool within the customer insight toolkit, encourages greater focus on understanding people and embraces the need for greater transparency. However, there is also an inherent risk that readers will fail to understand the purpose, design and limitations of such segmentations and thus unwittingly apply them where they will not help.

This reminds me of a time many years ago when psychometric segmentations were very popular in business circles. Myers Briggs (MBTI) and many other profiles were enthusiastically applied and team members categorized into their “type.” Sadly, all too often, this perception about some important differences between team members was filed away following the team-building exercise and never used again. Screening interview candidates via psychometric segments was also “flavor of the month” at one stage, although I hear it being much more rarely used now (or only as part of a mix of “facts” to be considered).

Perhaps part of the problem can be a misunderstanding of the role of segmentation. As posted previously, segmentation is just one of a number of statistical tools available, and each segmentation will be designed to achieve a particular purpose. For this reason, more than one segmentation of customers may be entirely appropriate and insightful for a business that is able to handle such complexity (though most business leaders dislike this idea).

But let’s return to reviewing some of those recently published public segmentations. The first one I want to consider is the Consumer Spotlight segmentation published by the FCA.

While this appears a useful segmentation to help the FCA understand and focus on more vulnerable segments with regard to financial understanding or access, it is also important to recognize its limitations. A 10-segment model will only ever be appropriate for understanding macro attitudes and behaviors. My own experience of segmenting consumers within different product markets tells me that both attitudes and behaviors can vary widely once you drill down to specific needs or products. So, it’s important to realize that this segmentation has been designed to focus on dimensions like vulnerability, detriment and financial risk. Thus it is most relevant for the FCA itself, to help target communications.

A second example is a commercial business taking such a public approach to sharing a segmentation. It is the Centre for the Modern Family segmentation funded by Scottish Widows.

This is another interesting segmentation, as it seeks to highlight and track changing social attitudes, family structures and pressures on modern families of many different types. However, once again it is important to realize the limitations of this survey. It is an attitudinal segmentation, constructed from a combination of “qual and quant” survey results, interpreted by an expert panel drawn from academia, social care and commerce. As such, this is a subjective perspective evidenced by self-reported attitudes and behaviors. Although such an understanding can be very rich, the inability to overlay this segmentation onto customer databases means that actual behavior cannot be verified or targeted actions or communications executed (often a drawback of attitudinal segments).

My final example is from the UK government. There are two I could have chosen here, as they have also recently published a segmentation on “climate change and transport choices,” but I’ve chosen to highlight the segmentation exercise published in regard to the problem of digital exclusion.

Once again, it’s encouraging to see this segmentation exercise being undertaken and the transparency regarding approach and progress. However, it also appears to run a risk shared by a number of other “hybrid segmentations”: that differences highlighted in various research studies or other sources are “cherry picked” to construct a patchwork quilt of apparently rich understanding that is not evidenced on a consistent basis. This can be seen in the infographic embedded in the above article. Even constructing a behavioral/demographic framework for a segmentation on that basis and then consistently surveying each segment runs the risk of masking important differences because of the averaging effect of artificially constructed segments. It will be interesting to see how government advisers and agencies avoid those risks.

I hope you found that interesting and are engaging with the current level of focus on segmentation in government and media. If these segmentations are approached carefully and interpreted appropriately, they should be another driver of greater influence and seniority for customer insight leaders. That is our cause célèbre.

How Effective Is Your Marketing?

When speaking about the power of having different technical disciplines converge to yield customer insights, it’s common to focus on analytics and research.

However, another rich territory for seeing the benefit of multiple technical disciplines to deliver customer insight (CI) is measuring how effective your marketing is.

One reason for calling on the skills of two complementary CI disciplines is the need to measure different types of marketing spending. The most obvious example is probably the challenge of measuring the effectiveness of “below the line” vs. “above the line” marketing. For those not so familiar with this language, born out of accounting terminology, the difference can perhaps be best understood by considering the “purchase funnel.”

Most, if not all, marketers will be familiar with the concept of a purchase funnel. It represents the steps that need to be achieved in a consumer journey toward making a purchase. Although often now made more complex, to represent the nuanced stages of online engagement/research or the post-sale stages toward retention/loyalty, at its simplest a purchase funnel represents four challenges. These are to reach a mass of potential consumers and take some on the journey through awareness, consideration and preference to purchase. The analogy of the funnel represents that fewer people will progress to each subsequent stage.

Back to our twin types of marketing: Above-the-line marketing (ATL) is normally the use of broadcast or mass media to achieve brand awareness and consideration for meeting certain needs. Getting on the “consideration list,” if you will. Traditionally, ATL was often TV, radio, cinema, outdoor and newspaper advertising. Below-the-line marketing (BTL) is normally the use of targeted direct marketing communications to achieve brand/product preference and sales promotions. Traditionally, this was often direct mail, outbound calling and email marketing. In recent years, many marketers talk in terms of “through-the-line” (TTL) advertising, which is an integrated combination of ATL and BTL messages for a campaign. Social media marketing is often best categorized as TTL, but elements can be either ATL or BTL, largely distinguished by whether you can measure who saw the marketing and have feedback data on their response.

Let’s return to the theme of using multiple CI disciplines to measure the effectiveness of these different types of marketing. The simpler example is BTL. Here, the data that can be captured on both who was targeted and how they behaved enables the application of what is called the experimental, or scientific, method. In essence, database marketing teams use their skills to set up campaigns with control cells and feedback loops, merge the resulting data, evidence incremental changes in behavior resulting from the stimuli of marketing campaigns and optimize future targeting.
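
The control-cell arithmetic at the heart of this method is simple. A minimal sketch, with purely illustrative numbers:

```python
def incremental_lift(treated_buyers, treated_size, control_buyers, control_size):
    """Incremental response rate: treated response rate minus control response rate."""
    return treated_buyers / treated_size - control_buyers / control_size

# Illustrative: 300 buyers among 10,000 mailed vs. 150 among a 10,000-person holdout.
lift = incremental_lift(300, 10_000, 150, 10_000)
# lift is 0.015, i.e. roughly 150 incremental sales per 10,000 mailed.
```

The control cell is what turns raw response counts into evidence of incremental change: without the holdout, the 300 buyers could not be attributed to the campaign at all.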

ATL is more of a challenge. Because control cells do not exist and it is impossible to be certain who saw the marketing, the comparison needs to be based on time series data. Here, the expertise of analytics teams comes to the fore, especially econometric modeling. This can best be understood as a set of statistical techniques for identifying which of many possible factors best explain changes in sales over time, combined into a model that can predict future sales patterns from those inputs. There are many skills needed here, and the topic is worthy of a separate post, but for now suffice it to say that analytical questioning techniques to elicit potential internal and external factors are as important as modeling skills.
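
At its core, this means regressing sales over time against candidate drivers. A deliberately tiny sketch with one driver and invented numbers (real econometric models include many drivers, seasonality, adstock effects and more):

```python
def ols_simple(x, y):
    """Fit y = a + b*x by ordinary least squares (one driver only, for illustration)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b

# Purely illustrative: weekly ad spend vs. weekly sales.
spend = [10, 20, 30, 40]
sales = [105, 110, 115, 120]
base, per_unit = ols_simple(spend, sales)  # baseline sales, and sales per unit of spend
```

The fitted coefficients separate a baseline level of sales from the portion explained by spend, which is exactly the decomposition an attribution discussion needs.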

I hope you can see that my definition of today’s TTL marketing campaigns thus necessitates making use of both database marketing and analytics team skills to measure marketing effectiveness. But beyond this simply being a division of labor between ATL elements being measured by analytics teams and BTL by database marketing ones, there is another way they need to work together.

Reaching the most accurate or helpful marketing attribution is an art as much as a science. In reality, even BTL marketing effectiveness measurement is imprecise (because of the complexities of media interdependencies and not knowing if the consumer really paid attention to communications received). In a world where your potential consumers are exposed to TTL marketing with omni-channel options of response, no one source of evidence or skill set provides a definitive answer. For that reason, I once again recommend convergence of customer insight evidence.

Best practice is to garner the evidence from: (a) incremental behavior models (econometric or experimental method); (b) sales reporting (reconciling with finance numbers); (c) market position (research trackers); (d) media effectiveness tracking (reconciling with behavior achieved throughout purchase funnel).

Converging all this evidence, provided by data, analytics, research and database marketing, provides the best opportunity to determine robust marketing attribution. But do keep a record of your assumptions and hypotheses to be tested in future campaigns.

I hope that was helpful. How are you doing at measuring the effectiveness of your marketing? I hope you’re focused on incremental profit, not “followers.”

Creating a Customer-Insight Strategy

Too few companies have a customer strategy, let alone a customer-insight (CI) strategy. At least, that’s my experience.

In fact, many business strategies that I’ve seen, which seek to pepper their presentation with customer language, are really channel strategies or product strategies that reflect the silos in that business.

This is unfortunate, as most CEOs would acknowledge the critical importance of having their business understand, acquire, satisfy and retain customers (ideally, converting them into advocates). But perhaps the lack of customer insight in strategies reflects that many boardrooms have not had an empowered and articulate customer leader (or, better still, CI leader) to identify the need and drive the change.

As a small contribution to fill this gap, let me share a few reflections on what I have found helpful to consider when creating a customer-insight strategy.

At its simplest, strategy is just a series of decisions about “what you are going to do.” This mindset can help avoid too much theorizing with pretty diagrams and ensure your strategy leads to an implementation plan that can be executed. As a simple framework, it can help to consider three overlapping areas you need to address in a CI strategy:

Strategic Alignment: Although a CI strategy can inform and guide business and marketing strategies (from an understanding of consumers, your target market, their perceptions, unmet needs and channel usage), normally those exist prior to creating a CI strategy. So, a first priority is to ensure alignment.

How can customer insight help achieve the goals of the business strategy? What does the business need to understand better to deliver the marketing strategy? How can the work that aligns best with top strategic priorities be prioritized for the CI function? Is there other work the CI function is doing that can be stopped or reduced, given its low alignment with strategic priorities? All these elements should be thought through to decide what is included within the CI strategy.

Your business and marketing strategies have likely been shaped, at least in the early stages, by PEST (political, economic, socio-cultural and technological), SWOT (strengths, weaknesses, opportunities and threats) and other tools for analyzing internal and external factors. Similarly, in summarizing what the CI strategy should be (aligned to business and marketing strategies), it is useful to see what use of CI is working for other businesses (here, lessons can often be learned outside your sector) and to summarize what CI work has previously been most effective for you (on the basis of commercial return and improved customer feedback).

Both of these approaches should help identify priority work areas where CI can make a difference and help deliver the business and marketing strategies.

Operational Effectiveness: This is all about organization and processes. How does the CI function operate?

Once again, it is useful both to look internally, capturing what has really happened already, and externally (this time for best-practice models). Given the relative immaturity of CI in academic terms and the lack of common language or focus from “marketing experts,” it can be hard to find the textbook answer for the customer insight best-practice model. However, I have found a few of the benchmarking models used by technology research companies and marketing professional bodies useful and have produced my own (on the basis of 13 years’ experience in creating and leading such functions).

However you come by a best practice model with which you are comfortable, your next step should be the familiar approach for gap analysis. Summarize your current practice, compare and contrast with the best-practice model and then prioritize the gaps you find. Prioritization here needs to be informed by what you will be using your CI function to achieve (as summarized in the strategic alignment section). This review and gap analysis should consider not just the processes for getting different items of work delivered but also the organizational structure of the team. Despite some leaders claiming the structure does not matter if you have a unifying vision and the right attitude, all my experience teaches me that it does. Human beings are inherently tribal, and the quality of CI output is strongly affected by inter-disciplinary cooperation.

People Leadership: That mention of departmental structure brings us neatly onto focusing on the people in your CI team(s). Too often, strategy documents, even if they manage to translate the conceptual into the practical, fail to then consider the people side of change. To deliver the priorities identified in your strategic alignment review requires not just appropriate structures and effective processes but also the right people and culture. A good place to start can be a review of the current people in roles, comparing them with the ideal roles and skills required to deliver the work needed. Such a review should seek to consider people’s generic competencies and wider skills than they may be asked to use in current roles, as well as critically assess their attitude and fit with the team.

But beyond just the right individuals, success will depend on those people coming together to form effective teams, and that is more about culture than what is written down. I like to think of culture as “what happens ’round here when people aren’t being watched.” Various approaches have been tried to impose or encourage the culture wanted in a team, but I’ve found little works as well as empowering the people themselves to create the culture in which they want to work. An effective people leader is needed, who can communicate a clear vision and make decisions, but the leader will often be most successful when working with suitably skilled individuals to together define the team culture they want and how that can be encouraged. Truly listening to the wisdom of those doing the work, recognizing and rewarding the behavior sought and giving time to developing people and fixing environmental irritants will all encourage this.

None of this is easy. But being in a position to articulate to your team and your boss and the board a coherent customer-insight strategy (which explains how it enables business objectives, operates effectively and gets the best out of the people in the function) can be powerful.