They may seem like curses of modern corporations, but org charts and regular reorganizations are now a fact of business life. I’m sure, as an insight leader, you will have seen your fair share. As you’ve risen up the hierarchy, you’ve probably changed your role, from recipient to author.
From my experience, two major opportunities exist for organizing customer insight functions.
The first is to bring together the different technical areas that can best collaborate to provide deeper insights that lead to more action. These teams are often located in different functional silos. In line with my definition of customer insight, I would recommend bringing together the customer data, analysis and modelling, research and database marketing teams. Suitably integrated and with a culture focused on outcomes, these teams can work together as an “insight engine” that produces not just technical output but actions that result in both commercial impact and improved customer experiences.
The second opportunity tends to come later in the maturity of a customer insight function: the centralization challenge. While I would not encourage accelerating this (my experience is that insight teams drive more value when close to the business area they serve, with shared targets and emotional engagement), there do come times when it is appropriate. This will often be driven by wider corporate changes in line with simplification and cost reduction. But centralization can also be an opportunity. Integrating into one center of excellence for customer insight that drives consistent processes and coordinates customer interactions across lines of business can also drive value.
Here are some of the benefits and risks I’ve seen in these centralized models:
Benefits of a center of excellence:
Economies of scale in specialist technical work;
Career paths for more technical practitioners;
More independent overview from business partners;
Optimization and coordination of customer interactions.
Risks of a center of excellence:
Loss of knowledge about specific business areas (becoming an “ivory tower”);
Loss of a sense of belonging to a business area (engagement);
Inflexibility about different local needs (one best way);
Apparent bureaucracy — some things take longer (common process).
Interestingly, a poll we ran on customer insight found that all the leaders answering were running or part of a center of excellence. It would be interesting to hear from any customer insight leaders who are still successfully running a more federated or localized insight model.
Given the high and growing demand for data scientists, there are simply not enough of them. Accordingly, it is important to consider how an insurer might develop a core talent pool of data scientists. As is often the case when talent is in short supply, acquiring (i.e., buying) data scientist talent is an expensive but fairly quick option. It may make sense to consider hiring one or two key individuals who could provide the center of gravity for building out a data science group. A number of universities have started offering specialist undergraduate and graduate curricula focused on data science, which should help address growing demand in the relatively near term. Another interim alternative is to “rent” data scientists through a variety of means: crowdsourcing (e.g., Kaggle), hiring freelancers, using new technology vendors and their specialists or consulting groups to solve problems, and engaging consulting firms that are building these groups in-house.
The longer term and more enduring solution to the shortage of data scientists is to “build” them from within the organization, starting with individuals who possess at least some of the necessary competencies and who can be trained in the other areas. For example, a business architect who has a computational background and acts as a liaison between business and technology groups can learn at least some of the analytical and visualization techniques that typify data scientists. Similarly, a business intelligence specialist who has sufficient understanding of the company’s business and data environment can learn the analytical techniques that characterize data scientists. However, considering the extensive mathematical and computational skills necessary for analytics work, it arguably would be easier to train an analytics specialist in a particular business domain than to teach statistics and programming to someone who does not have the necessary foundation in these areas.
Another alternative for creating a data science office is to build a team of individuals who have complementary skills and collectively possess the core competencies. These “insight teams” would address high-value business issues within tight time schedules. They initially would form something like a skunk works and rapidly experiment with new techniques and new applications to create practical insights for the organization. Once the team is fully functional and proving its worth to the rest of the organization, then the organization can attempt to replicate it in different parts of the business.
However, the truth is there is no silver bullet for addressing the current shortage of data scientists. For most insurers, the most effective near-term solution realistically lies in optimizing existing skills and in team-based approaches to start tackling business challenges.
Designing a data science operating model: Customizing the structure to the organization’s needs
To develop a data science function that operates in close tandem with the business, it is important that its purpose be to help the company achieve specific market goals and objectives. When designing the function, ask yourself these four key strategic questions:
Value proposition: How does the company define its competitive edge? Local customer insight? Innovative product offerings? Distribution mastery? Speed?
Firm structure: How diverse are local country/divisional offerings and go-to-market structures, and what shared services are appropriate? Should they be provided centrally or regionally?
Capabilities, processes and skills: What capabilities, processes and skills does each region require? What are the company’s inherent strengths in these areas? Where does the company want to be best-in-class, and where does it want to be best-in-cost?
Technology platform: What are the company’s technology assets and constraints?
There are three key considerations when designing an enterprisewide data science structure: (a) degree of control necessary for effectively supporting business strategy; (b) prioritization of costs to align them with strategic imperatives; and (c) degree of information maturity of the various markets or divisions in scope.
Determining trade-offs: Cost, decision control and maturity
Every significant process and decision should be evaluated along four parameters: (a) need for central governance, (b) need for standardization, (c) need for creating a center of excellence and (d) need for adopting local practices. The figure below illustrates how to optimize these parameters in the context of cost management, decision control and information maturity.
This approach encourages a flexible and responsive hub-and-spoke model: key decision science functions that need greater governance and control are centralized in the hubs, and unique local market strengths are harnessed in centers of excellence. Functions or outputs that require local market data inputs are localized in regional or country-specific spokes, while still adhering to central models and structures.
Designing a model in a systematic way that considers these enterprisewide business goals has several tangible benefits. First, it will help to achieve an enterprisewide strategy in a cost-effective, timely and meaningful way. Second, it will maximize the impact of scarce resources and skill sets. Third, it will encourage a well-governed information environment that is consistent and responsive throughout the enterprise. Fourth, it will promote agile decision making at the local market level, while providing the strength of heavy-duty analytics from the center. Lastly, it will mitigate the expensive risks of duplication, inconsistency and inefficiency that can result from disaggregation, delayed decision making and the unavailability of appropriate skill sets and insights.
Traditionally, in workers’ comp, nurse case management (NCM) services have been widely espoused yet misunderstood and underutilized. The reasons for underutilization are many. One is tension between NCM and claims adjusters; another is that overburdened adjusters often overlook the opportunity to refer claims to NCM.
Also to blame is the NCM process itself. In spite of professional certification for NCM, the process is poorly defined for those outside the nursing profession. More importantly, NCM has difficulty measuring and reporting proof of value.
Continuing to do business as usual is not acceptable. NCM needs to address several issues to qualify as a legitimate contributor. First, NCM needs to articulate its value. To do that, NCM must computerize and standardize its process and measure and report outcomes, just like any other business in today’s world.
Too often, computerization for NCM is relegated to adding nurses’ notes to the claim system. However, such notes cannot be analyzed to measure outcomes based on specific nursing initiatives.
In most situations, an individual NCM interprets an issue, decides on an action and delivers the response. The organization’s medical management is thereby a subjective interpretation rather than a definable, quantifiable product.
Granted, the NCM is a trained professional. But when the product is unstructured, variables in delivery cannot be measured or appreciated. A process that is different every time can never be adequately defined.
It’s crucial to establish organizational standards about which conditions in claims require referral to NCM—without exception. This will remove the myriad decisions made or not made by claims adjusters about whether to involve the NCM. The referral can be automated through electronic claims monitoring and notification. NCM takes action on the issue according to organizational protocol, and the claims adjuster is notified.
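As a minimal sketch of what such automated, exception-free referral monitoring could look like, consider the following. The claim fields, thresholds and indicator lists here are hypothetical illustrations, not drawn from any particular claims system or clinical standard:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """Hypothetical claim record; real claims systems will differ."""
    claim_id: str
    lost_time_days: int
    comorbidities: list = field(default_factory=list)

# Illustrative organizational standards, applied without exception.
HIGH_RISK_COMORBIDITIES = {"diabetes", "obesity", "hypertension"}

def needs_ncm_referral(claim: Claim) -> bool:
    """Return True when the claim meets any standardized referral trigger."""
    if claim.lost_time_days > 0:  # e.g., refer all lost-time claims
        return True
    if HIGH_RISK_COMORBIDITIES & set(claim.comorbidities):
        return True
    return False

def monitor(claims):
    """Electronic monitoring pass: flag claims for NCM referral."""
    return [c.claim_id for c in claims if needs_ncm_referral(c)]
```

Because the triggers are codified rather than left to individual judgment, every claim meeting a standard condition is flagged the same way every time, which is what later makes outcomes measurable.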
When the conditions in claims that lead to intervention by NCM are computerized and standardized, the effects can be measured. Apples can legitimately be compared with apples, not to oranges and tennis balls. Similar conditions in claims are noted and approached the same way every time, so the results can be validly measured.
Results in claims such as indemnity costs, time from date of injury (DOI) to claim closure or overall claim cost can be compared before and after NCM standardization. Comparisons can be made across different date ranges for similar injuries going forward to measure continued effectiveness and hone the process.
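For illustration only, the before-and-after comparison might be computed as below. The cost figures are invented, and a real analysis would control for injury type, severity and date range rather than compare raw means:

```python
from statistics import mean

# Hypothetical indemnity costs for comparable injuries, before and after
# standardized NCM referral was put in place (invented numbers).
pre_standardization = [18200, 22500, 19800, 25100, 21000]
post_standardization = [15400, 17900, 16200, 19500, 16800]

def pct_change(before, after):
    """Percent change in mean outcome between two claim cohorts."""
    b, a = mean(before), mean(after)
    return (a - b) / b * 100

change = pct_change(pre_standardization, post_standardization)
print(f"Mean indemnity cost changed by {change:.1f}%")
```

The same calculation can be repeated across successive date ranges to track whether the effect persists as the process is honed.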
Measuring outcomes is the most essential aspect of the process. Value is disregarded unless it is defined, measured and reported.
For non-NCMs, the dots in medical management must be connected to see the picture. Describe what was done, why it was done and how it was done the same way for similar situations and in context with the organization's standards. Then report the outcome value. Establish a continuing value communication process.
NCM constituencies should be informed in advance of the process and outcome measurements. Define in advance how problems and issues are identified and handled and how results will be measured. Then proceed consistently.
Recognized NCM value
Even as things now stand, NCM’s value is being recognized. American Airlines recently reported it is adding NCM to its staff and will refer all lost time claims. The company cited a pilot project in which nurse interventions were documented and measured, proving their value in getting injured workers back to work.
Christopher Flatt, workers’ compensation Center of Excellence leader for Marsh Inc., wrote in WorkCompWire (http://www.workcompwire.com/), “One option that employers should consider as part of an integrated approach to controlling workers’ compensation costs is formalized nurse case management. Taking actions to drive down medical expenses is an essential component to controlling workers’ compensation costs.”1
Industry research and corporate or professional wisdom regarding risky situations can supply the standardized indicators for referral to NCM. American Airlines uses the standard that all lost time claims should be referred to NCM. But there are many, sometimes more subtle, indicators of risk and cost in claims that can be identified early through computerized monitoring and referred for NCM intervention.
Another example of developing standard indicators for referral is based on industry research that shows certain comorbidities, such as diabetes, can increase claim duration and cost. These claims should also be referred to NCM. Yet another example is steering away from inappropriate medical providers who can profoundly increase costs.
As a long-ago nurse and a longer-time medical systems designer and developer, I believe the solution lies in appropriate computerized system design. The elements need to be simple to implement, easy to use and consistently applied. Only then can NCM offer proof of value.