
The Science (and Art) of Data, Part 2

Given the growing demand for data scientists, there are simply not enough of them. Accordingly, it is important to consider how an insurer might develop a core talent pool of data scientists. As is often the case when talent is in short supply, acquiring (i.e., buying) data scientist talent is an expensive but fairly quick option. It may make sense to hire one or two key individuals who can provide the center of gravity for building out a data science group. A number of universities have started offering specialist undergraduate and graduate curricula focused on data science, which should help address the growing demand relatively soon. Another interim alternative is to “rent” data scientists through a variety of means – crowdsourcing (e.g., Kaggle), hiring freelancers, using new technology vendors and their specialists or consulting groups to solve problems, and engaging consulting firms that are building these groups in-house.

The longer term and more enduring solution to the shortage of data scientists is to “build” them from within the organization, starting with individuals who possess at least some of the necessary competencies and who can be trained in the other areas. For example, a business architect who has a computational background and acts as a liaison between business and technology groups can learn at least some of the analytical and visualization techniques that typify data scientists. Similarly, a business intelligence specialist who has sufficient understanding of the company’s business and data environment can learn the analytical techniques that characterize data scientists. However, considering the extensive mathematical and computational skills necessary for analytics work, it arguably would be easier to train an analytics specialist in a particular business domain than to teach statistics and programming to someone who does not have the necessary foundation in these areas.

Another alternative for creating a data science office is to build a team of individuals who have complementary skills and collectively possess the core competencies. These “insight teams” would address high-value business issues within tight time frames. Initially, they would operate something like a skunk works, rapidly experimenting with new techniques and new applications to create practical insights for the organization. Once a team is fully functional and has proven its worth to the rest of the organization, the organization can attempt to replicate it in different parts of the business.

However, the truth is there is no silver bullet for the current shortage of data scientists. For most insurers, the most effective near-term solution lies in making the most of existing skills and in team-based approaches to start tackling business challenges.

Designing a data science operating model: Customizing the structure to the organization’s needs

To develop a data science function that operates in close tandem with the business, its purpose must be to help the company achieve specific market goals and objectives. When designing the function, ask yourself these four key strategic questions:

  • Value proposition: How does the company define its competitive edge?  Local customer insight? Innovative product offerings? Distribution mastery? Speed?
  • Firm structure: How diverse are local country/divisional offerings and go-to-market structures, and what shared services are appropriate? Should they be provided centrally or regionally?
  • Capabilities, processes and skills: What capabilities, processes and skills does each region require? What are the company’s inherent strengths in these areas? Where does the company want to be best-in-class, and where does it want to be best-in-cost?
  • Technology platform: What are the company’s technology assets and constraints?

There are three key considerations when designing an enterprisewide data science structure: (a) degree of control necessary for effectively supporting business strategy; (b) prioritization of costs to align them with strategic imperatives; and (c) degree of information maturity of the various markets or divisions in scope.

Determining trade-offs: Cost, decision control and maturity

Every significant process and decision should be evaluated along four parameters: (a) need for central governance, (b) need for standardization, (c) need for creating a center of excellence and (d) need for adopting local practices. The figure below illustrates how to optimize these parameters in the context of cost management, decision control and information maturity.

This approach encourages a flexible and responsive hub-and-spoke model: key decision science functions that need greater governance and control are centralized in the hubs, and unique local market strengths are harnessed in centers of excellence. Functions or outputs that require local market data inputs are localized in regional or country-specific spokes, while still adhering to central models and structures.

Designing a model in a systematic way that considers these enterprisewide business goals has several tangible benefits. First, it will help to achieve an enterprisewide strategy in a cost-effective, timely and meaningful way. Second, it will maximize the impact of scarce resources and skill sets. Third, it will encourage a well-governed information environment that is consistent and responsive throughout the enterprise. Fourth, it will promote agile decision-making at the local market level, while providing the strength of heavy-duty analytics from the center. Lastly, it will mitigate the expensive risks of duplication, redundancy, inconsistency and inefficiency that can result from disaggregation, delayed decision-making and the lack of appropriate skill sets and insights.

The Science (and Art) of Data, Part 1

Most insurers are inundated with data and have difficulty figuring out what to do with all of it. The key is not just having more data, more number-crunching analysts and more theoretical models, but instead identifying the right data. The best way to do this is via business-savvy analysts who can ask the right strategic questions and develop smart models that combine insights from raw data, behavioral science and unstructured data (from the web, emails, call center recordings, video footage, social media sites, economic reports and so on). In essence, business intelligence needs to transcend data, structure and process and be not just a precise science but also a well-integrated art.

The practitioners of this art are an emerging (and rare) breed: data scientists. A data scientist has extensive and well-integrated insights into human behavior, finance, economics, technology and, of course, sophisticated analytics. As if finding this combination of skills weren’t difficult enough, a data scientist also needs strong communication skills. First and foremost, data scientists must ask the right questions of people and of the data to extract the insights that indicate where to dig, and then present the resulting insights in a manner that makes sense to a variety of key business audiences. Accordingly, an organization that can find a good data scientist can gain insights that positively shape its strategy and tactics – and gain them more quickly than less-well-prepared competitors.

What it takes to be an effective data scientist

The following outlines the five key competencies of a qualified data scientist, along with the key skills each involves and the business impact each delivers.

1. Business or Domain Expertise

Key skills – deep understanding of:

  • Industry domain, including macro-economic effects and cycles, and key drivers;
  • All aspects of the business (marketing, sales, distribution, operations, pricing, products, finance, risk, etc.).

Business impact:

  • Help determine which questions need answering to make the most appropriate decisions;
  • Effectively articulate insights to help business leadership answer relevant questions in a timely manner.

2. Statistics

Key skills:

  • Expertise in statistical techniques (e.g., regression analysis, cluster analysis and optimization) and the tools and languages used to run the analysis (e.g., SAS or R; a brief sketch follows below);
  • Identification and application of relevant statistical techniques for addressing different problems;
  • Mathematical and strategic interpretation of results.

Business impact:

  • Generate insights in such a way that the business can clearly understand the quantifiable value;
  • Enable the business to make clear trade-offs among choices, with a reasonable view into the most likely outcomes of each.
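
To make the statistics competency a little more concrete, the following is a minimal sketch of one technique named above, cluster analysis, written in Python with scikit-learn rather than SAS or R. The policyholder features, figures and segment count are invented for illustration; nothing here is drawn from a real insurer's data.

```python
# Minimal cluster-analysis sketch on hypothetical policyholder data.
# Assumes NumPy and scikit-learn are installed; all figures are invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [age, annual_premium, claims_in_last_5_years]
policyholders = np.array([
    [25,  600, 0],
    [27,  650, 1],
    [45, 1200, 0],
    [47, 1300, 2],
    [62, 2100, 1],
    [65, 2300, 3],
])

# Standardize so no single feature dominates the distance calculation.
scaled = StandardScaler().fit_transform(policyholders)

# Group the policyholders into three segments.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)

for row, segment in zip(policyholders, model.labels_):
    print(f"age={row[0]}, premium={row[1]}, claims={row[2]} -> segment {segment}")
```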

3. Programming

Key skills:

  • Background in computer science and comfort programming in a variety of languages, including Java, Python, C++ or C#;
  • Ability to determine the appropriate software packages or modules to run, and how easily they can be modified.

Business impact:

  • Build a forward-looking perspective on trends, using constantly evolving computational techniques to solve increasingly complex business problems (e.g., machine learning, natural language processing, graph/social network analysis, neural nets and simulation modelling; a brief sketch follows below);
  • Discern what can be built, bought or obtained free from open source, and determine the business implications of each.
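
As an equally modest illustration of the programming competency, the toy sketch below performs a very simple piece of graph/social network analysis (ranking nodes by degree) on a hypothetical customer referral network, using only the Python standard library. The network and the business framing are assumptions made for the example.

```python
# Toy graph/social-network-analysis sketch: rank customers in a hypothetical
# referral network by their number of direct connections (degree centrality).
from collections import defaultdict

# Hypothetical referral edges: (referrer, referred)
referrals = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "dave"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "alice"),
]

# Build an undirected adjacency list.
neighbors = defaultdict(set)
for a, b in referrals:
    neighbors[a].add(b)
    neighbors[b].add(a)

# Customers with the most direct connections first.
for customer, links in sorted(neighbors.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{customer}: {len(links)} connections -> {sorted(links)}")
```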

4. Database Technology Expertise

Key skills – thorough understanding of:

  • External and internal data sources;
  • Data gathering, storage and retrieval methods (extract-transform-load, or ETL; a brief sketch follows below);
  • Accessing data from external sources (through screen scraping and data transfer protocols);
  • Manipulating large big data stores (e.g., Hadoop, Hive, Mahout and a wide range of emerging big data technologies).

Business impact:

  • Combine disparate data sources to generate unique market, industry and customer insights;
  • Understand emerging latent customer needs and provide inputs for high-impact offerings and services;
  • Develop insightful, meaningful connections with customers based on a deep understanding of their needs and wants.
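
The sketch below shows the extract-transform-load (ETL) pattern mentioned above in its most minimal form, using only the Python standard library. The claim records, field names and the in-memory SQLite target are hypothetical; a production pipeline would typically load into stores such as Hadoop or Hive rather than SQLite.

```python
# Minimal ETL sketch: extract raw records, clean them, load them into a store.
# Everything here (records, fields, table) is hypothetical and for illustration.
import csv
import io
import sqlite3

# Extract: read raw records (an in-memory CSV stands in for a file or a feed
# obtained via screen scraping or a data transfer protocol).
raw = io.StringIO(
    "policy_id,claim_amount,claim_date\n"
    "P001,1200.50,2015-03-01\n"
    "P002,,2015-03-02\n"          # incomplete record
    "P003,830.00,2015-03-05\n"
)
rows = list(csv.DictReader(raw))

# Transform: drop incomplete records and convert types.
clean = [(r["policy_id"], float(r["claim_amount"]), r["claim_date"])
         for r in rows if r["claim_amount"]]

# Load: write the cleaned records into a queryable store (SQLite for brevity).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (policy_id TEXT, claim_amount REAL, claim_date TEXT)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)", clean)
total = conn.execute("SELECT SUM(claim_amount) FROM claims").fetchone()[0]
print(f"Loaded {len(clean)} claims, total amount {total:.2f}")
```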

5. Visualization and Communications Expertise

Key skills – comfort with visual art and design to:

  • Turn statistical and computational analysis into user-friendly graphs, charts and animation (a brief sketch follows below);
  • Create insightful data visualizations (e.g., motion charts, word maps) that highlight trends that might otherwise go unnoticed;
  • Use visual media to deliver key messages (e.g., reports and screens – from mobile screens to laptop/desktop screens to HD visualization walls, interactive programs and, perhaps soon, augmented reality glasses).

Business impact:

  • Enable those who aren’t professional data analysts to interpret data effectively;
  • Engage with senior management by speaking their language and translating data-driven insights into decisions and actions;
  • Develop powerful, convincing messages for key stakeholders that positively influence their course of action.
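
Finally, a minimal visualization sketch: turning a small analytical result into a labeled chart that a business audience can absorb at a glance. It assumes matplotlib is available; the regions and claim-frequency figures are invented for the example.

```python
# Minimal visualization sketch with matplotlib; all figures are hypothetical.
import matplotlib.pyplot as plt

regions = ["North", "South", "East", "West"]
claim_frequency = [0.08, 0.05, 0.11, 0.07]  # hypothetical claims per policy per year

positions = range(len(regions))
fig, ax = plt.subplots(figsize=(6, 3.5))
ax.bar(positions, claim_frequency, color="steelblue")
ax.set_xticks(positions)
ax.set_xticklabels(regions)
ax.set_ylabel("Claims per policy per year")
ax.set_title("Hypothetical claim frequency by region")

# Label each bar so the key message survives without reading the axis.
for x, value in zip(positions, claim_frequency):
    ax.text(x, value, f"{value:.0%}", ha="center", va="bottom")

fig.tight_layout()
fig.savefig("claim_frequency_by_region.png")  # or plt.show() for interactive use
```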

While it may seem unrealistic to find a single individual with all the skills we've listed, there are some data scientists who do, in fact, fit the profile. They may not be equally skilled in all areas, but they often have the ability to round out their skills over time. They tend to work in high-tech sectors, where they have had the opportunity to develop these abilities as a matter of necessity.

However, because of the increasing demand for data scientists and their scarcity, insurers (and companies in other industries) should consider whether they want to build, rent or buy this capability. Although buying or renting can be a viable option – and does offer the promise of immediate benefits – we believe that building a data science function is the best long-term approach. Moreover, as we will address in our next post, in light of the shortage of data scientists, a viable approach is creating a data science office of individuals who collectively possess the core competencies of the ideal data scientist.