Tag Archives: data analysis

Product Managers Needed for Analysis?

I mentioned, in a debrief from the Data Leaders Summit, the rise of the product manager role within data science teams.

This surprised me. I’ve become used to hearing about the need for more data engineers or analysts to complement data scientists. But the focus on product managers and product development life-cycles was a new one.

This was not an isolated comment from just a speaker or two; many leaders confirmed that their teams include product manager roles. What is going on?

In this post, I will share a combination of my initial thoughts and resources I have discovered. I hope to help you decide whether product managers are needed in your team.

What Is a Data Science Product Manager?

Let’s define what is meant by this new job title.

Some data science teams have matured beyond offering advice. Their output is no longer limited to decision-support analysis, providing models or insight to influence leaders; increasingly, these teams are making products. These could be deployable models (for decisions, optimization, categorization) or even entire automated processes. Developing and deploying these into live business operations requires some additional skills, and product manager roles have evolved to meet that need.

Taken from the historical role of product managers in operational or marketing teams, these roles own a life cycle, from initial innovation (e.g. insight generation sessions) through design and development into deployment.

See also: 5 Key Effects From AI and Data Science

This article, from the IoT for All blog, helps bring the role to life. It is not prescriptive (as frankly the role is still evolving) but highlights some of the key skills needed.

The references to facilitation and communication skills reminded me of the need for softer skills. Those matter across so many data science or analytics roles. But the product manager skillset also reminded me of how I used to define analytics business partners. One key difference is the judgment and knowledge needed to manage a production line and pilots.

How Do You Develop Data Science Products?

How do product managers and others develop data science products?

A number of skills are needed, and the most appropriate development methodology will vary by business. But product managers sound some common themes.

I’ve heard speakers draw on influences from analytics, systems thinking, agile working and design thinking and stress the role of product development workflow.

In this article from Harvard Business Review, Emily Glassberg Sands shares a high-level view of how to build great data products.

Is This an Opportunity for Other Product Managers?

Given the emphasis on product management skills, does this role represent an opportunity for product managers working outside any data or analytics field?

My experience with crossovers is mixed, but the data science product manager may be a different case. The mastery of product development and management skills appears to be key.

See also: The Entrepreneur as Leader and Manager  

This interesting blog post from Cohort Plus reads as if aimed at product managers in the technology space but is still a useful introduction for others in such a role.

If you are a product manager and interested in making the move into a data science team, this introduction should help (apologies, but posts from Medium will not display snippets).

Do You Need a Data Science Product Manager?

It would be great to have comments or feedback from both those who see the value of a product manager in these teams and those who think it’s a fad.

I’m sure more roles will evolve as these teams mature. Customer Insight Leader blog will keep a weather eye on ones that matter.

Do You Know Who Your Best Doctors Are?

In workers’ compensation, the medical provider network philosophy has been in place for years. Most networks were developed using the logic that all doctors are essentially the same. Rather than evaluate performance, the focus was on obtaining discounts on bills, thereby saving money.

Physician selection by adjusters and others has frequently been based on subjective criteria. Those include familiarity, repetition, proximity and sometimes just assumption or habit. Often the criterion is something as flimsy as, "We always use this doctor," or "The staff returns my calls." The question is, which doctors really are best, and why?

The first assumption that must be debunked is that discounts save money. Doctors are smart—no argument there. So to make up the lost revenue for discounted bills, they increase the number of visits or services to the injured worker or extend the duration of claims by prolonging treatment. To uncover these behaviors, examine the data.

Amazingly, even doctors do not always make the best choices about other doctors. They may recommend doctors they know socially, professionally or by informal reputation, but they may not know how the doctors actually practice. They may not know a physician upcodes bills, dispenses medications or over-prescribes Schedule II drugs. The data will reveal that information.

Doctors may be unaware they are adding to claim complexity by referring to certain specialists. Again, familiarity and habit are often the drivers. On the other hand, duplicity among providers is fraudulent behavior, and it can be uncovered by examining the data.

Analysis of data can expose clustering of poorly performing, abusive or fraudulent providers referring to one another. The analysis may also divulge patterns of some providers associated with certain plaintiff attorneys.

Treating doctors influence claims and their outcomes in other ways. Management indicators unique to workers’ compensation such as return to work, indemnity costs and disability ratings can be analyzed in the data to spotlight both good and poor medical performance. These outcome indicators are either directed by or influenced by the physician, and they can be uncovered through data analysis.

Claims adjusters and other non-medical persons simply cannot evaluate the clinical capability of medical providers, especially doctors. Performance analysis must take place at a higher level. Evaluations must be made for specific ICD-9 diagnoses and clinical procedures such as surgery. Frequency, timing and outcome can be examined in the data in context with diagnostic and procedural codes, thereby disclosing the excellence or incompetence of physicians.

Negative clinical outcomes that can be analyzed include hospital readmissions, repeated surgery or infection. Physicians associated with negative medical outcomes should be avoided.

When analyzing clinical indicators for performance, care should be taken to compare only similar conditions and procedures. Without such discrimination, the results are dubious. Specificity is critical.

When using data analysis to find the best doctors and other medical providers, fairness is also important. Provider performance should be compared only with similar specialty providers for similar diagnoses and procedures. Results will not be accurate or reliable if performance analysis is not apples-to-apples.

Medical providers may question data analysis to evaluate performance, claiming they treat the more difficult cases. The data can be analyzed to determine diagnostic severity, as well. Diagnostic codes in claims can be measured and scored, thereby disclosing medical severity.
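As a rough illustration of the apples-to-apples principle described above, the sketch below groups claims by specialty and diagnosis code before comparing provider outcomes. The field names and sample records are hypothetical, not from any particular claims system:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical claim records; field names are illustrative only.
claims = [
    {"provider": "Dr. A", "specialty": "ortho", "icd9": "724.2",
     "days_off_work": 30, "readmitted": False},
    {"provider": "Dr. B", "specialty": "ortho", "icd9": "724.2",
     "days_off_work": 95, "readmitted": True},
    {"provider": "Dr. A", "specialty": "ortho", "icd9": "724.2",
     "days_off_work": 40, "readmitted": False},
]

# Group by (specialty, diagnosis) so comparisons stay apples-to-apples:
# a provider is only measured against peers treating the same condition.
groups = defaultdict(lambda: defaultdict(list))
for c in claims:
    groups[(c["specialty"], c["icd9"])][c["provider"]].append(c)

for (specialty, icd9), providers in groups.items():
    for provider, recs in providers.items():
        avg_days = mean(r["days_off_work"] for r in recs)
        readmit_rate = sum(r["readmitted"] for r in recs) / len(recs)
        print(f"{specialty}/{icd9} {provider}: "
              f"avg days off {avg_days:.0f}, readmit rate {readmit_rate:.0%}")
```

A production analysis would add the severity adjustment the text mentions, for example by scoring diagnostic codes so that providers treating harder cases are not penalized.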

Now is the time to step up to a much more dignified and sophisticated approach to selecting medical providers. Decisions about treating physicians must be based on fact, not assumption or habit. Fortunately, the data can be analyzed to locate the best-in-class and expose the others.

Data Analytics Comes of Age for Agents

Sitting down for lunch with one of our top independent agents, I asked him about his business.

“Things are great – we’re totally paperless now!” he responded triumphantly.

“So what are you doing with all of the data you’re collecting?” I asked.

“Oh, I’m too small to do any of that stuff,” he said with a shrug.

“You’re not,” I said. “In fact, it’s a powerful way for you to generate more business. Let me show you how….”

“Data analytics” sounds like rocket science—sophisticated, expensive, intimidating and beyond the reach of the typical independent agency. It isn't. Data analytics is simply the analysis of data that allows a person to make a better decision than they could without data.

The challenge occurs when there is so much data available that it becomes difficult to determine what information is relevant and what is not. It becomes even harder when the data is not stored in a way that can be easily analyzed.

Today’s technology allows people to analyze huge amounts of data in whatever form. Sophisticated software can identify patterns and relationships between millions of pieces of information that provide better insight into a subject. This is commonly referred to as “big data” analytics.

Don't get overwhelmed by these terms or the complexity of the algorithms used to analyze data. Just remember that the objective is to use data so you and your agency can make better decisions. Here are the key steps to improve your agency's performance:

Step 1: Understand what you have

Your agency contains a treasure trove of information about your existing clients and potential customers.

Before you can even begin to run a data analytics program, spend time understanding the data you already collect. Start by creating a spreadsheet with all of the data you collect when you onboard a new client — for example, birthdate, home and work address.

Add information you collect as part of the underwriting process. For example, if you write a BOP policy for a client, capture all the additional data an insurer needs to evaluate the risk — the number of employees, store locations and industry.

When this spreadsheet is completed, you will discover the sheer volume of data you already collect about your clients.
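For readers who prefer to start this inventory in code rather than a spreadsheet, here is a minimal sketch. The field names and sources are examples of the kind of data described above, not a prescribed schema:

```python
import csv

# Hypothetical inventory of fields captured at onboarding and during
# underwriting; the entries are examples, not a required schema.
data_inventory = [
    {"field": "birthdate",     "source": "onboarding",       "example": "1975-04-12"},
    {"field": "home_address",  "source": "onboarding",       "example": "12 Main St"},
    {"field": "num_employees", "source": "BOP underwriting", "example": "14"},
    {"field": "industry_code", "source": "BOP underwriting", "example": "445110"},
]

# Write the inventory out as a CSV that can be opened as a spreadsheet.
with open("data_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["field", "source", "example"])
    writer.writeheader()
    writer.writerows(data_inventory)

print(f"Inventoried {len(data_inventory)} fields")
```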

Step 2: Understand what you want

Who are my most profitable clients? Are clients more profitable if I write both their commercial and personal lines insurance? How many policies per household do I need to maintain a high retention rate? How can I best target new clients? What type of people are my best referral sources? What marketing programs generate the best leads?

If you think you know the answers to these questions because you've asked them yourself, think again. Most agency owners base their answers on individual experience. That's no longer good enough. Insurance sales and marketing have transformed from an art into a science.

While the data you collect is extremely valuable, data analytics tools also allow you to incorporate outside data into your analysis. What information would you like to have about an existing client or a potential customer? What information would you like to know about a certain area or region?

Identify your “data gaps” — information you don't have but would like to have about a client or a prospect. This might include their net worth, whether they own another home or their business affiliations.  Consider any information you would like to have about a specific geographic area or other external information that would be helpful in allowing you to attract and retain clients.

Capturing all of this additional “outside” data is beyond the capability of any individual agency. But today there are companies that do just that. Find one that offers subscription- or transaction-based solutions, with little or no start-up costs, that are easily accessible by using their secure website. Find a platform you can use any time to plug in or access the data you want.

The data relationships that you build will allow you to create a strategic advantage. Stay away from cookie-cutter solutions that just provide “answers” to data questions. They don't allow you to differentiate the results of the data analysis.

Step 3: Put the data to work

Does your agency management system have a data analytics feature or tool? If it does, subscribe to it. If it doesn’t, demand that the vendor offer such a tool.

If your agency management system doesn't have a data analytics tool, reach out to the insurance company you write a lot of business with and ask if you can partner with them on a data analytics project. Offer to share your information if they will analyze your book of business. Make sure you play a key role in defining the data to be analyzed, and most importantly make sure you define the hypothesis or data relationship you are looking to uncover.

Take action

Today, customer acquisition and retention takes place in real time, or close to it. The more information you have about current and potential customers, the better you will be able to address their needs when and where they want it. That's why you need to embrace data analytics — it gives you the information you need, when you need it.

If you are like most agencies, you've already done the hard part by getting rid of your paper files and moving to an electronic agency management system. Now you need to start using your data. You have a great opportunity to become a sophisticated marketer and drive better performance and growth out of your agency.

What are you waiting for?

Make Your Data a Work-in-Process Tool

Heard recently: “Our organization has lots of analytics, but we really don’t know what to do with them.”

This is a common dilemma. Analytics (data analysis) are abundant. They are presented in annual reports and published in colorful graphics. But too often the effort ends there. Nice information, but what can be done with it? 

The answer is: a lot. It can change operations and outcomes, but only if it is handled right. A key is producing an analytics delivery system that is self-documenting.

Data evolution

Obviously, the basic ingredient for analytics is data. Fortunately, the last 30 years have been primarily devoted to data gathering.

Over that time, all industries have evolved through several phases in data collection and management. Mainframe and minicomputers produced data, and, with the inception of the PC in the '80s, data gathering became the business of everyone. Systems were clumsy in the early PC years, and there were significant restrictions to screen real estate and data volume. Recall the Y2K debacle caused by limiting year data to two characters.

Happily for the data-gathering effort, progress in technology has been rapid. Local and then wide area networks became available. Then came the Internet, along with ever more powerful hardware. Amazingly, wireless smartphones today are far more powerful computers than were the PCs of the '80s and '90s. Data gathering has been successful.

Now we have truckloads of data, often referred to as big data. People are trying to figure out how to handle it. In fact, a whole new industry is developing around managing the huge volumes of data. Once big data is corralled, analytic possibilities are endless.

The workers’ compensation industry has collected enormous volumes of data — yet little has been done with analytics to reduce costs and improve outcomes.

Embed analytic intelligence

The best way to apply analytics in workers' compensation is to translate and deliver the intelligence to the operational front lines, to those who make critical decisions daily. Knowledge derived from analytics cannot change processes or outcomes unless it is embedded in the work of adjusters, medical case managers and others who make claims decisions.

Consulting graphics for guidance is cumbersome: interpretation is uneven or unreliable, and the effects cannot be verified. Therefore, the intelligence must be made easily accessible and specific to individual workers.

Front line decision-makers need online tools designed to easily access interpreted analytics that can direct decisions and actions. Such tools must be designed to target only the issues pertinent to individuals. Information should be specific.

When predictive modeling is employed as the analytic methodology, only certain claims are identified as risky. A better approach is to monitor all claims data electronically and continuously. If all claims are monitored for events and conditions predetermined by analytics, no high-risk claims can slip through the cracks, and personnel can be alerted to every claim with risky conditions.
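A continuous, rule-based screen of this kind can be sketched in a few lines. The rules, thresholds and claim fields below are invented for illustration; real conditions would be predetermined by the analytics themselves:

```python
# Minimal sketch of continuous, rule-based claim monitoring. The rules and
# thresholds here are illustrative assumptions, not actuarial standards.
RULES = [
    ("opioid_refills >= 3", lambda c: c["opioid_refills"] >= 3),
    ("days_open > 180",     lambda c: c["days_open"] > 180),
    ("attorney_involved",   lambda c: c["attorney_involved"]),
]

def screen_claims(claims):
    """Return an alert for every claim matching any predetermined condition."""
    alerts = []
    for claim in claims:
        hits = [name for name, rule in RULES if rule(claim)]
        if hits:
            alerts.append({"claim_id": claim["claim_id"], "conditions": hits})
    return alerts

claims = [
    {"claim_id": "C-100", "opioid_refills": 4, "days_open": 30, "attorney_involved": False},
    {"claim_id": "C-101", "opioid_refills": 0, "days_open": 45, "attorney_involved": False},
]
print(screen_claims(claims))  # C-100 is flagged; C-101 is not
```

Because every claim passes through the same screen on every run, nothing depends on an adjuster remembering to re-check a file.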

Self-documenting

The system that is developed to deliver analytics to operations should automatically self-document; that is, keep its own audit trail to continually document to whom the intelligence was sent, when and why. The system can then be expanded to document what action is taken based on the information delivered.

Without self-documentation, the analytic delivery system has no authenticity. Those who receive the information cannot be held accountable for whether or how they acted on it. When the system automatically self-documents, those who have received the information can be held accountable or commended.
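The audit trail described above can be sketched as a small delivery class that records to whom intelligence was sent, when, why, and what action followed. The class and field names are hypothetical:

```python
from datetime import datetime, timezone

class AnalyticsDelivery:
    """Sketch of a self-documenting delivery channel: every alert records
    to whom it went, when, and why, and later what action was taken."""

    def __init__(self):
        self.audit_trail = []

    def deliver(self, recipient, claim_id, reason):
        entry = {
            "recipient": recipient,
            "claim_id": claim_id,
            "reason": reason,
            "sent_at": datetime.now(timezone.utc).isoformat(),
            "action_taken": None,  # filled in later, for accountability
        }
        self.audit_trail.append(entry)
        return entry

    def record_action(self, claim_id, action):
        # Attach the responsive action to the open entry for this claim.
        for entry in self.audit_trail:
            if entry["claim_id"] == claim_id and entry["action_taken"] is None:
                entry["action_taken"] = action

delivery = AnalyticsDelivery()
delivery.deliver("adjuster_jones", "C-100", "opioid_refills >= 3")
delivery.record_action("C-100", "referred to medical case manager")
```

Entries with `action_taken` still empty are exactly the cases where recipients can be held accountable, and completed entries are the new layer of information the next section calls Additionality.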

Self-documenting systems also create what could be called Additionality. Additionality is the extent to which a new input adds to the existing inputs without replacing them and results in something greater. When the analytic delivery system automatically self-documents guidance and actions, a new layer of information is created. Analytic intelligence is linked to claims data and layered with directed action documentation.

A system that is self-documenting can also self-verify, meaning results of delivering analytics to operations can be measured. Claim conditions and costs can be measured with and without the impact of the analytic delivery system. Further analyses can be executed to measure what analytic intelligence is most effective and in what form and, importantly, what actions generate best results.

The analytic delivery system monitors all claims data, identifies claims that match analytic intelligence and embeds the interpreted information in operations. The data has become a work-in-process knowledge tool while analytics are linked directly to outcomes.

Tackling Underwriting Profitability Head On

For many years, insurance companies built their reserves by focusing on investment strategies. The recent financial crisis changed that: as yields became more unpredictable than ever, insurers were incentivized to shift their focus. As carriers look to the future, they know that running a profitable underwriting operation is critical to their long-term stability.

Profitable underwriting is easier said than done. Insurers already have highly competent teams of underwriters, so the big question becomes, “How do I make my underwriting operation as efficient and profitable as possible without creating massive disruptions with my current processes?”

Three core challenges stand in the way:

  • Lack of Visibility: First, the approach most companies take to data makes it hard to see what's really going on in the market and within your own portfolio. Although you may be familiar with a specific segment of the market, do you really know how well your portfolio is performing against the industry, or how volume and profit tradeoffs are impacting your overall performance? Without the combination of the right data, risk models, and tools, you can’t monitor your portfolio or the market at large, and can't see pockets of pricing inadequacy and redundancy.
  • Current Pricing Approach: The agents underwriters engage with every day want the right price for the right risk, and that's not easy to deliver. In fact, it's nearly impossible. Underwriters are often asked to make decisions based on limited industry data and a limited set of risk characteristics that may or may not be properly weighted. As underwriters review submission after submission, they must decide, "How much weight do I assign to each of these risk characteristics (severity, frequency, historical loss ratio, governing class, premium size, etc.)?" Imagine how hard it is to do that mental math on each policy and fully understand how the importance of the class code relates to the importance of the historical loss ratio or any of the other most important variables.
  • Inertia: When executives talk about how to solve these challenges around visibility and pricing, most admit they're concerned about how to overcome corporate inertia and institutional bias. The last thing you want to do is lead a large change initiative and end up alienating your agents, your analysts, and your underwriters. What if you could discover pockets of pricing inadequacy and redundancy currently unknown to you? What if you could free your underwriters to do what they do best? And what if you could start in the way that makes the most sense for your organization?
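The "mental math" in the pricing challenge above amounts to combining weighted risk characteristics into a single score. Here is a toy sketch of that idea; the weights and inputs are invented for illustration, where a real model would fit them to loss data:

```python
# Illustrative weights over normalized (0-1) risk characteristics.
# These values are assumptions for the example, not fitted parameters.
WEIGHTS = {
    "severity": 0.30,
    "frequency": 0.25,
    "historical_loss_ratio": 0.25,
    "governing_class_risk": 0.15,
    "premium_size_factor": 0.05,
}

def risk_score(submission):
    """Weighted sum of a submission's risk characteristics."""
    return sum(WEIGHTS[k] * submission[k] for k in WEIGHTS)

# A hypothetical submission with each characteristic scaled to 0-1.
submission = {
    "severity": 0.8, "frequency": 0.4, "historical_loss_ratio": 0.6,
    "governing_class_risk": 0.5, "premium_size_factor": 0.2,
}
print(round(risk_score(submission), 3))
```

Doing this consistently across hundreds of submissions is exactly what predictive models automate, freeing underwriters for the judgment calls a formula cannot make.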

There's a strong and growing desire to take advantage of new sources of information and modern tools to help underwriters make risk selection and pricing decisions. The implementation of predictive analytics, in particular, is becoming a necessity for carriers to succeed in today's marketplace. According to a recent study by analyst firm Strategy Meets Action, over one-third of insurers are currently investing in predictive analytics and models to mitigate the problems in the market and equip their underwriters with the necessary predictive tools to ensure accuracy and consistency in pricing and risk selection. Dowling & Partners recently published an in-depth study on predictive analytics and said, "Use of predictive modeling is still in many cases a competitive advantage for insurers that use it, but it is beginning to be a disadvantage for those that don't." Predictive analytics uses statistical and analytical techniques to develop models that enable accurate predictions about future event outcomes. With predictive analytics, underwriters gain visibility into their portfolio and a deeper understanding of its risk quality. Plus, underwriters get valuable context so they understand what is driving an individual predictive score.

Another crucial capability of predictive modeling is the mining of an abundance of data to identify trends, patterns and relationships. By allowing this technology to synthesize massive amounts of data into actionable information, underwriters can focus on what they do best: examining the management or safety program of an insured, or anything else they think is valuable. This is the artisan piece of underwriting. This is the critical human element that computers will never replace. As soon as executives see how seamlessly predictive analytics can be integrated into the underwriting process, the issue of overcoming corporate inertia is often solved.

Just as insurance leaders are exploring new methods to ensure profitability, underwriters are eager to adopt the analytical advancements that will solve the tough problems carriers are facing today. Expecting underwriters to take on today's challenges using yesterday's tools and yesterday's approach to pricing is no longer sustainable. Predictive analytics offers a better and faster method for underwriters to control their portfolio's performance, effectively managing risk and producing better results for an entire organization.