Tag Archives: pricing

When Big Data Can Define Pricing (Part 2)

This is the second part of a two-part series. The first part can be found here. 

Abstract

In this second part of the article, we extend the discourse to a notional micro-economy and examine the impact of diversification and of insurance big data components on the potential for developing sustainable and economical strategies for insurance policy underwriting. We review concepts of parallel and distributed computing for big data: clustering, mapping and resource-reduction algorithms.

 

1.0 Theoretical Expansion to a Single Firm Micro-Economy Case

We expand the discourse from part one to a simple theoretical micro-economy, and examine if the same principles derived for the aggregate umbrella insurance product still hold on the larger scale of an insurance firm. In a notional economy with N insurance risks r_1, …, r_N and N policy holders respectively, we have only one insurance firm, which at time T does not have an information data set θ_T about dependencies among per-risk losses. Each premium is estimated by the traditional standard deviation principle in (1.1). For the same time period T the insurance firm collects a total premium π_T[total] equal to the linear sum of all N policy premiums π_T[r_n] in the notional economy.

There is full additivity in portfolio premiums, and because data on inter-risk dependencies is unavailable for modeling, the insurance firm cannot take advantage of competitive premium cost savings arising from market-share scale and from the geographical distribution and diversification of the risks in its book of business. For coherence we assume that all insurance risks and policies belong to the same line of business and cover the same insured natural peril – flood – so that the only insurance risk diversification possible is due to insurance risk independence derived from geo-spatial distances. A full premium additivity equation, similar to the aggregate umbrella product premium (3.0) but extended to the total premium of the insurance firm in our micro-economy, is composed in (9.0).
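The referenced equations (1.1) and (9.0) are not reproduced in this excerpt; the following is a hedged LaTeX sketch of their usual form under the standard deviation premium principle, where the loading factor λ and the expected-loss term E[L_n] are our assumptions rather than notation taken from the source.

```latex
% Hedged sketch; \lambda (loading factor) and E[L_n] (expected loss) are assumptions.
\begin{align}
  \pi_T[r_n] &= \mathbb{E}[L_n] + \lambda\,\sigma_T[r_n], \qquad n = 1,\dots,N
    && \text{standard deviation principle, cf. (1.1)} \\
  \pi_T[\mathrm{total}] &= \sum_{n=1}^{N} \pi_T[r_n]
    && \text{full premium additivity, cf. (9.0)}
\end{align}
```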

In the next time period T+1 the insurance firm acquires a data set θ_{T+1} which allows it to model geo-spatial dependencies among risks and to identify fully dependent, partially dependent and fully independent risks. The dependence structure is expressed and summarized in an [N×N] correlation matrix ρ_{i,N}. Traditionally, full independence between any two risks is modeled with a zero correlation factor, and partial dependence is modeled by a correlation factor less than one. With this new information we can extend the insurance product expression (7.0) to the total accumulated premium π_{T+1}[total] of the insurance firm at time T+1.
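A hedged sketch of one common form of this correlated accumulation (referenced later as equation (10.0)); the exact expression in the source is not reproduced here, and the loading-factor form carries over from the sketch above as an assumption.

```latex
% Hedged sketch of the accumulated premium under the modeled correlation structure.
\begin{align}
  \sigma_{T+1}[\mathrm{total}]
    &= \sqrt{\sum_{i=1}^{N}\sum_{j=1}^{N} \rho_{i,j}\,\sigma_{T+1}[r_i]\,\sigma_{T+1}[r_j]} \\
  \pi_{T+1}[\mathrm{total}]
    &= \sum_{n=1}^{N} \mathbb{E}[L_n] + \lambda\,\sigma_{T+1}[\mathrm{total}]
    && \text{cf. (10.0)}
\end{align}
```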

The impacts of full independence and partial dependence, which are inevitably present in a full insurance book of business, guarantee that the sub-additivity principle for premium accumulation comes into effect. In our case study sub-additivity has two related expressions. Between the two time periods, the acquisition of the dependence data set θ_{T+1}, which is used for modeling and definition of the correlation structure ρ_{i,N}, justifies a temporal sub-additivity inequality between the total premiums of the insurance firm, stated in (10.1).
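In the notation of the sketches above, the temporal sub-additivity referenced as (10.1) amounts to the following inequality, which holds because every correlation factor is at most one:

```latex
\begin{equation}
  \pi_{T+1}[\mathrm{total}] \;\le\; \pi_T[\mathrm{total}]
  \qquad \text{cf. (10.1)}
\end{equation}
```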

It is undesirable for any insurance firm to seek to lower its total cumulative premium intentionally because of reliance on diversification. However, one implication for underwriting guidelines is that after the total firm premium is accumulated with a model that takes account of inter-risk dependencies, this total monetary amount can be back-allocated to individual risks and policies, and thus provide a sustainable competitive edge in pricing. The business function of diversification, and of taking advantage of its consequent premium cost savings, is achieved through two statistical operations: accumulating pure flood premium with a correlation structure, and then back-allocating the total firm's premium down to single contributing risk granularity. A backwardation relationship for the back-allocated single-risk and single-policy premium π'_{T+1}[r_n] can be derived with a proportional ratio of standard deviations. This per-risk back-allocation ratio is constructed from the single risk's standard deviation of expected loss σ_{T+1}[r_n] and the total linear sum of all per-risk standard deviations in the insurance firm's book of business.

From the temporal sub-additivity inequality between total firm premiums in (10.1) and the back-allocation process for the total premium down to single risk premium in (11.0), it is evident that there are economies of scale and cost in insurance policy underwriting between the two time periods for any arbitrary single risk r_n. These cost savings are expressed in (12.0).
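A hedged sketch of the back-allocation ratio (11.0) and the per-risk savings inequality (12.0) as described in the two preceding paragraphs; the notation follows the sketches above.

```latex
\begin{align}
  \pi'_{T+1}[r_n]
    &= \pi_{T+1}[\mathrm{total}] \cdot
       \frac{\sigma_{T+1}[r_n]}{\sum_{k=1}^{N} \sigma_{T+1}[r_k]}
    && \text{back-allocation, cf. (11.0)} \\
  \pi'_{T+1}[r_n] &\;\le\; \pi_T[r_n]
    && \text{per-risk cost savings, cf. (12.0)}
\end{align}
```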

In our case study of a micro-economy and one notional insurance firm's portfolio of one insured peril, namely flood, these economies of premium cost are driven by geo-spatial diversification among the insured risks. We support this theoretical discourse with a numerical study.

2.0 Notional Flood Insurance Portfolio Case Study

We construct two notional business units, each containing ten risks and, respectively, ten insurance policies. The risks in both units are geo-spatially clustered in high-intensity flood zones – Jersey City in New Jersey (‘Unit NJ’) and Baton Rouge in Louisiana (‘Unit BR’). For each business unit we perform two numerical computations of premium accumulation under two dependence regimes. Each unit’s accumulated fully dependent premium is computed by equation (9.0). Each unit’s accumulated partially dependent premium, modeled with a constant correlation factor of 0.6 (60%) between any two risks, is computed for both units by equation (10.0). The total insurance firm’s premium under both full dependence and partial dependence is simply a linear sum – business unit premiums roll up to the book total.
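A minimal Python sketch of the two accumulation regimes for one ten-risk unit, assuming the standard-deviation premium form from the sketches above; the expected losses, standard deviations and the 0.25 loading factor are illustrative placeholders, not figures from the case study.

```python
import numpy as np

def accumulate_premium(expected_losses, sigmas, corr, loading=0.25):
    """Accumulate a unit premium as expected loss plus a loading on the
    accumulated standard deviation (assumed standard deviation principle)."""
    sigmas = np.asarray(sigmas)
    unit_sigma = np.sqrt(sigmas @ corr @ sigmas)   # sqrt(sum_ij rho_ij * s_i * s_j)
    return float(np.sum(expected_losses) + loading * unit_sigma)

n_risks = 10                                        # ten risks per business unit
rng = np.random.default_rng(0)
exp_loss = rng.uniform(1_000, 5_000, n_risks)       # illustrative expected losses
sigma = rng.uniform(2_000, 8_000, n_risks)          # illustrative per-risk std devs

full_dep = np.ones((n_risks, n_risks))              # rho = 1 everywhere, cf. (9.0)
partial = np.full((n_risks, n_risks), 0.6)          # constant rho = 0.6 ...
np.fill_diagonal(partial, 1.0)                      # ... with unit diagonal, cf. (10.0)

p_full = accumulate_premium(exp_loss, sigma, full_dep)
p_partial = accumulate_premium(exp_loss, sigma, partial)
print(f"fully dependent unit premium:  {p_full:,.0f}")
print(f"partially dependent (rho=0.6): {p_partial:,.0f}")   # always lower, cf. (10.1)
```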

In all of our case studies we have focused continuously on the impact of measuring geo-spatial dependencies and on their interpretation and usability in risk and premium diversification. For the actuarial task of premium accumulation across business units, we assume that the insurance firm will simply roll up unit total premiums and will not look for competitive pricing as a result of diversification across business units. This practice is justified by underwriting and pricing guidelines being managed somewhat autonomously by each geo-administrative business unit, and by premium and financial reporting being done in the same manner.

In our numerical case study we confirm that the theoretical inequality (10.1), which defines temporal sub-additivity of premium with and without the impact of modeled dependence, is maintained. The total business unit premium computed without modeled correlation data, under the assumption of full dependence, always exceeds the unit's premium under partial dependence computed with acquired and modeled correlation factors.

This justifies performing back-allocation in both business units, using procedure (11.0), of the total premium computed under partial dependence. In this way, competitive cost savings can be distributed down to the single risk premium. In Table 4 we show the results of this back-allocation procedure for all single risks in both business units:

[Table 4: back-allocated single-risk premiums for Unit NJ and Unit BR]

For each single risk we observe that the per-risk premium inequality (12.0) is maintained by the numerical results. Partial dependence, which can be viewed as the statistical-modeling expression of imperfect insurance risk diversification, can thus lead to opportunities for competitive premium pricing and premium cost savings for the insured on a per-risk and per-policy basis.

3.0 Functions and Algorithms for Insurance Data Components

3.1 Definition of Insurance Big Data Components

Large insurance data components facilitate, and practically enable, the actuarial and statistical tasks of measuring dependencies, modeling loss accumulations and back-allocating total business unit premium to single risk policies. For this study, our definition of big insurance data components covers historical and modeled data at high geospatial granularity, structured in up to one million simulation maps. For the modeling of a single (re)insurance product, a single map can contain a few hundred historical, modeled and physical-measure data points. For a large book of business or portfolio simulation, one map may contain millions of such data points. Time complexity is another feature of big data: global but structured and distributed data sets are updated asynchronously and oftentimes without a schedule, depending on scientific and business requirements and on computational resources. Such big data components thus have a critical and indispensable role in defining competitive premium cost savings for the insureds, which otherwise may not be found sustainable by the policy underwriters and the insurance firm.

3.2 Intersections of Exposure, Physical and Modeled Simulated data sets

Fast-compute and big data platforms are designed to perform various geospatial modeling and analysis tasks. A fundamental task is projecting an insured exposure map and computing its intersection with multiple simulated stochastic flood intensity maps and geo-physical property maps containing coastal and river-bank elevations and distances to water bodies. This algorithm performs spatial caching and indexing of all latitude- and longitude-geo-coded units and grid cells with insured risk exposure and modeled stochastic flood intensity. Geo-spatial interpolation is also employed to compute and adjust peril intensities to the distances and geo-physical elevations of the insured risks.
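A simplified sketch of the exposure-to-intensity intersection step, using plain NumPy; the grid-cell size, the intensity values and the inverse-distance weighting are illustrative assumptions rather than the platform's actual algorithm.

```python
import numpy as np

# Illustrative flood-intensity grid keyed by (lat_cell, lon_cell); in practice this
# would come from simulated stochastic flood intensity maps.
CELL = 0.01                                    # assumed grid-cell size in degrees
intensity_grid = {(4071, -7404): 1.8,          # hypothetical values near Jersey City
                  (4071, -7403): 1.2}

def cell_index(lat, lon, cell=CELL):
    """Spatial index: map a geo-coded insured location to its grid cell."""
    return (int(round(lat / cell)), int(round(lon / cell)))

def interpolate_intensity(lat, lon, grid, cell=CELL):
    """Inverse-distance interpolation over the cells that carry modeled intensity;
    returns 0.0 where no intensity is modeled."""
    weights, values = [], []
    for (ci, cj), val in grid.items():
        d = np.hypot(lat - ci * cell, lon - cj * cell) + 1e-9
        weights.append(1.0 / d)
        values.append(val)
    return float(np.average(values, weights=weights)) if values else 0.0

# Intersect a hypothetical insured exposure point with the intensity maps.
lat, lon = 40.7178, -74.0431
print(cell_index(lat, lon), interpolate_intensity(lat, lon, intensity_grid))
```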

3.3 Reduction and Optimization through Mapping and Parallelism

One definition of Big Data relevant to our study is data sets that are too large and too complex to be processed by traditional technologies and algorithms. In principle, moving data is the most computationally expensive task in solving big geo-spatial problems, such as modeling and measuring inter-risk dependencies and diversification in an insurance portfolio. The cost of big geo-spatial solutions is magnified because large geo-spatial data sets are typically distributed across multiple physical computational environments as a result of their size and structure. The solution is distributed optimization, which is achieved by a sequence of algorithms. As a first step, a mapping and splitting algorithm divides large data sets into sub-sets and performs statistical and modeling computations on the smaller sub-sets. In our computational case study the smaller data chunks represent insurance risks and policies in geo-physically dependent zones, such as river basins and coastal segments. The smaller data sets are processed as smaller sub-problems in parallel by appropriately assigned computational resources. In our model we solve the smaller, chunked data-set computations for flood intensity and then for modeling and estimating fully simulated, probabilistic insurance loss. Once the cost-effective operations on the smaller sub-sets are complete, a second algorithm collects and maps together the results of the first-stage compute for subsequent data analytics and presentation operations. For single insurance products, business units and portfolios, an ordered accumulation of risks is achieved by mapping according to the strength, or lack thereof, of their dependencies. Data sets and tasks with identical characteristics can be grouped together, and the resources for their processing significantly reduced, by avoiding replication or repetition of computational tasks that have already been mapped and can now be reused. The stored post-analytics, post-processed data can also be distributed across different physical storage capacities by a secondary scheduling algorithm, which intelligently allocates chunks of modeled and post-processed data to available storage resources. This family of techniques is generally known as MapReduce.
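A conceptual Python sketch of the map, parallel-compute and reduce sequence described above; the zone keys and loss figures are hypothetical, and the per-chunk computation is a placeholder for the flood-intensity and loss simulation rather than a real model.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import defaultdict

def map_to_chunks(policies):
    """Map step: split the portfolio into geo-physically dependent chunks
    (e.g. river basins or coastal segments) keyed by a zone identifier."""
    chunks = defaultdict(list)
    for policy in policies:
        chunks[policy["zone"]].append(policy)
    return chunks

def compute_chunk_loss(chunk):
    """Worker: compute one chunk's loss; a placeholder for the flood-intensity
    and probabilistic loss simulation described in the text."""
    return sum(p["expected_loss"] for p in chunk)

def reduce_results(per_chunk_losses):
    """Reduce step: collect the per-chunk results into a portfolio total."""
    return sum(per_chunk_losses)

if __name__ == "__main__":
    portfolio = [
        {"zone": "NJ-basin-1", "expected_loss": 1200.0},
        {"zone": "NJ-basin-1", "expected_loss": 950.0},
        {"zone": "BR-coastal-2", "expected_loss": 2100.0},
    ]
    chunks = map_to_chunks(portfolio)
    with ProcessPoolExecutor() as pool:                 # sub-problems run in parallel
        losses = list(pool.map(compute_chunk_loss, chunks.values()))
    print(reduce_results(losses))
```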

3.4 Scheduling and Synchronization by Service Chaining

Distributed and service-chaining algorithms process geo-spatial analysis tasks on data components simultaneously and automatically. For logically independent processes, such as computing intensities or losses on uncorrelated iterations of a simulation, service-chaining algorithms divide and manage the tasks among separate computing resources. Dependencies and correlations among such data chunks may not exist because of large geo-spatial distances, as we saw in the modeling and pricing of our case studies; hence they do not have to be accounted for computationally, and performance improvements are gained. For such cases both the input data and the computational tasks can be broken down into pieces and sub-tasks respectively. For logically inter-dependent tasks, such as accumulations of inter-dependent quantities like losses in geographic proximity, chaining algorithms automatically order the start and completion of dependent sub-tasks. In our modeled scenarios, the simulated loss distributions of risks in immediate proximity, where dependencies are expected to be strongest, are accumulated first. A second tier of accumulations, for risks with partial dependence and full independence measures, is scheduled once the first tier of accumulations of highly dependent risks is complete. Service-chaining methodologies work in collaboration with auto-scaling memory algorithms, which provide or remove computational memory resources depending on the intensity of the modeling and statistical tasks. Significant challenges remain in processing shared data structures. An insurance risk management example, which we are currently developing for our next working paper, would be pricing a complex multi-tiered product comprised of many geo-spatially dependent risks, and then back-allocating a risk metric, such as tail value at risk, down to single risk granularity. On the statistical level this back-allocation and risk management task involves a process called de-convolution, also known as component convolution. A computational and optimization challenge is present when highly dependent and logically connected statistical operations are performed on chunks of data distributed across different hard storage resources. Solutions are being developed for multi-threaded implementations of map-reduce algorithms, which address such computationally intensive tasks. In such procedures the mapping is done by task definition and not directly onto the raw and static data.
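A conceptual sketch of the two-tier scheduling described above, using Python futures; the cluster names and loss values are hypothetical, and a real implementation would perform the convolution-based accumulations the text describes rather than simple sums.

```python
from concurrent.futures import ThreadPoolExecutor

def accumulate(losses, label):
    """Placeholder for a loss-accumulation sub-task (e.g. a convolution)."""
    total = sum(losses)
    print(f"{label}: {total}")
    return total

# Tier 1: highly dependent risks in immediate proximity are accumulated first;
# the clusters are independent of each other, so they can run in parallel.
proximity_clusters = {"NJ-cluster": [1200.0, 950.0], "BR-cluster": [2100.0, 800.0]}

with ThreadPoolExecutor() as pool:
    tier1 = {name: pool.submit(accumulate, losses, name)
             for name, losses in proximity_clusters.items()}
    # Tier 2 is chained on tier 1: it starts only once the dependent
    # sub-results it consumes are complete.
    tier1_totals = [future.result() for future in tier1.values()]

book_total = accumulate(tier1_totals, "book (partially dependent / independent tier)")
```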

Some Conclusions and Further Work

With advances in computational methodologies for natural catastrophe and insurance portfolio modeling, practitioners are producing increasingly larger data sets. Simultaneously, single product and portfolio optimization techniques that take advantage of metrics of diversification and inter-risk dependency are being used in insurance premium underwriting. Such optimization techniques significantly increase the frequency with which insurance underwriting data is produced and require new types of algorithms that can process multiple large, distributed and frequently updated data sets. Such algorithms have been developed theoretically, and they are now moving from the proof-of-concept phase in academic environments to production implementations in the modeling and computational systems of insurance firms.

Both traditional statistical modeling methodologies, such as premium pricing, and newer advances in the definition of inter-risk variance-covariance and correlation matrices and in policy and portfolio accumulation principles require significant data management and computational resources to account for the effects of dependencies and diversification. Accounting for these effects allows the insurance firm to support cost savings in premium value for its policy holders.

Even with many of the reviewed advances, there are still open areas for research in statistical modeling, single product pricing, portfolio accumulation, and their supporting optimal big insurance data structures and algorithms. Algorithmic communication and synchronization between globally distributed, structured and dependent data sets is expensive. Optimizing and reducing the computational processing cost of data analytics is a top priority for both scientists and practitioners. Optimal partitioning and clustering of data, and particularly of geospatial images, is another active area of research.


Analytics and Survival in the Data Age

Do I really mean survival? I do. There’s not a bit of exaggeration there.

I firmly believe—based on extensive qualitative and quantitative research and the many interviews and discussions I’ve conducted with insurance industry leaders—that analytics represent the industry’s best path to success and survival in a rapidly transforming world.

Certainly, virtually every insurance carrier is using analytics in one or more parts of the enterprise—but analytics is not being used to anywhere near its full potential. Very few carriers can honestly claim to have a data-conscious culture or say they are applying innovative modeling techniques.

What the industry needs now are well-defined strategies and tactics for immediate implementation to enhance the customer experience (including claims), increase accuracy in underwriting and pricing, optimize operations and boost profitability. Simply put, insurers need to immediately implement a meaningful and robust analytics strategy.

That is why we are holding the 3rd Annual Insurance Analytics USA Summit (March 14-15, Chicago), to provide insights into how to transform the way an organization uses analytics. The lessons from experts about how to prepare the organization for analytics success include:

  • Become an analytics powerhouse: Gain executive buy-in for analytics implementation, build a team to use effective analytics to solve critical business challenges and create a culture of data-centricity throughout the organization.
  • Keep all eyes on the customer: Develop a 360-degree view of the customer; integrate data from disparate sources across the organization to develop a more thorough view of the customer; glean insights for application through underwriting, pricing, marketing, claims, resource management and fighting fraud; design and deliver an exceptional customer experience; and effectively use segmentation and personalization to improve customer lifetime value.
  • Effectively use new and external data: Drive actionable insights from the explosion of big data and new data sources, including telematics, social media and texts.
  • Transform your product lines using analytics: Embed analytics throughout underwriting and pricing to optimize products and improve profitability at each stage of the value chain through accurate risk assessment.

This summit is a must-attend event for executives who are responsible for insurance analytics strategies from insurance carriers (P&C, commercial, specialty, health and life) as well as executives and others responsible for virtually any part of the insurance enterprise.

I hope you’ll join me and 250-plus industry executives at the Insurance Analytics Summit.

The 2 New Realities Because of Big Data

I have some bad news. There are no longer any easy or obvious niches of sustained, guaranteed profits in insurance. In today’s environment of big data and analytics, all the easy wins are too quickly identified, targeted and brought back to par. If you’ve found a profitable niche, be aware that the rest of the industry is looking and will eventually find it, too.

Why? The industry has simply gotten very good at knowing what it insures and being able to effectively price to risk.

Once upon a time, it was sufficient to rely on basic historical data to identify profitable segments. Loss ratio is lower for small risks in Wisconsin? Let’s target those. Today, however, all of these “obvious” wins stand out like beacons in the darkness.

To win in a game where the players have access to big data and advanced analytics, carriers should consider two new realities:

  • You can’t count on finding easy opportunities down intuitive paths. If it’s easy and intuitive, you can bet that everyone else will eventually find it, too.
  • Sustainable opportunities lie in embracing the non-obvious and the counter-intuitive: finding multivariate relationships between variables, using data from novel sources and incorporating information from other coverages.

Just knowing what you insure is only the start. The big trick is putting new information to good use. How can carriers translate information on these new opportunities into action? In particular, how can carriers better price to risk?

We see two general strategies that carriers are using in pricing to risk:

  • Put risks into categories based on predicted profitability level
  • Put risks into categories based on predicted loss

The difference appears subtle at first glance. Which approach a given carrier will take is driven by its ability to employ flexible pricing. As we will now explore, it’s possible for carriers to implement risk-based pricing in both price-constrained and flexible-rate environments.

Predicting Profitability: Triage Model

In the first strategy, carriers evaluate their ability to profitably write a risk using their current pricing structure. This strategy often prevails where there are constraints on pricing flexibility, such as regulatory constraints, and it allows a carrier to price to risk, even when the market-facing price on any given risk is fixed.

The most common application here is a true triage model: Use the predicted profitability on a single risk to determine appetite. Often, the carrier will translate a model score to a “red/yellow/green” score that the underwriter (or automated system) uses to guide her evaluation of whether the risk fits the appetite. The triage model is used to shut off the flow of unprofitable business by simply refusing to offer coverage at prices below the level of profitability.
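As a sketch of how a model score might be translated into the red/yellow/green indication described here, consider the following; the profitability measure and thresholds are illustrative assumptions, not from the article.

```python
def triage(predicted_profitability: float,
           green_floor: float = 0.05,
           red_ceiling: float = -0.05) -> str:
    """Map a modeled profitability score (e.g. expected margin as a fraction
    of premium) to a red/yellow/green appetite indication; thresholds are
    illustrative."""
    if predicted_profitability >= green_floor:
        return "green"    # fits appetite: write at current rates
    if predicted_profitability <= red_ceiling:
        return "red"      # decline: unprofitable at the fixed market-facing price
    return "yellow"       # refer for underwriter review

print(triage(0.12), triage(0.01), triage(-0.20))   # -> green yellow red
```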

A triage model can also be implemented as an agency-facing tool. When agents get an indication (red/yellow/green again), they start to learn what the carrier’s appetite will be and are more likely to send only business that fits the appetite. This approach has the added benefit of reducing underwriting time and expense for the carrier; the decline rate drops, and the bind/quote rate rises when the agents have more visibility into carrier appetite.

A final application carriers are using is in overall account evaluation. It may be that a carrier has little or no flexibility on workers’ compensation prices, but significant pricing flexibility on pricing for the business owners policy (BOP) cover. By knowing exactly how profitable (or unprofitable) the WC policy will be at current rates, the carrier can adjust price on the BOP side to bring the entire account to target profitability.

Predicting Loss: Pricing Model

If a carrier has pricing flexibility, pricing to risk is more straightforward: Simply adjust price on a per-risk basis. That said, there are still several viable approaches to individual risk pricing. Regardless of approach, one of the key problems these carriers must address is the disruption that inevitably follows any new approach to pricing, particularly on renewal business.

The first, and least disruptive, approach is to use a pricing model exclusively on new business opportunities. This allows the carrier to effectively act as a sniper and take over-priced business from competitors. This is the strategy employed by several of the big personal auto carriers in their “switch to us and save 12%” campaigns. Here we see “know what you insure” being played out in living color; carriers are betting that their models are better able to identify good risks, and offer better prices, than the pricing models employed by the rest of the market.

Second, carriers can price to risk by employing a more granular rate structure. This is sometimes referred to as “tiering” – the model helps define different levels of loss potential, and those varying levels are reflected in a multi-tiered rate plan. One key advantage here is that this might open some new markets and opportunities not in better risks, but in higher-risk categories. By offering coverage for these higher-cost risks, at higher rates, the carrier can still maintain profitability.

Finally, there is the most dramatic and potentially most disruptive strategy: pricing every piece of new and renewal business to risk. This is sometimes called re-underwriting the book. Here, the carrier is putting a lot of faith in the new model to correctly identify risk and identify the correct price for all risks. It’s very common in this scenario for the carrier to place caps on a single-year price change. For example, there may be renewals that are indicated at +35% rate, but annual change will be limited to +10%. Alternatively, carriers may not take price at all on renewal accounts, unless there are exposure changes or losses on the expiring policy.
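A small sketch of the single-year cap described in this example; the premium amount is illustrative.

```python
def capped_renewal_change(indicated_change: float, cap: float = 0.10) -> float:
    """Limit a single-year renewal rate change to +/- cap, as in the example
    of a +35% indication capped at +10%."""
    return max(-cap, min(cap, indicated_change))

current_premium = 10_000.0
indicated = 0.35                                   # model-indicated +35% rate need
applied = capped_renewal_change(indicated)         # capped at +10%
print(f"{current_premium * (1 + applied):,.2f}")   # -> 11,000.00
```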

Know What You Insure

Ultimately, the winners in the insurance space are the carriers that best know what they insure. Fortunately, in an environment where big data is becoming more available, and more advanced analytics are being employed, it’s now possible for most carriers to acquire this knowledge. Whether they’re using this knowledge in building strategy, smarter underwriting or pricing to risk, the results are the same: consistent profitability.

Sometimes there are pricing constraints that would, at first glance, make effectively pricing to risk challenging. As we have discussed, there are still some viable approaches for carriers facing price inflexibility. Even for carriers with unlimited price flexibility, pricing to risk isn’t as easy as simply applying a model rate to each account; insurers must take care to avoid unnecessary price disruption. We’ve discussed several approaches here, as well.

Effectively pricing to risk gives carriers the opportunity to win without relying on protecting a secret, profitable niche. In the end, this will give them the ability to profit in multiple markets and multiple niches across the entire spectrum of risk quality.

How to Avoid Commoditization

How can a company liberate itself from the death spiral of product commoditization?

Competing on price is generally a losing proposition—and an exhausting way to run a business. But when a market matures and customers start focusing on price, what’s a business to do?

The answer, as counterintuitive as it may seem, is to deliver a better customer experience.

It’s a proposition some executives reject outright. After all, a better customer experience costs more to deliver, right? How on earth could that be a beneficial strategy for a company that’s facing commoditization pressures?

Go From Commodity to Necessity

There are two ways that a great customer experience can improve price competitiveness, and the first involves simply removing yourself from the price comparison arena.

Consider those companies that have flourished selling products or services that were previously thought to be commodities: Starbucks and coffee, Nike and sneakers, Apple and laptops. They all broke free from the commodity quicksand by creating an experience their target market was willing to pay more for.

They achieved that, in part, by grounding their customer experience in a purpose-driven brand that resonated with their target market.

Nike, for example, didn’t purport to just sell sneakers; it aimed to bring “inspiration and innovation to every athlete in the world.” Starbucks didn’t focus on selling coffee; it sought to create a comfortable “third place” (between work and home) where people could relax and decompress. Apple’s fixation was never on the technology but rather on the design of a simple, effortless user experience.

But these companies also walk the talk by engineering customer experiences that credibly reinforce their brand promise (for example, the carefully curated sights, sounds and aromas in a Starbucks coffee shop or the seamless integration across Apple devices).

The result is that these companies create something of considerable value to their customers. Something that ceases to be a commodity and instead becomes a necessity. Something that people are simply willing to pay more for.

That makes their offerings more price competitive—but not because they’re matching lower-priced competitors. Rather, despite the higher price point, people view these firms as delivering good value, in light of the rational and emotional satisfaction they derive from the companies’ products.

The lesson: Hook customers with both the mind and the heart, and price commoditization quickly can become a thing of the past.

Gain Greater Pricing Latitude

Creating a highly appealing brand experience certainly can help remove a company from the morass of price-based competition. But the reality is that price does matter. While people may pay more for a great customer experience, there are limits to how much more.

And so, even for those companies that succeed in differentiating their customer experience, it remains important to create a competitive cost structure that affords some flexibility in pricing without crimping margins.

At first blush, these might seem like contradictory goals: a better customer experience and a more competitive cost structure. But the surprising truth is that these two business objectives are actually quite compatible.

A great customer experience can actually cost less to deliver, thanks to a fundamental principle that many businesses fail to appreciate: Broken or even just unfulfilling customer experiences inevitably create more work and expense for an organization.

That’s because subpar customer interactions often trigger additional customer contacts that are simply unnecessary. Some examples:

  • An individual receives an explanation of benefits (EOB) from his health insurer for a recent medical procedure. The EOB is difficult to read, let alone interpret. What does the insured do? He calls the insurance company for clarification.
  • A cable TV subscriber purchases an add-on service, but the sales representative fails to fully explain the associated charges. When the subscriber’s next cable bill arrives, she’s unpleasantly surprised and believes an error has been made. She calls the cable company to complain.
  • A mutual fund investor requests a change to his account. The service representative helping him fails to set expectations for a return call. Two days later, having not heard from anyone, what does the investor do? He calls the mutual fund company to follow up on the request.
  • A student researching a computer laptop purchase on the manufacturer’s website can’t understand the difference between two closely related models. To be sure that he orders the right one for his needs, what does he do? He calls the manufacturer.
  • An insurance policyholder receives a contractual amendment to her policy that fails to clearly explain, in plain English, the rationale for the change and its impact on her coverage. What does the insured do? She calls her insurance agent for assistance.

In all of these examples, less-than-ideal customer experiences generate additional calls to centralized service centers or field sales representatives. But the tragedy is that a better experience upstream would eliminate the need for many of these customer contacts.

Every incoming call, email, tweet or letter drives real expense—in service, training and other support resources. Plus, because many of these contacts come from frustrated customers, they often involve escalated case handling and complex problem resolution, which, by embroiling senior staff, managers and executives in the mess, drive the associated expense up considerably.

Studies suggest that at most companies, as many as a third of all customer contacts are unnecessary—generated only because the customer had a failed or unfulfilling prior interaction (with a sales rep, a call center, an account statement, etc.).

In organizations with large customer bases, this easily can translate into hundreds of thousands of expense-inducing (but totally avoidable) transactions.

By inflating a company’s operating expenses, these unnecessary customer contacts make it more difficult to price aggressively without compromising margins.

If, however, you deliver a customer experience that preempts such contacts, you help control (if not reduce) operating expenses, thereby providing greater latitude to achieve competitive pricing.

Putting the Strategy to Work

If your product category is devolving into a commodity (a prospect that doesn’t require much imagination on the part of insurance executives), break from the pack and increase your pricing leverage with these two tactics:

  • Pinpoint what’s really valuable to your customers.

Starbucks tapped into consumers’ desire for a “third place” between home and work—a place for conversation and a sense of community. By shaping the customer experience accordingly (and recognizing that the business was much more than just a purveyor of coffee), Starbucks set itself apart in a crowded, commoditized market.

Insurers should similarly think carefully about what really matters to their clientele and then engineer a product and service experience that capitalizes on those insights. Commercial policyholders, for example, care a lot more about growing their business than insuring it. Help them on both counts, and they’ll be a lot less likely to treat you as a commodity supplier.

  • Figure out why customers contact you.

Apple has long had a skill for understanding how new technologies can frustrate rather than delight customers. The company used that insight to create elegantly designed devices that are intuitive and effortless to use. (Or, to invoke the oft-repeated mantra of Apple co-founder Steve Jobs, “It just works.”)

Make your customer experience just as effortless by drilling into the top 10 reasons customers contact you in the first place. Whether your company handles a thousand customer interactions a year or millions, don’t assume they’re all “sensible” interactions. You’ll likely find some subset that are triggered by customer confusion, ambiguity or annoyance—and could be preempted with upstream experience improvements, such as simpler coverage options, plain language policy documents or proactive claim status notifications.

By eliminating just a portion of these unnecessary, avoidable interactions, you’ll not only make customers happier, you’ll make your whole operation more efficient. That, in turn, means a more competitive cost structure that can support more competitive pricing.

Whether it’s coffee, sneakers, laptops or insurance, every product category eventually matures, and the ugly march toward commoditization begins. In these situations, the smartest companies recognize that the key is not to compete on price but on value.

They focus on continuously refining their brand experience—revealing and addressing unmet customer needs, identifying and preempting unnecessary customer contacts.

As a result, they enjoy reduced price sensitivity among their customers, coupled with a more competitive cost structure. And that’s the perfect recipe for success in a crowded, commoditized market.

This article first appeared on carriermanagement.com.