
3-Point Plan for an Innovation Portfolio

The challenge is that organizations are overwhelmed with more ideas than they can sort out, much less pursue.

One lament I often hear when I advise large company executives on the need to “Think Big” is that their biggest innovation challenge is not thinking big—it is thinking too much. Purportedly great ideas come from the front lines where the organization interacts with products and customers. They come from technology or marketing wizards keeping a sharp eye on disruptive market trends. They come from executives and board members grappling with questions at the organization’s strategic horizon. The challenge is that organizations are overwhelmed with more ideas than they can sort out, much less pursue.

Perhaps the best advice on how to deal with the challenge of too many ideas comes from Peter Drucker, who offered this general principle:

"Innovation begins with the analysis of opportunities. The search has to be organized, and must be done on a regular, systematic basis." Don't subscribe to romantic theories of innovation that depend on "flashes of genius."

Rather than relying on randomness or organizational influence to dictate which ideas find a receptive ear, here is a three-point plan for initiating a systematic process for uncovering, assessing and scaling the best ideas.

1. Inventory Opportunities

Start by casting a wide net. For example, sponsor a series of innovation contests and workshops to educate, build alignment and uncover potentially good ideas. Hold scenario planning sessions with senior executives and board members to explore both incremental and disruptive future business scenarios. Questions to ask might include:

  • Can you augment your customer interfaces to reveal customer preferences and to customize the customer experience, as Amazon and Netflix do?
  • Are there opportunities to better utilize the big data being generated by your business processes, including customer, operational or performance data, for innovation?
  • How might you reimagine key business, customer, and competitive issues if you could start with a clean sheet of paper?
  • How do the six disruptive technologies affecting other information intensive companies apply to you?
  • What extreme competitive threats, i.e., doomsday scenarios, might new entrants wielding these disruptive technologies pose to your organization?

Opportunities should include both continuous and discontinuous innovations. Continuous innovations offer incremental or faster, better, cheaper-type optimizations, such as shedding costs, reducing cycle times and generating incremental revenue. Discontinuous innovations are those that rise to the level of game-changing potential.

2. Develop a Holistic View Using an Innovation Portfolio

Next, assess each opportunity based on competitive impact and investment type using the portfolio analysis framework shown in Figure 1.

Figure 1: Portfolio Analysis Framework

Competitive impact measures differentiation against what competitors might deploy by the time an idea is launched.

Remember Wayne Gretzky (who famously said he skates to where the puck is going, not to where it is)! A key mistake is evaluating an idea against one’s current internal capabilities, as opposed to where the competition is going. This dimension forces an explicit calculation of an idea’s future potential competitive impact. Investments can be one of three types:

  • Stay in Business investments (SIB) are for basic infrastructure or non-discretionary government mandates. SIB investments should be assessed on how adequately they meet regulatory or technical requirements while minimizing risk and cost.
  • Return on Investment opportunities (ROI) are pursued for predictable, near-term financial returns. Standard measures, such as net present value (NPV), return on equity (ROE) or other well-understood metrics are applicable here.
  • Option-Creating Investments (OCI) are pursued to create business options that might yield killer-app-type opportunities in the future. OCI investments do not yield financial returns directly. Instead, they build capabilities and learnings that can be translated into future ROI opportunities. Like financial options, OCIs carry high risk but should offer the potential for outsized returns.

After arraying opportunities in the framework, eliminate those that fall outside of acceptable boundaries. For example, companies should not pursue opportunities that, once completed, are already at a disadvantage against the competition.

For the remaining opportunities, develop an initial sizing of investment levels and potential benefits according to each investment category. Filter as appropriate. For example, eliminate ROI opportunities that do not meet standard corporate hurdle rates. Eliminate OCI opportunities that do not exhibit extraordinary option value. Eliminate SIB ideas that do not adequately minimize cost and risk, and be very skeptical of SIB opportunities aimed at providing ROI or OCI benefits; such opportunities should be judged directly as those investment types.
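The three category-specific filters above can be expressed as a simple screen over a list of opportunities. This is a minimal sketch, not part of the framework itself: the 1-to-5 scoring scales, the hurdle expressed as positive NPV, and all field names and sample ideas are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    category: str            # "SIB", "ROI", or "OCI"
    competitive_impact: int  # 1 (lagging at launch) .. 5 (game-changing)
    npv: float = 0.0         # meaningful for ROI ideas only
    option_value: int = 0    # 1..5 subjective option value, for OCI ideas
    cost_risk_score: int = 0 # 1..5, how well an SIB idea minimizes cost and risk

def keep(opp: Opportunity) -> bool:
    """Apply the boundary check plus the three category-specific filters."""
    if opp.competitive_impact <= 1:      # already disadvantaged once completed
        return False
    if opp.category == "ROI":
        return opp.npv > 0.0             # must clear the corporate hurdle rate
    if opp.category == "OCI":
        return opp.option_value >= 4     # demand extraordinary option value
    if opp.category == "SIB":
        return opp.cost_risk_score >= 4  # must minimize cost and risk
    return False

ideas = [
    Opportunity("new billing core", "SIB", 3, cost_risk_score=5),
    Opportunity("dynamic pricing", "ROI", 4, npv=-2.0),
    Opportunity("telematics pilot", "OCI", 5, option_value=5),
]
portfolio = [o.name for o in ideas if keep(o)]
print(portfolio)  # → ['new billing core', 'telematics pilot']
```

The point of the exercise is that each investment type faces its own test; a single metric applied to all three would filter out exactly the wrong ideas.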

Figure 2 illustrates how the analysis might look at the end of this stage.

Figure 2: Portfolio Analysis Results

3. Balance the Innovation Portfolio

In personal investment portfolios, it is important not to place all hopes in one or two investments. The same is true for corporate innovation portfolios. To ensure competitiveness both in the near term and in the future, they should include a mix of incremental and disruptive innovations. The right balance and prioritization depend on a company's investment capabilities and competitive circumstances.

For example, as shown in Figure 3, a market leader might field a portfolio geared toward aggressive growth by enhancing its infrastructure, investing heavily in near-term profitable opportunities and developing a small number of killer app options for sustaining its competitive advantage. (My experience is that the right number of such options is on the low end of the magic 7, plus or minus two. That is because the limiting factor is senior executive attention, which is very limited, not investment dollars. Market leaders have lots of money to waste, but no project with true killer app potential can succeed without significant senior executive attention.)

Figure 3: A Market Leader's Balanced Portfolio

Other illustrative portfolio profiles are shown in Figure 4.

Commodity businesses tend to minimize SIB and OCI investments. Companies that are retooling might emphasize infrastructure and near-term investments and make only minimal investments in future options. Underperforming companies tend to invest in programs that barely achieve competitive parity, or worse, and do little to prepare for the future in any of the three investment categories.

Figure 4: Illustrative Portfolio Profiles

* * *

By adopting appropriate financial and competitive metrics and measures for each type of investment, companies avoid planning theatrics where guesses are disguised as rigorous forecasts. This can happen, for example, when infrastructure and other SIB investments are required to demonstrate explicit returns on investment. Or, it can happen when advocates of OCI efforts are required to calculate net present value of very uncertain long-term initiatives. Such forecasts can, of course, be made by savvy proponents. But the analyses are better testaments to rhetorical and spreadsheet skills than certainties about the future.

At the end of this three-step process, companies should have a prioritized and staged investment plan that represents a coordinated enterprise innovation strategy and follows the think big, start small and learn fast innovation road map. Achieving an adequate understanding of the entire landscape of possibilities facilitates and encourages thinking big. Continuing management of the innovation portfolio provides clear criteria for evaluating other big ideas as they come up. It also demands the discipline of starting small and learning fast in the pursuit of disruptive innovations that will shape the company’s future strategic prospects.

Is Price Optimization Really an Evil Idea?

No, because customers benefit, too. Most insurers should -- and can -- get to this next level of sophistication on pricing.

There seems to be a lot of misperception about what price optimization really is, largely driven by publicized assumptions that it will only serve the best interests of the company and hurt the consumer. Basically, price optimization boils down to applying analytics to available information to develop more quantitative and targeted pricing policies and processes.

Price optimization is currently used extensively in many industries, and the benefits flow to both the companies and the customers, with the customer rewards being highly visible. Through price optimization, retailers are able to present highly personalized and appealing offers to their customers based on past shopping and buying patterns coupled with predictions of customer wants and needs. Retailers are able to keep their best customers informed of sales and special offers that are of real value to them. The travel industry uses price optimization to manage profitability and, equipped with insights that allow it to fine-tune the metrics, is able to offer very attractive options to travelers. Capacity that would otherwise have gone unused attracts happy customers and often brings them back.

For the insurance industry, it is important to understand that price optimization does not replace risk-based pricing; rather, optimization is the next level of sophistication for risk-based pricing. With price optimization, insurers are able to explore product options and then find an optimal balance point among all options and constraints within complicated rating orders and large sets of data. This makes it possible to construct and present more appealing, more targeted product and service offerings. Personalized offerings can be shaped to meet personalized needs. The law of large numbers can be optimized for the individual situation.
Today, price optimization is being used most often by insurers in personal lines -- in many cases, those that are trying to innovate and capitalize on the next wave of analytics. The goal is to improve the bottom line and increase market share by using newly available types of analytics, models, tools and methods. These insurers don't see price optimization as an independent exercise; they view it as a key part of the business's journey to the next level of maturity. Recognizing that rate changes and the resulting customer reactions have an immediate and very significant tie to new business and renewals, and understanding that informed consumers expect offers that meet their personalized requirements, insurers see optimization as a journey that is essential for profitable growth in personal lines.

It is only a matter of time before the principles involved are applied to commercial risk pricing, especially for smaller and middle markets. As the comfort level increases and experience with the insights and tools matures, price optimization will likely become a significant aspect of the collaboration and negotiation process for mid-market and even large, complex cases.

The business benefits of price optimization are undeniable. Improved insights give insurers greater ability to achieve specific financial objectives for growth and profit. Fortified with intelligence, including a better understanding of customer demand and buying behaviors by segment, insurers can make business decisions and tradeoffs based on agreed-upon metrics rather than emotion and historical understandings that sometimes morph over the years.

While the benefits are clear, the reality is that price optimization is a complex endeavor. It involves deep analytics, advanced business intelligence and ready access to complete and accurate data.
Many companies are spending lots of time and resources building sophisticated models of loss cost, expenses and customer demand, incorporating competitive position and market data. Price optimization brings them all together, aligning them to specific business goals and the regulatory framework and enabling companies to clearly understand the trade-offs among various pricing strategies.

The extent of the use of price optimization in the insurance industry is small in terms of the number of companies that have implemented optimization or are conducting pilots. It is, however, important to note that price optimization is being adopted by the largest insurance companies -- those that have the most market share -- so the portion of the industry that is being affected is significant. It won't be long before a very large percentage of the premiums being written will be based on rates developed by using advanced analytics capabilities that involve price optimization.

In many insurance companies, there are both real and perceived hurdles that impede progress in price optimization. Project capacity is limited, and price optimization does not always make the list of top-priority efforts. For some insurers, there is an inherent cultural resistance to change, particularly when today's models have been delivering growth. Price optimization is complex; it requires special skills -- deep experience in predictive modeling and advanced analytics. And price optimization involves a transformation of the entire pricing process. But the insurers that are embracing and implementing price optimization are finding ways to overcome these challenges.

Obviously, most national insurers have the volume of data that is necessary to get price optimization right, but they can also be burdened with an overwhelming amount of data that originates from multiple sources and isn't always clean and consistent. In contrast, it is not unusual for regional insurers to think they don't have enough data.
The reality is that most insurers do have more than enough data to build and use customer demand models. Price optimization will work for more insurers than one might expect. Now is the time to lay the groundwork for competing effectively in the long run.
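At its core, the optimization described above balances risk-based cost against modeled customer demand. The sketch below is purely illustrative: the loss cost, expense load, logistic demand curve and the plus-or-minus 10% pricing band are all made-up assumptions. It searches a constrained band around the risk-based price for the profit-maximizing point; real implementations use far richer demand models and many more constraints.

```python
import math

# Illustrative inputs for one policy segment (assumptions, not real actuarial values)
EXPECTED_LOSS_COST = 600.0   # risk-based cost of expected claims
EXPENSE_LOAD = 100.0         # fixed expense per policy
BASE_PRICE = 800.0           # current risk-based price

def conversion_prob(price: float) -> float:
    """Toy demand model: conversion falls off as price rises above the base price."""
    return 1.0 / (1.0 + math.exp(0.01 * (price - BASE_PRICE)))

def expected_profit(price: float) -> float:
    """Expected profit per quote: margin times probability the customer buys."""
    return (price - EXPECTED_LOSS_COST - EXPENSE_LOAD) * conversion_prob(price)

# Grid search over a constrained band around the risk-based price (+/- 10%),
# standing in for regulatory and business constraints on how far rates may move.
candidates = [BASE_PRICE * (0.9 + 0.01 * i) for i in range(21)]
best = max(candidates, key=expected_profit)
print(round(best, 2), round(expected_profit(best), 2))
```

Even this toy version shows the key trade-off the article describes: the optimal price sits above the pure risk-based price only as long as the extra margin outweighs the customers lost to the higher quote.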

Monique Hesseling


Monique Hesseling is a partner at Strategy Meets Action, focused on developing effective roadmaps and helping companies expand their business opportunities. Recognized internationally for her knowledge and expertise, she is assisting SMA customers across the insurance ecosystem.

ICD-10 Delay Creates Workers' Comp Mess

Pushing implementation back another year raises costs and creates confusion for healthcare providers, billers and payers alike.

Right now, we would be launching the long-anticipated shift from ICD-9 to ICD-10 -- except that the Centers for Medicare and Medicaid Services (CMS) was ordered to make yet another change to the deadline. Instead of taking effect Oct. 1, 2014, the newest deadline for ICD-10 is Oct. 1, 2015. The inevitable is put off for another year. Delaying implementation of ICD-10 is a relief for some but grinding for others. Without a doubt, continued delays significantly affect costs and benefits for the healthcare system.

According to Michele Hibbert-Iacobacci, vice president of information management and support at Mitchell International, "On March 31, 2014, the ICD-10-CM/PCS (International Classification of Diseases -- 10th Revision, Clinical Modification and Procedural Coding System) implementation was delayed in the United States [because] the Senate approved a bill (H.R. 4302). This update to the obsolete ICD-9-CM/PCS was a requirement in the Health Insurance Portability and Accountability Act (HIPAA) for all covered entities. Workers' compensation has been excluded as an industry that is not covered under HIPAA; however, the providers submitting the medical bills to workers' compensation payers are covered entities. By proxy, the workers' compensation industry needed to prepare to accept ICD-10-CM/PCS by the implementation date of Oct. 1, 2014, and the majority of payers and vendors were ready to process bills by that date."

The move from ICD-9 to ICD-10 reflects substantial advances in medicine that have occurred during the past three decades. ICD-9 includes 17,000 diagnostic codes, whereas ICD-10 has 155,000 codes, reflecting much more detail and differentiation in diagnoses. The expanded and updated coding will enhance the definition of diseases and injuries and make payments more accurate. Yet continued delays have placed time and cost burdens on billers, suppliers and payers throughout the healthcare and insurance industries.
Organizations have spent millions of dollars on training personnel for the upgrade; now, they have to spend more on refresher courses and on training for new people who are replacing trained personnel who have left.

The delays also create a challenge because ICD-10 codes will be used sporadically before and after the deadline, requiring organizations to handle both sets of codes. There will be those who begin using the new coding early and those who never believed the day for the switch would come; the latter group could lag a long time. Accommodation will have to be made for old coding and dual coding. Bills will be submitted using either or both, so decisions must be made regarding payment. Will the paying organization assume the task of converting the codes? Should reimbursement be denied to those whose codes are not in compliance? Systems will need to accommodate both code sets to navigate the transition.

The drop-dead date for ICD-10 will come, whether it occurs in October 2015 or later. When the day comes, reimbursement will depend on accurate and timely coding. There are those who are thankful for the delay because they were not ready; they now have time to meet the new deadline. Those who were ready for the launch can now perfect the processes they created. The test for them is to sustain readiness for another year. That is costly. It is also tiring.
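A payer system straddling the transition has to recognize both code sets and apply the right one by date of service. The sketch below is a simplified illustration: the regular expressions are rough format heuristics (real validation requires the full ICD code tables), and the compliance rule keyed to the Oct. 1, 2015 date is one plausible payer policy, not a mandated one.

```python
import re
from datetime import date

ICD10_START = date(2015, 10, 1)  # compliance date cited in the article

# Simplified format checks -- real systems validate against full code tables.
ICD9_RE = re.compile(r"^[EV]?\d{2,3}(\.\d{1,2})?$")               # e.g. 250.00, V70.0
ICD10_RE = re.compile(r"^[A-TV-Z]\d[0-9A-Z](\.[0-9A-Z]{1,4})?$")  # e.g. E11.9, S52.501A

def code_set(code: str) -> str:
    """Guess which code set a diagnosis code belongs to (illustrative heuristic)."""
    if ICD10_RE.match(code):   # check ICD-10 first: E-codes look ICD-9-ish too
        return "ICD-10"
    if ICD9_RE.match(code):
        return "ICD-9"
    return "unknown"

def bill_is_compliant(code: str, date_of_service: date) -> bool:
    """One payer rule of thumb: ICD-9 before the switch date, ICD-10 on or after."""
    required = "ICD-10" if date_of_service >= ICD10_START else "ICD-9"
    return code_set(code) == required

print(bill_is_compliant("250.00", date(2015, 6, 1)))   # ICD-9 before the deadline
print(bill_is_compliant("E11.9", date(2015, 11, 1)))   # ICD-10 after the deadline
```

During the sporadic-usage window the article describes, a payer might route non-compliant bills to a conversion or denial queue instead of rejecting them outright; that business decision is exactly the one the text says each organization must make.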

Karen Wolfe


Karen Wolfe is founder, president and CEO of MedMetrics. She has been working in software design, development, data management and analysis specifically for the workers' compensation industry for nearly 25 years. Wolfe's background in healthcare, combined with her business and technology acumen, has resulted in unique expertise.

Whistleblower: Fed Defers to Big Banks

Stunning tapes underscore the risk that remains in our financial system -- and the need for 'interactive finance.'

"This American Life" teamed up with ProPublica for a blockbuster story that Federal Reserve regulators defer to mega bank Goldman Sachs on compliance issues. Thanks to whistleblower Carmen Segarra, the report about the culture at the Fed was so explosive that Sen. Elizabeth Warren called for an investigation within 24 hours. The whole mechanics of the story highlight the problems with our current system. But for a whistleblower coming forward, no one would likely learn of the big bank’s conduct or of regulators' deference to it. Once she provided authentic, unimpeachable audio, a compelling broadcast led a legislator to call for an investigation, but any probe may or may not yield  findings of  wrongdoing. The main result seems likely to be publicity for lawmakers, regulators and bankers. All pretty much par for the course, underscoring the concern I expressed in an earlier piece that a lack of control by the Fed could leave banks and markets in the same sort of condition that led to disaster in 2008. These issues are consequential for insurers not least because the industry holds $120 billion in mortgage-backed securities for commercial and multifamily real estate,  $336 billion in collateralized debt obligations (CDOs), commercial mortgage-backed securities (CMBSs) and asset-backed securities (ABSs) and $365 billion in residential mortgage-backed securities, according to the Mortgage Bankers Association and Federal Reserve. The insurance industry relies on these investments for significant portions of its operating profits, so it needs a safe and efficient financial system. A solution is at hand. "Interactive finance" addresses the insurance industry’s transparency needs with large banks by powering real-time monitoring and compliance as it creates efficient markets  and reduces regulatory costs. 
Marketcore, a firm I advise, is pioneering interactive finance to generate liquidity by rewarding individuals and institutions for revealing information that details risks. Interactive finance crowd-sources market participation by offering monetary or strategic incentives to individuals, organizations and institutions seeking loans, lines of credit or mortgages, or negotiating contracts. These rewards are offered in exchange for risk-detailing, confidence-building disclosures that increase trading volumes.

Whether the risk taker is a bank, insurance company or counterparty, all granters define rewards. A reward can constitute a financial advantage, such as a discount on the cost of information or a transaction; the sale of the information more than makes up for the discounted fee. The time-sensitive grant of advantage can be directed to specific products, benefiting traders. All this transpires on existing electronic displays: broadband, multimedia, mobile and interactive information networks and grids.

Interactive finance realizes a neutral risk identification and mitigation system with a system architecture that scans and values risks, even down to individual risk elements and their aggregations. As parties and counterparties crowd markets, each revealing specific risk information in return for equally precise and narrowly tailored rewards and incentives, their trading generates fresh data and metadata on risk tolerances in real time and near real time. This data and metadata can then be deployed to provide real-time confidence scoring of risk in dynamic markets. Every element is dynamic, like so many Internet activities and transactions. Interactive finance constantly authenticates risks with constantly refreshing feedback loops. Risk determination permits insureds, brokers and carriers to update risks through "a transparency index... based... on the quality and quantity of the risk data records."

Through these capabilities, Marketcore technologies connect the specific, individual risk vehicle with macro market data to present the current monetary value of the risk instrument, a transparency index documenting all the risk information about it and information on comparable financial instruments. Anyone participating receives a comprehensive depiction of certainty, risk, disclosures and value. There will be vastly more efficiency once interactive finance provides timely information that allows easy monitoring by regulators and lawmakers, provides incentives for compliance by big banks and stimulates efficient markets. If it does, there will be no more need for whistleblowers.

How to Lower Your Cyber Risk

Looking at application forms for cyber insurance suggests four basic steps that can reduce exposure to data breaches.

As we approach the close of 2014, virtually no one needs to be reminded that cyber liability is real and here to stay. Data breaches and cyber security incidents are on the rise. New York's attorney general reported that breaches tripled between 2006 and 2013, and, according to a recent study, 43% of companies experienced a breach last year.

What are some of the key issues accounting for this increase? First, information is the new oil, and it has value. Stolen financial and medical data can be purchased on the "dark web" and used for identity theft and fraudulent billing. Second, computer networks can be attacked relentlessly by hackers thousands of miles away, with little risk to the hackers. Third, entities are creating and storing more data than ever. It is estimated that the volume of data is doubling every two years, and too many entities have adopted a keep-everything approach to information management.

Given this reality, it's no wonder that sales of cyber insurance are rising. Cyber insurance can fill gaps left by traditional policies and provide a lifeline to entities affected by a breach or security incident. But cyber insurers require prospective insureds to complete detailed applications that address various areas relevant to cyber liability. Among the areas of inquiry are:
  • Records and Information Management -- including identification of the types and volume of sensitive information the company handles. For example, do you handle or store payment card information, intellectual property of others or medical records?
  • Management of Computer Networks -- including security management, intrusion testing, auditing, firewalls, use of third party vendors and encryption.
  • Corporate Policies -- for privacy, information security, use of social media and BYOD (bring your own device), among others. Insurers often ask if the policy was prepared by a qualified attorney and how often it is reviewed and updated. Some insurers require such policies to be attached to the completed application.
  • Employment Issues -- including whether employees go through criminal background checks. Many insurers also ask if the company has a chief privacy officer, chief information officer and chief technology officer.
The following are some basic steps a company can take to better position itself to complete the cyber application and obtain optimal cyber coverage.

Locate Your Data

You can't manage and secure information if you don't know what you have or where it is. Creating a map or inventory of all enterprise information is an invaluable step toward getting your data house in order. Paper records and data stored on inactive media and on mobile devices should not be forgotten.

Delete What You Don't Need

It is estimated that between 60% and 70% of stored information has no business value. Keeping all this useless information is not a sustainable business practice. Disposing of data can reduce storage costs, e-discovery costs and security risks, and improve employee efficiency. Legally defensible deletion of useless information and adoption of a sound record retention and deletion policy are important parts of a successful information management program.

Control Access

Entities should permit access to information, particularly sensitive information, on a need-to-know basis. A large number of data breaches result from employee negligence and disgruntled or rogue employees. Restricting access to sensitive data is an important step toward mitigating that risk.

Improve Policies and Training

Depending on business activities, entities should consider adopting policies that relate to cyber liability, including privacy, record retention and deletion, use of passwords, email and use of social media. Policies should be reviewed by a qualified attorney, updated regularly and enforced. Employee training and retraining is an important component of successful policy implementation. Conducting data breach workshops, where the entity can rehearse its response to a breach incident, can pay big dividends in the event of a breach.
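The "locate your data" step can be prototyped with a crude file scan. The sketch below is a toy illustration only: the two regex patterns and the plain-text file handling are simplified assumptions, and commercial data-discovery tools use far richer detection rules and file-format support.

```python
import os
import re
import tempfile

# Illustrative patterns for sensitive data; real tools use far richer rules.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def inventory(root: str) -> dict:
    """Map each file under `root` to the types of sensitive data found inside it."""
    findings = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file: skip rather than crash the scan
            hits = [label for label, rx in PATTERNS.items() if rx.search(text)]
            if hits:
                findings[path] = hits
    return findings

# Demonstrate on a throwaway directory with one sensitive and one harmless file.
root = tempfile.mkdtemp()
with open(os.path.join(root, "hr_record.txt"), "w") as fh:
    fh.write("Employee SSN: 123-45-6789")
with open(os.path.join(root, "notes.txt"), "w") as fh:
    fh.write("Meeting moved to Tuesday.")
results = inventory(root)
print(sorted(results.values()))  # → [['ssn']]
```

Even a rough map like this answers the questions cyber applications actually ask: what sensitive data types the company holds and where they live.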
Because cyber applications require entities to take a close look at their information management and cyber vulnerabilities, it’s no wonder that a recent Ponemon study found that 62% of surveyed companies report that their ability to deal with security threats improved following the purchase of cyber insurance. Taking the steps outlined above in connection with applying for cyber coverage makes good business sense and can help an entity obtain the best cyber policy to protect itself against growing threats.

Judy Selby


Judy Selby is a principal with Judy Selby Consulting LLC and a senior advisor with Hanover Stone Partners LLC. She provides strategic advice to companies and corporate boards concerning insurance, cyber risk mitigation and compliance, with a particular focus on cyber insurance.

Digital Disruption: Coming to P&C Soon?

CEOs may feel safe in a sleepy industry, but, then, Blockbuster didn't see Netflix or RedBox coming, either.

My wife is a project manager who is responsible for business operations at our local high school. She hired some people this summer to process and distribute new textbooks within the school, but they hadn't finished the job and school was about to open, so she needed someone to come in at the last minute and help get the work done. More specifically, someone who would follow her instructions and would not expect to get paid... so I spent a long Saturday with her at the school, schlepping pallets and boxes of new textbooks to the classrooms, getting everything in place in time for the start of the new school year.

I wasn't happy with the work (the school was hot, the textbooks heavy), and more than once I thought wistfully about Steve Jobs, who, according to biographer Walter Isaacson, had targeted the school textbook business as an "$8 billion a year industry ripe for digital destruction." Targeting textbooks seemed like a good idea to me, because not only are they big and heavy and expensive -- they don't update easily, either. Unfortunately, Jobs didn't live long enough to disrupt the textbook industry, but others are on the same path and, selfishly, I wish them well!

Check out The Object Formerly Known as the Textbook for an interesting look at how textbook publishers and software companies and educational institutions are jockeying for position as textbooks evolve into courseware. Also, As More Schools Embrace Tablets, Do Textbooks Have a Fighting Chance? takes a look at how the Los Angeles Unified School District -- second largest school district in the country -- is equipping students with iPads and delivering textbooks digitally in a partnership with giant book publisher Pearson.
Harvard professor Clayton Christensen, author of The Innovator's Dilemma, is credited with coining the term "disruptive innovation," which he defined as "a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors."

These days, we tend to associate disruptive innovation with a new or improved product or service that surprises the market, especially established, industry-leading competitors, and increases customer accessibility while lowering costs. The notion is appealing, and it makes for exciting business adventure tales featuring scrappy, innovative underdogs overcoming entrenched, clueless market leaders. Of course, disruptive innovation has been happening for a long time, even if it was called something else, but lately technology has made it easier and cheaper for upstart firms to take on industries they think are "ripe for digital destruction."

There are some who think we've gone too far in adopting the disruption mantra. In her recent article The Disruption Machine, Harvard professor and New Yorker staff writer Jill Lepore squinted hard at disruption theory: "Ever since The Innovator's Dilemma, everyone is either disrupting or being disrupted. There are disruption consultants, disruption conferences, and disruption seminars. This fall, the University of Southern California is opening a new program: 'The degree is in disruption,' the university announced."

By the way, USC's Jimmy Iovine and Andre Young Academy for Arts, Technology and the Business of Innovation is, in fact, opening this year and will focus on critical thinking with plans, according to the academy website, to "...empower the next generation of disruptors and professional thought leaders who will ply their skills in a global area." And, yes, that is Dr. Dre's name on the academy!
But there are others who believe we have now entered a decidedly more treacherous innovation environment, one that Josh Linkner in The Road to Reinvention says is forcing companies to systematically and continually challenge and reinvent themselves to survive. His fundamental question is this: "Will you disrupt, or be disrupted?" And Paul Nunes and Larry Downes, who wrote an article for the Harvard Business Review Magazine in 2013 titled Big Bang Disruption (they have a book on the same topic, summarized by Accenture here), warn of a new type of innovation that is more than disruptive -- it's devastating: "A Big Bang Disruptor is both better and cheaper from the moment of creation. Using new technologies...Big Bang Disruptors can destabilize mature industries in record time, leaving incumbents and their supply-chain partners dazed and devastated." Should CEOs be worried? When Mikhail Gorbachev visited Harvard in 2007 and said, “If you don’t move forward, sooner or later you begin to move backward,” he was talking about politics and multilateral nuclear treaties, not companies, but the warning certainly could have been directed at CEOs. That message, refreshed to incorporate the disruptive innovation threats that have emerged since then, seems a bit unsettling: If you run a company and you aren't dedicating resources to continually scanning the marketplace for threats and improving and reinventing your business, if you are instead taking a "business as usual" approach, you are at risk of being marginalized or supplanted by competitors who will bring new products, services, experiences, efficiencies, cost structures and insights to your customers. Maybe not this year, or next year, but sometime soon.  It's not a question of whether it will happen, but when. Thus Linkner's question, restated:  Will you disrupt yourself, or be disrupted by someone else? 
Of course, some industries, like property casualty insurance, may not be high on anyone's "ripe for digital destruction" list, so maybe there's no need for insurance company CEOs to worry. Except perhaps about Google and Amazon. I keep thinking back to Blockbuster CEO Jim Keyes' comments to The Motley Fool in 2008:  "Neither RedBox nor Netflix are even on the radar screen in terms of competition." You know the rest of the story, which illustrates the real-life consequences of an incumbent underestimating and then becoming "dazed and devastated" by a competitor.

Dean Harring


Dean K. Harring retired in February 2013 as the executive vice president and chief claims officer at QBE North America in New York. He has more than 40 years of experience as a senior claims executive with companies such as Liberty Mutual, Commercial Union, Providence Washington, Zurich North America, GAB Robins and CNA.

Is the Big-Name Firm the Best Bet?

Maybe not. Common sense says a company should look for value, as it does with any significant corporate expenditure.

When I moved my securities litigation practice to a regional law firm from "biglaw," I made a bet. I bet that public companies and their directors and officers would be willing to hire securities defense counsel on the basis of value, i.e., the right mix of experience, expertise, efficiency and price -- just as they do with virtually all other corporate expenditures -- and not simply default to a biglaw firm because it is “safer.” My bet certainly was made less risky by the quality of my new law firm (a 135-year-old, renowned firm that has produced past and present federal judges and is full of superior lawyers); by discussions with public company directors, officers and in-house lawyers; by my observations and analyses about the evolving economics of securities litigation defense and settlement; and by my knowledge that I could recruit other talented full-time securities litigators to join me in my new practice.  But I was still making a bet. Well, so far, so good -- my experience has confirmed my belief. So, too, did a recent article titled, “Why Law Firm Pedigree May Be a Thing of the Past,” on the Harvard Business Review Blog Network, reporting on scholarship and survey results indicating that public companies are increasingly willing to hire firms outside of biglaw to handle high-stakes matters. The HBR article frames the issue in colorful terms: "Have you ever heard the saying: 'You never get fired for buying IBM?' Every industry loves to co-opt it; for example, in consulting, you’ll hear: “You never get fired for hiring McKinsey.” In law, it’s often: “You never get fired for hiring Cravath.” But one general counsel we spoke with put a twist on the old saying, in a way that reflects the turmoil and change that the legal industry is undergoing. Here’s what he said: 'I would absolutely fire anyone on my team who hired Cravath.' 
While tongue in cheek, and surely subject to exceptions, it reflects the reality that there is a growing body of legal work that simply won’t be sent to the most pedigreed law firms, most typically because general counsel are laser-focused on value, namely quality and efficiency." The HBR article reports that a study of general counsels at 88 major companies found that “GCs are increasingly willing to move high-stakes work away from the most pedigreed law firms (think the Cravaths and Skaddens of the world) … if the value equation is right.  (Firms surveyed included companies like Lenovo, Vanguard, Shell, Google, Nike, Walgreens, Dell, eBay, RBC, Panasonic, Nestle, Progressive, Starwood, Intel and Deutsche Bank.)” The article reports on two survey questions. The first question asked, “Are you more or less likely to use a good lawyer at a pedigreed firm (e.g. AmLaw  20 or Magic Circle) or a good lawyer at a non-pedigreed firm for high stakes (though not necessarily bet-the-company) work, assuming a 30% difference in overall cost?” The result: 74% of GCs answered that they are less likely to use a pedigreed firm, and 13% answered the “same.”  Only 13% responded that they are more likely to use a pedigreed firm than other firms. The second question asked, “On average, and based on your experiences, are lawyers at the most pedigreed, 'white shoe' firms more or less responsive than at other firms?” The result:  57% answered that pedigreed firms are less responsive than other firms, and 33% answered they are the “same.”  Only 11% responded that pedigreed firms are more responsive than other firms. 
The survey results ring true and are reinforced by other recent scholarship and analysis on the issue, including a Wall Street Journal article titled “Smaller Law Firms Grab Big Slice of Corporate Legal Work” and an article featured on www.law.com’s Corporate Counsel blog titled “In-House Counsel Get Real About Outside Firm Value.” As all three articles emphasize, skyrocketing legal fees are a notorious problem. And corporate executives are increasingly becoming attuned to this issue. Indeed, during the in-house counsel panel discussed in the Corporate Counsel article, a general counsel noted that in explaining outside counsel costs to the CEO and CFO of his company, “it’s very, very difficult … to say why someone should [bill] over $1,000 per hour . . . It just doesn’t look good.” The problem is especially acute in securities class action defense, in which the defense is largely dominated by biglaw firms with high billing rates and a highly leveraged structure (i.e. a high associate-to-partner ratio), which tends to result in larger, less-efficient teams. Now, as the economy has forced companies to be more aware of legal costs, including the fact that using a biglaw firm often results in prohibitively high legal fees, it is unsurprising that companies are increasingly turning to midsize firms. According to the WSJ article, midsize firms have increased their market share from 22% to 41% in the past three years for matters that generate more than $1 million in legal bills. Indeed, both Xerox’s general counsel and Blockbuster’s general counsel advocated that companies control legal costs by using counsel in cities with lower overhead costs. Some companies, and many law firms, see securities class actions as a cost-insensitive type of litigation to defend: The theoretical damages can be very large; the lawsuits assert claims against the company’s directors and officers; and the defense costs are covered by D&O insurance. 
But these considerations rarely, if ever, warrant a cost-insensitive defense. Securities class actions are typically defended and resolved with D&O insurance. D&O insurance limits of liability are depleted by defense costs, which means that each dollar spent on defense costs decreases the amount of policy proceeds available to resolve the case. At the end of a securities class action, a board will very rarely ask, “Why didn’t we hire a more expensive law firm?” Instead, the question will be, “Why did we have to write a $10 million check to settle the case?” Few GCs would want to have to answer:  “because we hired a more expensive law firm than we needed to.” That takes us to the heart of the HBR article: “Do we need to hire an expensive law firm?” After all, in a securities class action, the theoretical damages can be very large, often characterized as “bet the company,” and the fortunes of the company’s directors and officers are theoretically implicated. Certainly, when directors and officers are individually named in a lawsuit, their initial gut reaction may be to turn to biglaw firms regardless of price, if they believe that the biglaw brand name will guarantee them a positive result. Biglaw capitalizes on these fears. But, of course, hiring a biglaw firm does not guarantee a positive result. The vast majority of securities class actions are very manageable. They follow a predictable course of litigation and can be resolved for a fairly predictable amount, regardless of how high the theoretical damages are. And it is exceedingly rare for an individual director or officer to write a check to settle the litigation. Indeed, the biggest practical personal financial risk to an individual director or officer is exhaustion of D&O policy proceeds because defense costs are higher than necessary. 
Lurking behind these considerations are two central questions: “Aren’t lawyers at biglaw firms better?” and “Don’t I need biglaw resources?” “Aren’t lawyers at biglaw law firms better?” Not necessarily. That’s the main point of the GC survey discussed in the HBR article. To be sure, there are excellent securities litigators at many biglaw firms. But the blanket notion that biglaw securities litigators are more capable than their non-biglaw counterparts is false. And it’s not even a probative question when comparing biglaw lawyers to non-biglaw lawyers who came from biglaw. In the WSJ article, Blockbuster’s general counsel, in explaining why his company often seeks out attorneys from more economical areas of the country, pointed out that many of the attorneys in less expensive firms came from biglaw firms. Many top law school graduates and former biglaw attorneys practice at non-biglaw firms, not because they were not talented enough to succeed at a biglaw firm, but for personal reasons, including: a desire to live in a city other than New York, the Bay Area or Los Angeles; to find work-life balance; to have the freedom to design a better way of defending cases; or to develop legal skills at a faster pace than is usually available at a biglaw firm. There obviously is a baseline amount of expertise and experience that is necessary to handle a case well, and there are a number of non-biglaw lawyers in the group of lawyers who meet that standard. One easy way to judge the quality of firms is by reading recently filed briefs of biglaw and midsize firms. While this type of analysis takes more time than simply looking up a lawyer or law firm ranking, it will be the best indicator of the type of work product to expect from a firm. As with all lawyer-hiring decisions, the individual lawyer’s actual abilities, strategic vision for the litigation and attention to efficiency are key considerations. 
A lawyer’s association with a biglaw firm name can be a proxy for quality, but it does not ensure quality. Indeed, the opposite can be true -- by paying for the biglaw expertise and experience of a particularly accomplished senior partner (the partner likely to pitch the business), companies often end up with the majority of the work being done by senior associates and junior partners. A company should consider the impact of the economic realities of biglaw vs. non-biglaw firms. Senior partners at biglaw firms, with higher associate-to-partner ratios, must have a lot of matters to keep their junior partners and associates busy, and thus necessarily spend less time on each matter -- even if they have good intentions to devote personal time to a matter. Biglaw firms’ largest clients and cases, moreover, often demand much of a senior partner’s time, at the expense of other cases. And given the reality that partners practice less and less law the more senior they become, it is fair to question whether they are the right people for the job anyway. In contrast, senior partners at non-biglaw firms typically have fewer people to keep busy and have lower billing rates -- which means that they can spend more time working on their cases, and they spend more time actually practicing law. Further, for smaller and less significant projects that should be handled by associates, and should not require the higher billing rates of partners, biglaw is similarly unable to offer a cost-effective solution for companies. Associates at biglaw firms typically have less hands-on experience than their counterparts at mid-sized firms. In litigation, for example, biglaw associates generally spend their first few years solely on discovery or discrete research projects. The result is that many projects that could be handled by a junior or mid-level associate at a mid-sized firm would have to be handled by a senior associate or junior partner at a biglaw firm. 
So, even putting aside differences in billing rates between a fifth-year biglaw associate and a fifth-year midsize firm associate, going with a biglaw firm typically means that projects are being assigned to attorneys too senior (and accordingly too costly) to be handling the assignments. Don’t I need biglaw resources? There are two primary answers. First, from both a quality and an efficiency standpoint, securities litigation defense is best handled by a small team through the motion-to-dismiss process. Before a court’s decision on the motion to dismiss, the only key tasks are a focused fact investigation and the briefing on the motion to dismiss. As to both, fewer lawyers means higher quality. If a case survives a motion to dismiss, most firms with a strong litigation department will have sufficient resources to handle it capably. That, of course, is something a company can probe in the hiring process. There are cases that necessarily will require a larger team than some midsize firms can provide. However, such cases are rare, and it is often the case that biglaw firms, in an effort to maintain associate hours at a certain level, will heavily staff associates on discovery projects such as document review. While the exceptional case will require a team of more than five or so associates, for the most part discovery can and should be handled most efficiently by a team of contract attorneys supervised by a small team of associates -- or by utilizing new technologies that allow smaller teams to review documents more efficiently and effectively. Second, as reflected in the HBR article’s discussion of GCs’ answers to the second question, there isn’t a correlation between a firm’s pedigree and its responsiveness -- which is a key facet of law firm resources. Indeed, responsiveness is a function of effort, and it stands to reason that non-biglaw firms will make the necessary effort to give excellent client service. 
The bottom line of all this is simply common sense: Within the qualified group of lawyers, a company should look for value -- the right mix of experience, expertise, efficiency and cost -- as it does with any significant corporate expenditure.

Douglas Greene


Douglas Greene is chair of the Securities Litigation Group at Lane Powell. He has focused his practice exclusively on the defense of securities class actions, corporate governance litigation, and SEC investigations and enforcement actions since 1997. From his home base in Seattle, he defends public companies and individual directors and officers in such matters around the United States.

How Private Health Exchanges Can Win

The opportunity is huge for companies to serve employer groups much as the ACA's public healthcare exchanges are serving individuals.

As the various public healthcare exchanges have gained more publicity, employers are increasingly aware of the availability of their private sector counterpart. A legion of brokers, third-party administrators and experienced legacy benefit administrators are striving to reconfigure and brand themselves as a private healthcare exchange (PHX), providing service to employer groups rather than individuals. However, the genuine article is nearly nonexistent. Of the nearly 100 companies that are identified as a PHX, only a few possess the technology, industry knowledge, backing and other qualities necessary to succeed over the long term. How can an investor, carrier or broker evaluate a PHX for partnership and ensure it picks not only a survivor but a winner? There are three essential capabilities any contender must possess.
  • Intuitive shopping experience (i.e. Amazon)
  • Multiple medical carrier and plan options
  • Direct integration of consumer-directed account(s) in both the shopping and enrollment processes
Intuitive shopping

The PHX experience must model other consumer Internet shopping experiences in all aspects to win universal adoption. If a PHX is unable to do this, brokers, HR administrators and other service providers will incur unsustainable, escalating costs while providing little service. A PHX must move to the self-service model of e-commerce. It is unlikely that the insurance industry, so mired in its own protocols, can design such a system on its own. For the PHX industry to thrive, outside experts from e-commerce must be welcomed inside the business to couple their expertise with that of individuals with deep knowledge of the employee benefits sector.

Multiple options

It would seem intuitive for an employer to offer employees a range of national and regional insurance carriers. Yet the health insurance industry has always gravitated toward restricted choice. It is a golden scenario for a carrier to have enrollees choosing exclusively from its options in an electronic marketplace. This leaves brokers in a precarious position. Although they currently control the health insurance marketplace, brokers are vulnerable to the almost certain risk that the current carrier will raise rates; brokers may lose clients or have to abandon the platform and seek another carrier. The problem is further complicated by several factors. Carriers require digitalization to facilitate rating, enrollment, eligibility and billing. Retroactive risk adjustment is often required to account for employee population variables. Finally, and perhaps most importantly, the individual state "exchange shops" mandated for small groups under the Affordable Care Act all have multiple medical carrier options, raising the bar for private healthcare exchanges. High-deductible health plans coupled with health savings accounts are now approaching 50% of plan populations after languishing for years with only 5% to 10% adoption rates. 
Although the reasons for this increase are not entirely clear, the statistic is well-documented, as illustrated in the May 2014 joint study by John Young and Todd Berkley. It is clear that employees making unfiltered decisions are voting with their feet. Consumer-directed accounts (read: health savings accounts) can no longer be treated as just a minor option for early adopters.

Integration of accounts

While the consumer may choose a high-deductible plan, it is quite possible that when the first claim arrives there will not be funds to pay it. It is imperative that the consumer's bank account is enrolled concurrently with enrollment in the medical plan. This does not take place in most situations today. The importance of assessing other mechanisms for providing the consumer liquidity cannot be overstated as a means of ensuring accounts are adequate to pay claims under deductible or co-pay responsibilities.

Conclusion

While there are stiff challenges, an incredible opportunity exists to offer a PHX that is an integrated, superior product that belies the complexity underlying the system it serves.

Robert Anderson


Bob developed the foundation of his skill set during the growth of Anderson & Anderson Insurance Brokers from a boutique firm to a top 50 property and casualty insurance brokerage business in the U.S. Bob attended the Harvard Business School Owner/President Management Program and did his undergraduate work at Claremont McKenna College.

3 Common Errors in Managing Claims

For one, companies often have too many people involved rather than assigning responsibility to a single point person.


No one wants to deal with a property claim. Unfortunately, claims do happen, and that is why you buy insurance. There are right ways and wrong ways to manage a claim -- here are three common mistakes and how to avoid making them:

Too many cooks...


One of the first things you should do after a loss is assign a point person to handle communication and dissemination of information to the insurance company. Often, this role defaults to the risk manager, but she is not always the best choice. Obviously, the risk manager needs to be part of the team, but you need someone who can dedicate a substantial amount of time to the claim. This ensures consistent communication and keeps the insurance company's team from relying on information that has not been vetted.




Not controlling the schedule...


As with most projects, planning and execution are necessary for a successful outcome. It is critical in the claims process to assign responsibility to the policyholder's team members and require that they provide information in a timely manner. This compels the insurance company to provide feedback in a similar fashion. A timeline should be established early on, and the parties should be held to it. For example: claims will be submitted by the fifth day of the month; feedback will be provided by the 15th day of the month; and payment will be received by the end of the month. Scheduling like this can improve cash flow and ensure progress on the claim. Get the parties to commit to this early!

Unreasonable expectations...


It's true that the insurance company is not likely to accept your entire claim, but building up your claim to unrealistic expectations is not the answer. By claiming a "pie-in-the-sky" number, you can hurt your credibility and dramatically slow down or prevent a reasonable settlement. The better approach is to present a reasonable claim that is fully documented. This prepares you to counter the insurance company's rebuttal with confidence. It's reasonable to be aggressive, and, by all means, do not lower your claim in anticipation of pushback from the insurer. Just do not build up the claim to unrealistic totals with the plan to fall back to a lower position -- this gives all the credibility to the insurance company.


William Myers


Bill Myers is a co-founder of RWH Myers. He has more than 30 years of forensic accounting and investigative experience, representing companies across a wide range of industries, including energy and petrochemical, forest products, pharmaceutical, manufacturing, transportation, technology, hospitality, health care, packaging, distribution and retail.


Christopher Hess


Christopher B. Hess is a partner in the Pittsburgh office of RWH Myers, specializing in the preparation and settlement of large and complex property and business interruption insurance claims for companies in the chemical, mining, manufacturing, communications, financial services, health care, hospitality and retail industries.

The C-Suite View on Employer Costs

Panelists at the California Workers Comp & Risk Conference see potential for a major problem with "presumption claims."

An open mic session at the California Workers Comp & Risk Conference in Dana Point featured insurance industry leaders identifying emerging market trends that are important to employers in California. Panelists were: moderator Pamela Ferrandino, national practice leader at Willis North America; Bill Rabl, chief operating officer at ACE Risk Management; Robert Darby, president at Berkshire Hathaway Homestate and former chairman of WCIRB; Duane Hercules, president at Safety National; and Michele Tucker, vice president at CorVel. The panelists indicated that their short-term outlook on rates was flat to slightly higher, but not as high as over the last couple of years. For first-dollar accounts (those with no deductible), competition is increasing because there are more carriers entering the California marketplace. For the self-insured and those with large deductibles, the rate tends to matter less than the amount of risk retained by the employer, because the goal of these loss-sensitive programs is for the carrier to cover only unusual claims such as catastrophic injuries. Managing medical costs also continues to be a challenge. Opioids are still driving costs, so there must be an aggressive pharmacy management program in place. The industry is starting to see complications such as organ damage arise from opioid abuse. This could become a cost driver. Almost half the opioids in California are dispensed by physicians, so it may be necessary to address this issue legislatively, as other states have done. Predictive analytics are becoming increasingly important in the workers’ compensation industry. Some third-party administrators (TPAs) and carriers are doing excellent work in using psychosocial questions to identify issues that could complicate claims handling and increase costs. This allows them to intervene and devote additional resources to these claims. 
Analytics are also useful in the pricing process to assist carriers in identifying accounts that are performing above and below average and trends related to them. Municipalities face significant, long-tail impact from presumption claims (for diseases that have uncertain origins but that may be presumed to have been caused by an occupation). Defending against these claims is extremely difficult, and, once accepted, the claims have a tendency to expand. Claims for high blood pressure can eventually morph into claims for advanced heart disease or a heart attack. In many municipalities, a large percentage of police officers and firefighters retire under presumption claims. There are currently bills sitting on the governor’s desk that would expand presumption laws in California, including one bill that would create presumptions for certain healthcare workers in the private sector. If these bills are signed, they will increase California municipalities’ workers’ compensation costs even more. Finally, panelists were asked what they expect the key issues will be three years from now. Panelists predicted that mobile technology and the ability to communicate with injured workers will advance through apps that help with early intervention. They also expect to see an increased focus on wellness to address co-morbidities. Finally, everyone anticipates that within three years we will be talking about yet another California workers’ compensation reform bill and the continued expansion of presumption laws.