
How to Measure Data Breach Costs?

A dispute between the Ponemon Institute and Verizon over how to estimate the value of a data breach may complicate the calculations.

Businesses typically have a hard time quantifying potential losses from a data breach because of the myriad factors involved. A recent disagreement between Verizon and the Ponemon Institute about the best way to estimate breach losses could make that job a little harder.

For some time, Ponemon has used a cost-per-record measure to help companies and insurers get an idea of how much a breach could cost them, and its estimates are widely used. The institute recently released its latest numbers, showing that the average cost of a data breach has risen from $3.5 million in 2014 to $3.8 million this year, with the average cost per lost or stolen record going from $145 to $154. The report, sponsored by IBM, showed that per-record costs have jumped dramatically in the retail industry, from $105 last year to $165 this year. The cost was highest in the healthcare industry, at $363 per compromised record. Ponemon has released similar estimates for the past 10 years.

But, according to Verizon, organizations trying to estimate the potential cost of a data breach should avoid using a pure cost-per-record measure. ThirdCertainty spoke with representatives of both Verizon and Ponemon to hear why they think their methods are best.

Verizon’s Jay Jacobs

Ponemon’s measure does not work well with data breaches involving tens of millions of records, said Jay Jacobs, Verizon data scientist and an author of the company’s latest Data Breach Investigations Report (DBIR). When Verizon applied the cost-per-record model to breach-loss data obtained from 191 insurance claims, Jacobs said, the numbers it got were very different from those released by Ponemon. Instead of hundreds of dollars per compromised record, his math turned up an average of 58 cents per record. Why the difference?
With a cost-per-record measure, the method is to divide the sum of all losses stemming from a breach by the total number of records lost. The issue with this approach, Jacobs said, is that cost per record tends to be higher for small breaches and drops as breach size increases. Generally, the more records a company loses, the more it is likely to pay in associated mitigation costs. But the cost per record itself tends to come down as the breach grows, because of economies of scale, he said. Many per-record costs associated with a breach, such as notification and credit monitoring, drop sharply as the volume of records increases. When costs are averaged across millions of records, per-record costs fall dramatically, Jacobs said. For massive breaches in the range of 100 million records, the cost can drop to pennies per record, compared with the hundreds and even thousands of dollars that companies can end up paying per record for small breaches. “That’s simply how averages work,” Jacobs said. “With the megabreaches, you get efficiencies of scale, where the victim is getting much better prices on mass-mailing notifications” and most other contributing costs. Ponemon’s report does not reflect this because its estimates cover only breaches involving 100,000 records or fewer, Jacobs said. The estimates also include hard-to-measure costs, such as downtime and brand damage, that don’t show up in insurance claims data, he said. An alternative is to apply a more statistical approach to the available data and develop estimated average loss ranges for different-size breaches, Jacobs said. While breach costs increase with the number of records lost, not all increases are the same. Several factors can cause costs to vary, such as how robust a company’s incident response plans and its pre-negotiated contracts for customer notification and credit monitoring are, Jacobs said.
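Jacobs’ point about averages can be illustrated with a toy model. This is a hedged sketch, not either party’s actual method: the fixed-cost, per-record and scaling parameters below are invented purely to show why per-record averages shrink as breach size grows.

```python
# Toy model (hypothetical parameters, not Ponemon's or Verizon's data):
# total breach cost = fixed incident costs + per-record costs that
# grow sub-linearly, i.e. with economies of scale.

def breach_cost(records, fixed_cost=50_000, per_record=2.0, scale=0.6):
    """Illustrative total cost of a breach of `records` records."""
    return fixed_cost + per_record * records ** scale

for n in (1_000, 100_000, 10_000_000, 100_000_000):
    total = breach_cost(n)
    print(f"{n:>11,} records: ${total:>12,.0f} total, ${total / n:,.4f} per record")
```

Dividing total cost by record count makes the small breach look expensive per record and the megabreach look cheap, which is exactly the averaging effect Jacobs describes.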
Companies might want to develop a model that captures these variances in costs as completely as possible and express potential losses as an expected range rather than as per-record numbers. Using this approach on the insurance data, Verizon has developed a model that, for example, lets it say with 95% confidence that the average loss for a breach of 1,000 records will come in at between $52,000 and $87,000, with an expected cost of $67,480. Similarly, the expected cost for a breach involving 100 records is $25,450, but average costs could range from $18,120 to $35,730. Jacobs said this model is not perfectly accurate because of the many factors that affect breach costs, and as the number of records breached increases, the overall accuracy of the predictions begins to decrease. Even so, he said, the approach is more scientific than averaging costs and arriving at per-record estimates.

Ponemon’s Larry Ponemon

Larry Ponemon, chairman and founder of the Ponemon Institute, stood by his methodology and said the estimates are a fair representation of the economic impact of a breach. Ponemon’s estimates are based on actual data collected from individual companies that have suffered data breaches, he said. They consider all the costs a company can incur in a data breach, drawing on more than 180 cost categories in total. By contrast, the Verizon model looks only at the direct costs of a data breach collected from a relatively small sample of 191 insurance claims, Ponemon said. Such claims often provide an incomplete picture of the true costs incurred by a company in a data breach, and the claim limits often are smaller than the actual damages suffered by an organization, he said. “In general, the use of claims data as a surrogate for breach costs is a huge problem, because it underestimates the true costs” significantly, Ponemon said.
Verizon’s use of logarithmic regression to arrive at its estimates is also problematic, he said, because of the small sample size and the fact that the data was not derived from a scientific sample. Ponemon said the costs of a data breach are linearly related to the size of the breach: Per-record costs come down as the number of records increases, but not to the extent portrayed by Verizon’s estimates. “I have met several insurance companies that are using our data to underwrite risk,” he said.
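As a sketch of the statistical approach Jacobs describes (and Ponemon critiques), the following fits an ordinary least-squares line in log-log space and reads off an expected cost for a given breach size. The data is synthetic and the intercept, slope and noise level are invented; this is not Verizon’s model or its claims data.

```python
import math
import random

random.seed(7)

# Synthetic "claims" sample of 191 breaches (the DBIR's sample size):
# cost grows sub-linearly with record count, plus noise. The intercept
# (3.0), slope (0.6) and noise level (0.3) are arbitrary choices.
claims = []
for _ in range(191):
    log_records = random.uniform(1, 8)          # 10 to 100M records
    log_cost = 3.0 + 0.6 * log_records + random.gauss(0, 0.3)
    claims.append((log_records, log_cost))

# Ordinary least squares in log-log space, closed form.
n = len(claims)
mean_x = sum(x for x, _ in claims) / n
mean_y = sum(y for _, y in claims) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in claims)
         / sum((x - mean_x) ** 2 for x, _ in claims))
intercept = mean_y - slope * mean_x

records = 1_000
expected = 10 ** (intercept + slope * math.log10(records))
print(f"Expected cost for {records:,} records: ${expected:,.0f}")
```

A production model would also report a confidence band around the fit, which is how Verizon arrives at ranges such as $52,000-$87,000 for a 1,000-record breach.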

Byron Acohido


Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Risk Management for Human Capital

Human capital is our biggest asset, and programs for leadership development can help employers attract and keep great people.

A contractor’s most important resource, and one of its leading costs, is its employees. By investing in employee, supervisory and leadership development programs, those in construction and facilities management (CFMs) can expect positive ROI and other measurable outcomes in both risk management and human capital. This strategy combines organizational development practices with human capital risk management to protect a company’s bottom line.

What Is Human Capital Risk Management?

Defined as leveraging human resource assets to achieve an organization’s strategic and operational goals, human capital risk management implies the following realities for CFMs to consider:
  • Human capital is a tangible asset
  • Human capital yields tangible and intangible results
  • Human capital can generate a positive or negative rate of return
  • Human capital risk management can create a sustainable competitive advantage
Benefits & Consequences

There are numerous benefits to leveraging human capital risk management strategies, and serious consequences for failing to manage human capital risks effectively. The categories of human capital costs include salaries, health and retirement benefits, workers’ comp and other required insurance costs (e.g., state and federal unemployment taxes). Other possible human capital costs stem from losses attributable to unsuccessful human capital risk management practices, including: fraud and internal theft; absenteeism; substance abuse; and the costs of incidents, accidents and injuries, including workers’ comp losses and resulting third-party liabilities. These costs can be affected by the type of contractor, where the contractor (or project) is located, whether the contractor is union or merit shop and other variables.

The Shift from HR to Talent Management

Two talent pipeline concerns are prevalent in the industry: the looming mass exodus of Baby Boomers from the construction workforce, and concerns about how to engage Millennials long enough to develop their skills and prepare them for future leadership roles. Today, senior business leaders are looking to the HR function to provide innovative solutions to attract, retain and grow their talent. The evolution of HR to a talent management model focuses on processes leading to organizational development.
As a result, the modern HR department is responsible for seven fundamental functions:
  1) Compliance – Ensure regulatory and legal compliance
  2) Recruitment – Find a work force
  3) Employee Relations – Manage a work force
  4) Retention – Maintain a work force
  5) Engagement – Build an engaged work force
  6) Talent Development – Create a high-performing work force
  7) Strategic Leadership – Plan for a future work force
Investing in human capital makes good business sense, especially considering the costs to recruit, onboard and train a new employee. Not only are employment advertising and recruiting costly, but there are also other adverse impacts on the business. Work previously done by the exiting employee still needs to be completed, so it falls to teammates and the supervisor. A new employee typically does not reach full productivity until at least four to six months into the new role. In total, turning over one employee costs at least six months of lost productivity.

The Link Between Employee Engagement & Business Performance

Engaged employees want both themselves and the company to succeed. However, companies often focus only on employee satisfaction, which can lead to complacency and a sense of entitlement. Employee engagement is frequently defined as the discretionary effort put forth by employees – going above and beyond to make a difference in their work. Discretionary effort is the extra effort employees want to give because of the emotional commitment they have to their organization. Unlocking employee potential to drive high performance results in business success. However, according to research by the Employee Engagement Group, 70% of all employees, across all industries, are disengaged. Employees with lower engagement are four times more likely to leave their jobs than highly engaged employees. And disengaged managers are three times more likely to have disengaged employees.
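The turnover arithmetic above can be made concrete with a back-of-the-envelope calculator. Every dollar figure and the productivity ramp below are hypothetical placeholders, not numbers from the article.

```python
# Hypothetical turnover-cost calculator: hiring outlays plus the lost
# productivity while a replacement ramps up to full speed.

def turnover_cost(annual_salary, recruiting=8_000, onboarding=3_000,
                  ramp_months=6, ramp_productivity=0.5):
    """Estimated cost of replacing one employee (all inputs assumed)."""
    monthly_salary = annual_salary / 12
    # e.g. six months at 50% productivity = three months of salary lost
    lost_productivity = ramp_months * monthly_salary * (1 - ramp_productivity)
    return recruiting + onboarding + lost_productivity

print(f"${turnover_cost(60_000):,.0f}")  # replacing a $60k/yr employee
```

With these assumed inputs, replacing a $60,000-per-year employee costs about $26,000 before counting teammates’ and supervisors’ extra workload.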
Research shows employees become more engaged when business leaders are trusted, care about their employees and demonstrate competence. By working to engage their employees, contractors can improve their productivity, innovation and customer service. They can reduce incident rates and decrease voluntary attrition. One of the earliest links between employee satisfaction and business performance appeared in First, Break All the Rules: What the World’s Greatest Managers Do Differently, which includes a cross-industry study demonstrating a clear link among four business performance outcomes: productivity, profitability, employee retention and customer satisfaction. The organizations that ranked in the top quartile of that study reported these performance outcomes associated with increased employee engagement:
  • 50% more likely to have lower turnover
  • 56% more likely to have higher-than-average customer loyalty
  • 38% more likely to have above-average productivity
  • 27% more likely to report higher profitability
Recognizing and acting on the correlation between engaged employees and business performance will directly affect the bottom line. Some strategies employers can implement to increase employee engagement include:
  • Focus on purpose and values vs. policies and procedures, an emphasis that has helped companies outperform their competitors by as much as six times.
  • Encourage empowerment and innovation, then reinforce and reward the right behaviors.
  • Unleash the flow of information and ensure individuals have a clear understanding of how their particular job contributes to the company’s strategy and mission.
  • Understand and demonstrate that work/life balance is important.
Developing Sustainable Leadership & Human Capital Strategies

Many organizations are hyperfocused on implementing training programs and processes. However, training should not be the only activity. Effective human capital management demands forward thinking and strategic planning about how contractors can engage their human resources to drive the business into the future. A spectrum of sustainable employee, supervisory and leadership development strategies includes orienting/onboarding, performance reviews and developmental plans, coaching/mentoring, job rotation and cross-training, 360-degree feedback surveys, defined career paths, work/life balance and competency assessment.

Research & Connect With Peers

Developing a sustainable human capital development program can seem overwhelming, but it does not need to be. Reach out and connect with peers and subject matter experts to identify and share best practices and challenges. Many available resources can be tailored or adapted to meet your business needs.

Define & Align Sustainable Long-Term Human Capital Strategies

It is essential not only to align human capital strategies with core business strategies but also to review them continually to ensure long-term sustainability and to address areas for development and improvement. Connecting these areas of focus will ensure a consistent vision is communicated and executed throughout the organization. To gain a better understanding of your company’s human capital strategic thinking and planning, conduct a needs assessment or gap analysis. Based on the results, a human capital action plan can be developed to guide your company’s future human capital leadership and investment.

Integrate Human Capital Strategies With Organizational Culture

All human capital strategies should closely align with a company’s intended organizational culture.
The strategies may require a shift in culture, but not so much that it creates implementation barriers. Having a formal rollout and communication plan developed in advance will help prepare employees for the coming change. A variety of communication approaches helps to reach all intended stakeholders and should cover the what, the why and the expected outcomes. To ensure all employees “hear” the message, communicate strategies that outline a clear plan and are easy to follow, using creative visual and auditory media. Examples include interactive meetings to communicate coming changes, postcards with graphics that present the message, e-mails that are fun and positive, conference calls so people can participate regardless of location, as well as podcasts, webinars, Skype, etc.

Implement Talent Review & Succession Planning

To create a culture of learning and development, contractors should include all employees in their talent development practices rather than focus only on preconceived “high potentials.” Through an effective talent review process, managers can determine the future potential and developmental needs of all employees. Effective talent review discussions will unveil high-potential employees, which will help populate employee development and succession plans. True high potentials should be given stretch goals to accomplish throughout the year to aid in assessing and developing their readiness for future roles.

Everyone Is a Leader

At the end of the day, it is not realistic to expect companies to provide the same training to all employees. However, it is important to remember that everyone is a leader in what they do. Setting these expectations better prepares employees for future leadership roles and helps build accountability across the company. Importantly, not all leadership competencies and behaviors will apply to every position.
However, by consistently applying higher performance expectations across the organization, employees who were not previously considered high potentials may begin to excel and even surpass previously identified potential levels. You never know when a new rock-star employee will emerge!

Case Study: Lakeside Industries’ Annual Leadership Conference

Lakeside Industries, Issaquah, WA, is a third-generation family-owned business operating for more than 60 years. A hot mix asphalt producer and paving contractor with 20 locations in Washington, Oregon and Idaho, Lakeside Industries has a total of 625 employees and is signatory to various locals of three labor unions: Laborers, Operating Engineers and Teamsters. The vision of the company’s third-generation president, Michael Lee, is to “attain exceptional performance in everything we do.” In this case, “exceptional” has been further defined as aspiring to “world-class” performance. He says:

"Several years ago, we realized the need to invest in our leaders. We know that effective leaders translate to improved quality, employee engagement, better communication, fewer incidents, higher production, etc. Each of our 12 divisions operates as an individual entity with its own crews, shops, plants and fleets. Geography and diversity produce challenges related to training.

"We started with two groups. Managers and PMs were in one group, and superintendents were in the other. Each group met once a year locally to share ideas, procedures and challenges.

"We also used this time to conduct leadership training. Sometimes instruction was internal, and sometimes we brought in external experts. While this was a great start, we knew we needed more consistency communicating company objectives, ethics and expectations. Many of our foremen never had any formal leadership training.

"So, for the past few years, we’ve had one annual meeting that includes every employee in a leadership position.
Managers, PMs, superintendents, foremen and anyone who supervises another employee is invited; about 175 people attend annually. To remove distractions, we hold the three-day meeting in Denver, CO.

"Each year we decide which company goals are our top priorities. We bring in a speaker to communicate those goals and to motivate and train our leaders.

"A very popular component is the breakout sessions. All PMs and superintendents meet in groups, as do paving foremen, project superintendents, traffic control supervisors and so on. There are usually 10-12 breakout groups, conducted by a facilitator in a roundtable format, to address issues specific to their positions. We have also conducted breakout sessions by division. It’s an opportunity for division leaders to communicate outside of their daily busy environments and set goals for the coming year. We ensure training is interactive and effective. There is also time for relationship building with recreational activities.

"An important component of this concept is follow-up. It’s essential to repeat and reinforce what was learned when we return home to our busy routines. HR and risk management/safety work with division managers to integrate learned concepts into daily operations. Key learning points are communicated to all of our employees.

"Our vision is for the entire company – from divisional and departmental managers to field staff – to understand and implement our goals and expectations. We want all employees on the same ship, sailing in the same direction, and we work on this all year.

"We started with the goal of training effective leaders, but we’ve unexpectedly achieved so much more. There is improved communication among peer groups including:
  • we have innovation, new lines of communication, collaboration and lasting relationships;
  • our leaders are now united and understand the company’s vision; and
  • our leaders make better decisions and communicate more effectively, resulting in more engaged employees, improved quality, and what we call safe production.
"The bottom line: This leadership conference is absolutely worth the investment."

Conclusion

As the construction labor market tightens because of demographic, societal and industry shifts, finding and keeping skilled workers will become increasingly challenging. Progressive workforce development strategies can differentiate contractors as employers of choice. CFMs who think strategically recognize that employee, supervisory and leadership development programs, processes and practices can provide a competitive advantage. Investments in human capital yield tangible and intangible gains that improve productivity, quality, risk, safety and financial performance. This should be neither unexpected nor surprising: After all, people are our greatest asset.

This article was co-written by Tana Blair and Tammy Vibbert. Tana Blair is responsible for organizational and leadership development at Lakeside Industries in Issaquah, WA. She can be reached at tana.blair@lakesideindustries.com. Tammy Vibbert is the director of human resources at Lakeside Industries in Issaquah, WA. She can be reached at tammy.vibbert@lakesideindustries.com.

Copyright © 2015 by the Construction Financial Management Association. All rights reserved. This article first appeared in CFMA Building Profits. Reprinted with permission.

Calvin Beyer


Cal Beyer is the vice president of Workforce Risk and Worker Wellbeing. He has over 30 years of safety, insurance and risk management experience, including 24 of those years serving the construction industry in various capacities.

A Surprising Health Risk: Loneliness

More people are professing loneliness in their lives, and more evidence is piling up that loneliness, like dissatisfaction in life, is a killer.

Loneliness is both sad and a major health risk. More and more people are professing loneliness in their lives, and more and more evidence is piling up that loneliness, like dissatisfaction in life, is a killer. Here are some personal observations:
  • Why do many people have so few friends as they age? Maintaining long-term friendships takes a lot of work and investment of time. Don’t let your career stand in the way. Don’t wait for someone to befriend you; reach out.
  • Some people have invested their time and energy solely in a spouse, who may predecease them by 25 years, or in children who fly the nest in time.
  • Many people have invested much in work-related friendships, which, while genuine at the time, can wilt almost immediately when they retire.
  • In friendships, one has to give more than he or she takes. Make yourself likable. Who wants to spend time with someone who complains all the time? People like that are often avoided by people around them.
  • Be a good listener.
  • If you’re lonely, try joining something…a church, a book club, a hiking club, anything.
In the end, a true measure of wealth is the number of lifelong friends we have. Having lifelong friends is a joy and a great cure for loneliness.

Tom Emerick


Tom Emerick is president of Emerick Consulting and cofounder of EdisonHealth and Thera Advisors. Emerick’s years with Wal-Mart Stores, Burger King, British Petroleum and American Fidelity Assurance have provided him with an excellent blend of experience and contacts.

Bonds Away: Market Faces Major Shift

Bonds, long a safe haven for investors, will not be for long, as interest rates rise and government issuers and mega-buyers face new risks.

As we are sure you are aware, the financial markets have had a bit of a tough time going anywhere this year. The S&P 500 has been caught in a 6% trading band all year, capped on the upside by a 3% gain and on the downside by a 3% loss. It has been a back-and-forth flurry. We’ve seen a bit of the same in the bond market. After rising 3.5% in the first month of the year, the 10-year Treasury bond had given back its entire year-to-date gain, and then some, as of mid-June. 2015 stands in contrast to the largely upward stock and bond market movement of the past three years. What’s different this year, and what are the risks to investment outcomes ahead? As we have discussed in recent notes, the probability is very high that the U.S. Federal Reserve will raise interest rates this year. We have suggested that the markets are attempting to “price in” the first interest rate increase in close to a decade. We believe this is part of the story of why markets have acted as they have in 2015. But there is a much larger, longer-term issue facing investors, lurking well beyond the short-term Fed interest rate increase to come. Bond yields (interest rates) rest at generational lows and prices at generational highs -- levels never seen before by today's investors. Let’s set the stage a bit, because the origins of this secular issue reach back more than three decades. It may seem hard to remember, but in September 1981 the yield on the 10-year U.S. Treasury bond hit a monthly peak of 15.32%. At the time, Fed Chairman Paul Volcker was conquering long-simmering inflationary pressures in the U.S. economy by raising interest rates to levels no one had ever seen. Thirty-one years later, in July 2012, that same yield on 10-year Treasury bonds stood at 1.53%, a 90% decline in coupon yield, as Fed Chairman Ben Bernanke attempted to slay the perception of deflation with the lowest interest rates investors had ever experienced.
This 1981-to-present period encompasses one of the greatest bond bull markets in U.S. history, and certainly in our lifetimes. Prices of existing bonds rise when interest rates fall, and vice versa. So from 1981 through the present, bond investors have been rewarded with coupon yield (continuing cash flow) and rising prices (price appreciation via continually lower interest rates). Remember, this is what has already happened. As always, what is important to investors is not what happened yesterday but what they believe will happen tomorrow. And although it will not occur instantaneously, the longer-term direction of interest rates globally has only one road to travel -- up. The key questions are how fast and how high. This is important for a number of reasons. First, for decades, bond investments have been a “safe haven” destination for investors during periods of equity market and general economic turmoil. That may no longer be the case as we look ahead. In fact, with interest rates at generational lows and prices at all-time highs, forward bond market price risk has never been higher. An asset class that has always been considered safe is safe no longer, regardless of what happens to stock prices. We need to remember that so much of what has occurred in the current market cycle has been built on “confidence” in central bankers globally. Central bankers control very short-term interest rates (think money market fund rates). Yes, quantitative easing allowed these central banks to print money and buy longer-maturity bonds, influencing longer-term yields for a time. That’s over for now in the U.S., although it is still occurring in Japan and Europe. So it is very important to note that, over the last five months, 10-year U.S. Treasury yields have moved from 1.67% to close to 2.4%, and the Fed has not lifted a finger. In Germany, the yield on the 10-year German Government Bund was roughly 0.05% a month ago.
As of this writing, it has risen to 1%. That’s a 20-fold increase in the 10-year interest rate inside of a month. For a global market that has risen at least in part on the back of confidence in central bankers, the volatility we have seen lately in longer-term global bond yields implies investors may be concerned that central bankers are starting to “lose control” of their respective bond markets. Put another way: Investors may be starting to lose confidence that central bank policies will remain supportive of bond investments -- not a positive in a cycle where this buildup of confidence has been such a meaningful support to financial asset prices. You may remember that what caused then-Fed Chairman Paul Volcker to drive interest rates up in the late 1970s was embedded inflationary expectations on the part of investors and the public at large. Volcker needed to break that inflationary mindset. Once inflationary expectations take hold in any system, they are very hard to reverse. A huge advantage for central bankers in being able to “print money” in very large magnitude in the current cycle has been that inflationary expectations have remained subdued. In fact, consumer price indexes (CPI) as measured by government statistics have been very low in recent years. When central bankers started to print money, many worried this currency debasement would lead to rampant inflation. Again, that has not happened. We have studied historical inflationary cycles and have not been surprised by outcomes in the current cycle in the least. For heightened levels of inflation to take hold sustainably, wage inflation must be present. Of course, in the current cycle, continued labor market pressures have resulted in the lowest wage growth of any cycle in recent memory. But is this about to change at the margin? The chart below shows that wage growth may be on the cusp of rising to levels we have not yet seen in the current cycle.
Good for the economy, but not so good for keeping inflationary pressures as subdued as they have been since 2009. You may be old enough to remember that bond investments suffered meaningfully in the late 1970s as inflationary pressures rose unabated. We are not expecting a replay of that environment, but the potential for rising inflationary expectations in a generational low-interest-rate environment is not a positive for what many consider “safe” bond investments. Quite the opposite. As we have discussed previously, total debt outstanding globally has grown very meaningfully since 2009. In this cycle, it is governments that have been the credit expansion provocateurs, via the issuance of bonds. In the U.S. alone, government debt has more than doubled, from $8 trillion to more than $18.5 trillion, since 2009. We have seen similar circumstances in Japan, China and parts of Europe. Globally, government debt has grown by close to $40 trillion since 2009. It is investors, and in part central banks, that have purchased these bonds. What has allowed this to occur without consequence so far is that central banks have held interest rates at artificially low levels. Although debt levels have surged, interest cost in 2014 was not much higher than in 2007, 2008 and 2011. Of course, this was accomplished by the U.S. Fed dropping interest rates to zero. The U.S. has been able to issue one-year Treasury bonds at a cost of 0.1% for a number of years. Zero percent interest rates in many global markets have allowed governments to borrow more, both to pay off old loans and to finance continued expanding deficits. In late 2007, the yield on 10-year U.S. Treasuries was 4%-5%. In mid-2012, it briefly dropped below 1.5%. So here is the issue to be faced in the U.S., and we can assure you that conceptually identical circumstances exist in Japan, China and Europe. At the moment, the total cost of U.S. government debt outstanding is approximately 2.2%.
This number comes directly from the U.S. Treasury website and is documented monthly. At that level of debt cost, the U.S. paid approximately $500 billion in interest last year. In a rising-interest-rate environment, this number goes up. At just 4%, our interest costs alone would approach $1 trillion -- at 6%, probably $1.4 trillion in interest-only costs. It's no wonder the Fed has been so reluctant to raise rates. Conceptually, as interest rates move higher, government balance sheets globally will deteriorate in quality (higher interest costs). Bond investors need to be fully aware of and monitoring this set of circumstances. Remember, we have not even discussed the enormity of off-balance-sheet government liabilities/commitments such as Social Security costs and the exponential Medicare funding to come. Again, governments globally face very similar debt and social cost spirals. The "quality" of their balance sheets will be tested somewhere ahead. Our final issue of current consideration for bond investors is one of global investment concentration risk. Just what has happened to all of the debt issued by governments and corporations (using the proceeds to repurchase stock) in the current cycle? It has ended up in bond investment pools. It has been purchased by investment funds, pension funds, the retail public, etc. Don Coxe of Coxe Advisors (long-tenured on Wall Street and an analyst we respect) recently reported that 70% of total bonds outstanding on planet Earth are held by 20 investment companies. Think of the very large bond houses, like PIMCO and BlackRock. These pools are incredibly large in terms of dollar magnitude. You can see the punchline coming, can't you? If these large pools ever needed to sell (or were instructed to sell by their investors) to preserve capital, sell to whom becomes the question. These are behemoth holders that need a behemoth buyer.
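The debt-service arithmetic above can be sketched in a few lines of Python. This is an illustrative calculation on the article's $18.5 trillion debt figure only; the article's own dollar estimates ($500 billion today, ~$1 trillion at 4%, $1.4 trillion at 6%) run somewhat higher because they also fold in continued debt growth.

```python
# Rough annual debt-service cost: total debt times the average rate paid on it.
# DEBT is the ~$18.5 trillion U.S. figure cited above; rates are the article's
# 2.2% current average and its 4% / 6% what-if scenarios.

def annual_interest(debt: float, avg_rate: float) -> float:
    """Interest cost per year given total debt and an average rate on it."""
    return debt * avg_rate

DEBT = 18.5e12  # total U.S. government debt outstanding, per the text

for rate in (0.022, 0.04, 0.06):
    cost = annual_interest(DEBT, rate)
    print(f"At {rate:.1%}: ${cost / 1e12:.2f} trillion per year")
```

Even holding debt flat, nearly tripling the average rate nearly triples the interest bill, which is the mechanism behind the "reluctant Fed" point above.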
And as is typical of human behavior, it’s a very high probability a number of these funds would be looking to sell or lighten up at exactly the same time. Wall Street runs in herds. The massive concentration risk in global bond holdings is a key watch point for bond investors that we believe is underappreciated. Is the world coming to an end for bond investors? Not at all. What is most important is to understand that, in the current market cycle, bonds are not the safe haven investments they have traditionally been in cycles of the last three-plus decades. Quite the opposite. Investment risk in current bond investments is real and must be managed. Most investors in today’s market have no experience in managing through a bond bear market. That will change before the current cycle has ended. As always, having a plan of action for anticipated market outcomes (whether they ever materialize) is the key to overall investment risk management.

Brian Pretti

Brian Pretti is a partner and chief investment officer at Capital Planning Advisors. He has been an investment management professional for more than three decades. He served as senior vice president and chief investment officer for Mechanics Bank Wealth Management, where he was instrumental in growing assets under management from $150 million to more than $1.4 billion.

Federal Health Rule Hits Firms for Millions

A change to federal health rules on "embedded MOOP" threatens to cost employers, and employees, hundreds of millions of dollars.

America’s Health Insurance Plans (AHIP) gather for their big meeting in Nashville this week, with many significant issues on the agenda, some of them headline news. For instance, industry insiders are watching closely the Supreme Court’s pending decision this month on King v. Burwell—which could remove health insurance subsidies in states that opted out of Obamacare’s Medicaid expansion. There’s a less well-known but extremely important issue many business leaders want AHIP to tackle this week: “embedded MOOP.” That sounds like perhaps a form of fertilizer that could be used in a garden but actually refers to a, well, variant of fertilizer that Washington is known for producing: Embedded MOOP is a brand new regulation threatening to cost employers and other purchasers hundreds of millions of dollars this year alone. MOOP stands for “Maximum Out of Pocket,” and it refers to the maximum amount your health plan will require you to pay for your health services in a given year—over and above what you contribute to your premiums. After you’ve paid out your deductible and copays and reached the MOOP, your health plan pays 100% of your subsequent bills for the rest of the year. “Embedded MOOP” focuses on the out of pocket maximums applied to family plans. Typically, the MOOP for a family plan is two or three times higher than the MOOP for an individual plan. So, say your plan has a MOOP of $6,000 for individuals and $12,000 for families. You have a hospital stay that costs $50,000, for which your plan pays 80%, so you are responsible for the remaining 20%, or $10,000. If you have an individual plan, you won’t have to pay the full $10,000, because you would hit the maximum out of pocket cap at $6,000. But if you are part of a family plan, you would, because you haven’t hit the family plan maximum of $12,000. That’s how things worked until a couple months ago, according to government directive. 
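The hospital-bill example above reduces to a simple calculation. This is a minimal sketch using the article's numbers ($50,000 bill, 80% plan coinsurance, $6,000 individual and $12,000 family MOOPs); deductibles and copays are ignored for clarity.

```python
# Patient's payment on a bill: the coinsurance share, capped at whatever
# remains of the applicable out-of-pocket maximum (MOOP).

def patient_share(bill: float, coinsurance: float, moop: float,
                  paid_so_far: float = 0.0) -> float:
    """Patient pays the lesser of their coinsurance share or remaining MOOP."""
    share = bill * coinsurance
    return min(share, max(moop - paid_so_far, 0.0))

bill = 50_000
print(patient_share(bill, 0.20, 6_000))   # individual MOOP caps you at $6,000
print(patient_share(bill, 0.20, 12_000))  # family MOOP: the full $10,000 share
```

Under the old reading, a family-plan member hit the $12,000 family cap, so the full $10,000 share was owed; under the new "embedded" reading, each family member gets the $6,000 individual cap, shifting the $4,000 difference onto the plan sponsor.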
But now, for certain kinds of high-deductible health plans, the federal government just issued an ironically named “clarification,” which confusingly reverses those earlier requirements, effective immediately, or maybe effective in 2016--the lawyers watching this say the regulation can be read either way. The “clarification” to federal health rules says that MOOP applies separately to each individual “embedded” in a family plan, so each person covered under a plan has the individual cap. Back to you in your hospital bed with the $50,000 bill: In this new interpretation of federal health rules, you won’t pay more than the $6,000 individual MOOP regardless of whether you are covered under a family plan or an individual plan. Admittedly, a $4,000-plus windfall sounds like good news. Who cares if health plans don’t like it? But here’s the problem: The plan doesn’t pay the $4,000; your employer does--and so do you. AHIP estimates 17.5 million Americans are enrolled in the kind of plans subject to this federal health rule on embedded MOOP. We might reasonably estimate that 3% of them, or about 500,000 people, will encounter a major hospital bill this year. If employers lose thousands of dollars on half of them, or even a quarter of them, there’s not enough room on my calculator for the zeroes in the dollar-figure estimate of loss. The bottom line: Employers will be out hundreds of millions of dollars because federal officials changed the rules mid-game. Employers have to cover this loss right now, so many are hastily redrafting their HR budgets as you read this. The money will come from employee premiums, lower wage increases, reduced benefits or creating fewer jobs. And even though the new regulation sounds friendly to families on its face, in fact it makes already expensive family coverage even less affordable, because family premiums are likely to skyrocket with this new rule in place. 
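The back-of-the-envelope exposure estimate above can be reproduced directly. The 17.5 million enrollment figure and 3% major-bill rate are the article's; the $4,000 per-case shift is the illustrative windfall amount from the earlier hospital-bill example, not an actuarial figure.

```python
# Rough aggregate employer exposure from the embedded-MOOP reinterpretation.

ENROLLED = 17_500_000     # AHIP estimate of people in affected plans
MAJOR_BILL_RATE = 0.03    # article's assumed share hitting a major bill
SHIFT_PER_CASE = 4_000    # cost moved from the employee's MOOP to the plan

affected = ENROLLED * MAJOR_BILL_RATE  # roughly 500,000+ people
for fraction in (0.25, 0.50):  # "a quarter of them" ... "half of them"
    exposure = affected * fraction * SHIFT_PER_CASE
    print(f"{fraction:.0%} of cases: ${exposure / 1e6:,.0f} million")
```

Even the conservative quarter-of-cases scenario lands in the hundreds of millions of dollars, which is the article's bottom line.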
This is not the first time lawmakers cavalierly forced business to shoulder a major new healthcare cost. In fact, it’s a tradition. Commercially insured patients pay orders of magnitude more for each individual service than taxpayer-funded payers like Medicare and Medicaid do. That amounts to a subsidy to the tune of hundreds of billions of dollars transferred wholesale to the healthcare system from the workers in America’s economy. Policymakers don’t need to send certificates of appreciation to purchasers for their willingness to pay for the U.S. healthcare system. But, at the very least, government could stop scolding and punishing business for that investment. Alas, in 2018, employers will be hit with the so-called Cadillac tax, an excise tax on purchasers that have the audacity to spend too much on healthcare. And last year, purchasers were admonished by a federal agency for investing in employee wellness programs that they designed explicitly in line with Obamacare. Now, we have the embedded MOOP pummeling of 2015. In the short term, the administration needs to revisit this regulation pronto, and we hope to see AHIP make the case this week. In the long run, it’s time lawmakers treated purchasers’ role in healthcare with less disregard and more common cause.

3 Criticisms of ERM: Justified?

Criticisms of ERM can be justified when a program is executed poorly, but issues are often taken out of the proper context.

A large retailer gets hacked, and customer data is taken, which costs millions in expense and lost revenues. A product recall is perceived to be badly handled, which tarnishes a manufacturer's reputation and seriously erodes revenue, as well as margins. An acquisition fails to produce the expected profit lift and hurts a technology company's share price. These organizations have implemented ERM, and, clearly, ERM has failed. Or has it? Let's look at three criticisms of ERM.

ERM Cannot Identify and Protect Against All Significant Uncertainties

This criticism is fair in the most literal sense only. Even a very robust and well-administered ERM process cannot find every major risk that an organization is subject to, nor can it protect against all risks, whether identified or not. However, without ERM, the ability to identify a majority of significant uncertainties facing an organization is greatly diminished. Not only that, without an ERM approach to risk, the mitigation of known risks is more likely to be addressed silo by silo even when an enterprise-wide solution is necessary. In addition, with ERM, organizations are generally better prepared to rebound from unexpected, unidentified risks that do hit them. For example, ERM organizations typically have very robust business continuity and business recovery plans, have done tabletop exercises or drills that simulate a crisis and have maintained a lessons-learned and special expertise file that can be called upon, as needed. According to a post by Carrier Management, citing RIMS, "A whopping 77% of risk management professionals credit enterprise risk management with helping them spot cyber risks at their companies." These survey results do not suggest that chief risk officers or risk managers, who are responsible for the ERM process, are cyber experts or that all cyber risks can be specifically ascertained.
Rather, the survey suggests that ERM better positions a company to discover cyber risks, just as it does with other categories of risk. If ERM can reduce business uncertainties and surprises by identifying risks and managing them better than other forms of risk management, despite not being able to do so 100% of the time, it has not failed. In fact, it has most probably added great value. Consider a CEO who can avoid even one unnecessary sinking feeling when realizing that a risk that should have been spotted and dealt with has hit the company. How much is it worth to that CEO to prevent that feeling?

ERM Focuses on the Negative Rather Than the Positive

This criticism is not fair in any sense. It requires an upside-down view of ERM. Think about it. In almost any definition of ERM, there is some sort of statement as to the purpose or mission of ERM. The purpose is to better ensure that the organization achieves its strategy and objectives. What could be more positive? By dealing with risks that challenge the ability of the organization to meet its targets, ERM is fulfilling an affirmative and important task. That most risks pose a threat is not disputed. But by removing, avoiding, transferring or lessening threats, organizations have a better chance of succeeding. This is not the only positive result that can emanate from ERM's handling of risk. Often, a thorough examination of a risk will result in opportunities being uncovered. The opportunity could take the form of innovating a product or entering a new market or creating a more efficient workflow. Consider a manufacturer that builds a more ergonomic chair because it has identified a heightened risk of lawsuits arising from some new medical diagnoses of injuries caused by a certain seat design. Or, consider an amusement park that is plagued by its patrons throwing ticket stubs and paper maps on the ground, thereby creating a hazard when wet or covering dangerous holes or obstacles.
Imagine that the company decides to reduce the risk by increasing debris pick-up and offering rewards to patrons for turning in paper to central depositories, then turns it into "clean" confetti sold to party goods manufacturers. These are hypothetical examples, but real-life examples do exist. Some are quite similar to these. Many risk managers, unfortunately, are hesitant to share their success stories in turning risk into a reward. For that matter, many are reluctant to share their successes of any kind. One could speculate why this is so. It may be as simple as not wanting to tempt the gods of chance.

ERM Is Too Expensive

Those who criticize ERM for being too expensive to implement may lack information or perspective. Consider the following questions:
  • Has ERM been in place long enough to produce results?
  • Has the organization started to measure the value of ERM (there are ways to measure it)?
  • Can an organization place a dollar value on avoiding a strategic risk or a loss that does not happen; does it need to?
  • Has the number of surprises diminished?
  • Are there successes along with failures?
  • How much is it worth to enhance the company’s reputation because it is seen as a responsible, less volatile company because of ERM?
  • How efficiently has the ERM process been implemented?
  • Is too much time being spent on selling the concept rather than implementing the concept?
  • Has the process and reporting of ERM results been kept clear and simple?
To answer the criticism of a too expensive process, the following are things that a company can do to make sure the process is cost-effective:
  • Embed the process, as far as feasible, into existing business processes, e.g. review strategic risk during strategic planning, hold ERM committee meetings as part of or right after other routine management meetings, monitor ERM progress during normal performance management reviews, etc.
  • Assign liaisons to ERM in the various business units and functional departments who have other roles that complement risk management.
  • Do not try to boil the ocean; keep the ERM process focused on the most significant risks the company faces.
  • Measure the value that ERM brings, such as reduction in suits or lower total cost of risk or whatever measures are decided upon by management.
In the author's experience with ERM in various organizations, the function tends to be kept very lean (without diminution of its efficacy). If the above suggestions are adopted, along with other economical actions, the costs associated with the process can be kept in balance with the value, or well below it.

Conclusion

It is possible for an ERM process to be poorly executed, and thus deserve criticism. It is also possible for an ERM process to be well-executed and deserve nothing more than continuous improvement. The caution is that no one should expect perfection or suppose that one unanticipated risk that creates a loss denotes a total failure of this enterprise-wide process. Organizations are sometimes faced with situations that are beyond a reasonable expectation of being known or managed. It would be fair to lodge criticism of ERM under certain circumstances; for example, if an organization's ERM process did not reveal a risk that all its competitors recognized as a risk and addressed. But even in that case, perhaps there were reasons to think the risk would not penetrate protections the organization already had in place. Suffice it to say, every process and situation must be evaluated on its own merits and within the proper context.

Donna Galer

Donna Galer is a consultant, author and lecturer. 

She has written three books on ERM: Enterprise Risk Management – Straight To The Point, Enterprise Risk Management – Straight To The Value and Enterprise Risk Management – Straight Talk For Nonprofits, with co-author Al Decker. She is an active contributor to the Insurance Thought Leadership website and other industry publications. In addition, she has given presentations at RIMS, CPCU, PCI (now APCIA) and university events.

Currently, she is an independent consultant on ERM, ESG and strategic planning. She was recently a senior adviser at Hanover Stone Solutions. She served as the chairwoman of the Spencer Educational Foundation from 2006-2010. From 1989 to 2006, she was with Zurich Insurance Group, where she held many positions both in the U.S. and in Switzerland, including: EVP corporate development, global head of investor relations, EVP compliance and governance and regional manager for North America. Her last position at Zurich was executive vice president and chief administrative officer for Zurich's worldwide general insurance business ($36 billion GWP), with responsibility for strategic planning and other areas. She began her insurance career at Crum & Forster Insurance.

She has served on numerous industry and academic boards. Among these are: NC State’s Poole School of Business’ Enterprise Risk Management’s Advisory Board, Illinois State University’s Katie School of Insurance, Spencer Educational Foundation. She won “The Editor’s Choice Award” from the Society of Financial Examiners in 2017 for her co-written articles on KRIs/KPIs and related subjects. She was named among the “Top 100 Insurance Women” by Business Insurance in 2000.

How to Prevent IRS Issues for Captives

Domiciles have no responsibility to consider federal tax issues when licensing captives -- but they should do so anyway.

A regulator of captive insurance is responsible for many aspects of the business of captive insurance companies. He or she must coordinate the application process for obtaining a license, including the financial analysis and financial examination of each captive insurance company. The regulator is also a key marketing person in promoting the domicile as a favorable place to do business, thus fostering economic development for the state. The captive regulator is not, however, a tax adviser. No statute or regulation in any domestic domicile requires an analysis of the potential tax status of the captives under consideration or under regulation. If the application complies with the stated statutory and regulatory requirements, the regulator must favorably consider the application and allow the new company to be licensed as an insurance company under state law. That new insurance company may not, however, be considered an insurance company under federal tax law. The Internal Revenue Service recently listed captives as one of its annual "Dirty Dozen" tax scams, citing "esoteric or improbable risks for exorbitant premiums." And at least seven captive managers (and therefore their clients) have been targeted for "promoter" audits, for allegedly promoting abusive tax transactions. Yet all of these captives received a license from a regulator, mostly in the U.S. Obviously, these regulators did not consider the pricing of the risks to be transferred to the captive, except perhaps at the macro level. Should the domicile care about the potential tax status of licensed captives?
David Provost, Vermont's Deputy Commissioner of Captive Insurance, has said, "We do not license Section 831(b) captives; we license insurance companies." While that statement is technically correct, this paper argues that, with respect to small captives, regulators should care about the tax implications of licenses in extreme cases, consistent, of course, with the laws and regulations under which they operate. Small captives, i.e., those with annual premiums of no more than $1.2 million, can elect under section 831(b) of the Internal Revenue Code to have their insurance income exempt from federal taxation. This provision, combined with certain revenue rulings and case law, creates a strong tax and financial planning incentive to form such a captive insurance company. This incentive can lead to an "over-pricing" of premiums being paid to the new captive, to maximize the tax benefits on offer. The premiums may be "over-priced" relative to market rates, even after being adjusted for the breadth of policy form, the size and age of the insurance company and, in some cases, the uniqueness of the risk being insured by the captive. But "over-priced" in whose eyes? Insurance regulators are usually more concerned with whether enough premium is being paid to a captive to meet its policy obligations. From that perspective, "too much" premium can never be a bad thing. Indeed, captive statutes and regulations generally use the standard of being "able to meet policy obligations" as the basis of evaluating captive applications or conducting financial reviews. And actuarial studies provided with captive applications generally conclude that "…the level of capitalization plus premiums will provide sufficient funds to cover expected underwriting results." These actuarial studies do not usually include a rate analysis, by risk, because none is required by captive statute or regulation.
Small “831(b)” captives, therefore, may easily satisfy the financial requirements set forth in captive statutes and regulations. If, however, the Internal Revenue Service finds on audit that the premiums paid to that captive are “unreasonable,” then the insured and the captive manager may face additional taxes and penalties, and the captive may be dissolved, to the loss of the domicile. And, as has happened recently, the IRS may believe that a particular captive manager has consistently over-priced the risk being transferred to its captives and may initiate a “promoter” audit, covering all of those captives. Such an action could result in unfavorable publicity to the domiciles that approved those captive applications, regardless of the fact that the regulators were following their own rules and regulations to the letter. It is that risk of broad bad publicity that should encourage regulators to temper the rush to license as many captives as possible. There should be some level of concern for the “reasonableness” of the premiums being paid to the captives. One helpful step would be to change captive statutes or regulations to require that actuarial feasibility studies include a detailed rate analysis. Such an analysis would compare proposed premium rates with those of the marketplace and offer specific justifications for any large deviations from market. (Given the competition among jurisdictions for captive business, such a change would only be possible if every domicile acted together, eliminating the fear that a domicile would lose its competitive edge by acting alone.) Absent such a change, however, regulators still have the power to stop applications that do not pass the “smell test.” Most captive statutes require each applicant to file evidence of the “overall soundness” of its plan of operation, which would logically include its proposed premiums. 
If the premiums seem unreasonably high for the risks being assumed, the plan of operation may not be “sound,” in that it might face adverse results upon an IRS audit. Regulators are not actuaries and often have had little or no underwriting experience. They, therefore, could not and should not “nit-pick” a particular premium or coverage. But some applications may be so egregious on their face that even non-insurance people can legitimately question the efficacy of the captive’s business plan. Insurance professionals know from both experience and nationally published studies that the cost of risk for most companies is less than 2% of revenue. “Cost of risk” includes losses not covered by traditional third-party insurance, which are generally the type of losses covered by “small” captive insurance companies. If a captive regulator receives an application in which the “cost” of coverage by that captive is, say, 10% to 12% or more of the revenue of the insured, alarm bells should go off. That captive certainly would have plenty of assets to cover its policy obligations! But in the overall scheme of things, including the real world of taxation, that business plan is not likely “sound.” At that point, the regulator has a choice of rejecting the applicant, requiring a change in the business plan/premiums or demanding additional support for the proposed plan. We are aware of one case in which the captive regulator required the applicant to provide a rate analysis from an independent actuary when he received an application whose premiums did not appear reasonable. A rate analysis is not, of course, a guarantee that the IRS will find the premiums acceptable on audit. No one can expect guarantees, but a properly done rate analysis has a better chance of assuring all the parties that the captive has been properly formed as a real insurance company and not simply as a way to reduce the taxable income of the insured and its owners. 
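The "smell test" described above boils down to a ratio check. The sketch below is purely illustrative: the 2% cost-of-risk rule of thumb and the 10%-of-revenue alarm level come from the passage, but the function, its thresholds and its labels are assumptions, not regulatory standards.

```python
# Screen a captive application by comparing proposed premium to the insured's
# revenue, against the ~2% typical cost-of-risk benchmark cited above.

def premium_smell_test(annual_premium: float, insured_revenue: float,
                       benchmark: float = 0.02, alarm: float = 0.10) -> str:
    """Classify a captive application by its premium-to-revenue ratio."""
    ratio = annual_premium / insured_revenue
    if ratio >= alarm:
        return "alarm"      # e.g. 10%+ of revenue: demand a rate analysis
    if ratio > benchmark:
        return "elevated"   # above typical cost of risk: look closer
    return "in line"

print(premium_smell_test(1_200_000, 10_000_000))   # 12% of revenue
print(premium_smell_test(1_200_000, 100_000_000))  # 1.2% of revenue
```

A $1.2 million premium is 12% of a $10 million insured's revenue (alarm-bell territory in the article's terms) but only 1.2% of a $100 million insured's, which is why a bare premium number tells a regulator little without the revenue context.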
Captive insurance regulators have a big job, particularly as the pace of captive formations increases. To protect the domicile from appearing on the front page of the Wall Street Journal, the regulator must consider all aspects of the proposed captive’s business, including, in extreme cases, its vulnerability to adverse federal tax rulings.

The Thorny Issues in a Product Recall

A product recall can devastate a company's reputation and cut market share -- even if it is handled perfectly and the brand is a great one.

In 1982, people in Chicago began dropping dead from cyanide poisoning, which was linked to Johnson & Johnson's Tylenol in select drug stores. Johnson & Johnson immediately pulled all Tylenol from the shelves of all stores, not just those in Chicago. It was ultimately determined that the product had been tampered with by someone outside of Johnson & Johnson. But the company's aggressive actions produced a legend: The Tylenol scare was chalked up as the case to review for an effective brand-preserving (even brand-enhancing) product recall strategy. In 2011, though, the FDA took the extraordinary step of taking over three Johnson & Johnson plants that produced Tylenol because of significant problems with contamination. This time, Johnson & Johnson could not blame a crazed killer, only itself. A company that should have learned from its own celebrated case study had not retained that knowledge 30 years later. The problem with recalls often isn't the recall itself. In a recall, stores pull the products, and the media helps get the message to those who have already purchased the product to return it for a refund, replacement, repair or destruction. One problem crops up when companies are too slow to move. It was revealed in the press in June 2014 that GM allegedly knew of its ignition switch problems seven years before it recalled the product. The recall that began in February 2014 itself became tortuous, as new models were added almost daily to the list of cars that were in danger of electrical shutdown while in motion. The press, the regulators and, of course, the lawyers pounced on GM for its alleged withholding of information for so long and for the seemingly endless additional recalls of cars affected by the problem. In 2015, regulators called meetings with GM and other auto manufacturers mired in what has become an epidemic of recalls to discuss why repairs are dragging on so long.
Denial, lack of information, hunkering down (bunker mentality), secrecy, silo mentality and fears about the impact on the bottom line all contribute to disastrous recalls. With all recalls, there are the cost of the recall itself, the complete or partial loss (or loss of use) of certain products, repair costs in some cases (GM), regulatory scrutiny and fines, class action and other lawsuits, and the loss of potential income during any shutdown. These can all be big-ticket items, and some companies will not survive these expenses and loss of revenue. Probably the biggest cost of any recall is the cost to reputation, which can mean loss of existing and future customers. In recent years, lettuce growers and a peanut warehouse did not survive recalls over contaminated products. In the case of primary agricultural producers like growers and peanut warehouses, the processors simply change suppliers, leaving the primary producers without any customers. In the retail market, the competition for shelf space is high. Recalled brands that are new or that do not command high customer value are simply denied shelf space, effectively destroying their ability to market their products. However, other brands have a strong following and even cult-like status in local markets. Blue Bell Creameries (famous for its ice cream) is one such company, with an almost cult-like following in the Southern and Midwestern states. Blue Bell, founded in 1907, maintains its headquarters in the small town of Brenham, TX (pop. 16,000). Problems began when hospitals in Arizona, Kansas, Oklahoma and Texas reported patients suffering from an outbreak of listeria-related diseases, some as early as 2010. Some reports included the deaths of patients.
On May 7, the FDA (Food and Drug Administration) and CDC (Centers for Disease Control and Prevention) reported that routine product sampling by the South Carolina Department of Health and Environmental Control at a distribution center on Feb. 12, 2015, had found listeria in Blue Bell products, but that it wasn't until April 2015 that a new listeria outbreak was traced to a common source: the Blue Bell Chocolate Chip Country Cookie Sandwich and the Great Divide Bar, manufactured in Brenham, Texas. Listeria is a bacterium that can cause fever and bowel-related discomfort and even more significant symptoms, especially in the young and elderly. Listeria can kill. Listeria is found naturally in both soil and water. Listeria can grow in raw and processed foods, including dairy, meat, poultry, fish and some vegetables. It can remain on processing equipment and on restaurant kitchen equipment, and when food comes in contact with contaminated equipment, the bacteria finds a ready-made food source in that food and multiplies. The FDA has issued guidance reports to food processors, preparers and restaurants on how to prevent listeria contamination. This includes proper preparation techniques, cleaning techniques, hygiene, testing and manufacturing and processing methodologies. Once Blue Bell understood that its cookie sandwiches and ice cream bars were implicated, the company immediately recalled the products. But soon it became evident to Blue Bell and others that this outbreak might not be limited to the ice cream bars or cookie sandwiches, and Blue Bell recalled all of its product and, to its credit, shut down all manufacturing operations. The FDA conducted inspections of Blue Bell plants, and in late April and early May produced reports on three plants, noting issues of cleanliness and process that were conducive to listeria growth. The FDA has also reported that Blue Bell allegedly had found listeria in its plants as far back as 2010 but never reported this to the FDA.
As of this writing, Blue Bell plants are still shut down. The FDA investigation has come to a close, but many questions remain. The company has cut 1,450 jobs, or more than a third of its work force, and has said it will reenter the market only gradually, after it has proved it can produce the ice cream safely. The question is whether the steps Blue Bell has taken (the quick recall, first of the problem products and then of all products, and the closure of plants to mitigate contamination issues) are enough to save Blue Bell from further damage in the eyes of consumers and the stores that sell the product. There are many tough questions to be answered going forward. In the intervening months, will competitors replace Blue Bell with their own products that consumers feel compare favorably? If so, when Blue Bell products are returned to stores, will consumers return, or has the stigma of listeria and the acceptance of the taste of comparable products weakened the brand? Will stores give Blue Bell adequate shelf space? And does Blue Bell have enough of a cult following and viral fan base that, once product is back in stores, customers will return as if nothing had happened? These are the scary questions that affect all food and drug companies when recalls stem from contamination in their own plants or those in their supply chain. The American consumer seems to have become numb to the endless succession of automobile recalls from just about all manufacturers. We dutifully return our vehicles to the dealer to fix a broken or faulty this or that. Even though many recalls involve parts or processes that could cause car accidents, injuries and deaths, it is as if we have come to accept faulty auto products as the norm. This is not the case with food-borne illnesses.
The fact that a faulty car can kill as easily as a contaminated food product seems not to be an issue, as people return again and again to buy new cars from the same manufacturer that issued five recalls on their last purchased model. However, consumers will shun the food brand that made some people ill. This bifurcated approach to risk makes no sense, even in the context of protecting children from harm. The faulty car that mom drives the kids around in every day may have the same probability of injuring or killing her child as the recalled food brand. She doesn't abandon her car, but she bans the recalled food brand from her table.

In 1990, Perrier discovered benzene in its sparkling water product. It quickly recalled all its product but then hunkered down into a bunker mentality. Perrier's lack of communication about the problem and what it was doing exacerbated the fears of consumers, and press speculation and outcry ran high. Perrier had always touted the purity of its water, so toxic benzene shattered that claim. Hunkering down reduced consumer confidence, and many customers left Perrier for suitable alternative products. Perrier has never regained the market share it had previously.

Blue Bell has taken the time to do things right: to find the causes of the problem and take the steps necessary to prevent contamination in the future. But time also means that existing or even new competitors with comparable products will try to fill the shelf space vacated by Blue Bell's absence. You can be sure that other regions' favorites with cult followings, brands that could never before gain a foothold in Blue Bell's territory, have been pressuring retailers to try them out as a replacement for Blue Bell. Is the Perrier loss of market share inevitable for Blue Bell, even if Blue Bell communicates adequately and with transparency? Time will tell.
For now, Blue Bell not only has to fix the problems of plant cleanliness, it also needs to address emerging questions about its past operations, such as allegedly failing to report earlier listeria findings to the appropriate authorities.

While we note the good press that surrounded the 1982 Tylenol (external-tampering) recall and have seen so far a good effort by Blue Bell to resolve its own plant contamination issue, ultimately it is contamination that is the problem. Companies can become complacent, let cleanliness slide, use outmoded procedures, fail to replace older equipment or even ignore warning signs and isolated contamination events. Regional and limited-product-line companies need to be especially cognizant that, even though they have carved out a powerful niche in the marketplace, maintaining that niche is tenuous at best in the highly competitive world of food products. Consumers assume cleanliness and freedom from contamination. Food processors and manufacturers must do everything possible to keep that assumption from being contradicted.

The Sad State of Continuing Education

Should it really be possible to spend minutes on a continuing education course and get hours of credit? One that's open book? On ethics?

About 25 years ago, I attended an education committee meeting at the Southern Agents Conference in Atlanta. Continuing education (CE) had really just gotten started in some states. At this meeting, legendary insurance educator Bob Ross, of the Florida Big I, literally stood on his chair at the conference table and declared that mandatory CE would be the death of quality education. Has his prediction come true?

Four years ago, I posted the following on a LinkedIn discussion:

"A colleague related a recent experience to me last week. He went to one of the best-known online insurance CE web sites and signed up for a course titled 'Consumer Insurance.' He registered as a new user in the system, perused the course catalog, signed up for the course, skipped the course material, took the test, and earned 3 hours of CE credits. All in 16 minutes.

"He was also able to save the exam and email it to me (and, of course, anyone else taking the course). The test was loaded with vaguely worded questions and misspelled words and insurance terms (like 'vessals' and 'ordinance IN law' coverage). For some test questions, no right answer was listed or more than one answer was correct.

"In the spirit of one-upmanship, I told him about my experience 11 years ago, when online CE was just getting started. I registered at a vendor's web site and, like him, went straight to the test. I forget the exact total time required to register and take the 50-question test, but it was around a half hour, I think, and definitely less than an hour. The CE credit for this personal auto course? 25 HOURS. To quote the late Jack Paar, 'I kid you not.'

"Afterward, I browsed the material, and it was full of general consumer-type information taken directly from the Insurance Information Institute. The hours of CE credit granted by the state DOI were based on a word count, with complete disregard for the difficulty level.
"One thing I remember about this vendor was that it used what it called 'Split Screen Technology.' What that meant was, while you were taking the test on one side of the screen, you could view the course content that went with that test question's topic on the right side and browse for the answer to the question. Browsing for the answer was easy, given that the relevant information was highlighted.

"So where are we 11 years later? Apparently in the same boat, except that online insurance education is much more pervasive than it was then. You can get two years of CE credit for as little as $39.95. A great bargain, if your interest is in regulatory compliance and not actually learning something that will benefit you, your agency and the consumers and businesses you serve....

"Is there no accountability? Is there no desire to truly educate ourselves? Does anyone care? Is anyone listening?"

Flash forward to 2015.... An agent and friend I know (a good agent, a CE course instructor, an upstanding guy) waited until the last minute to complete his biennial CE requirement last year. So he went online, found the course he wanted, signed up, went straight to the exam, and in 23 minutes had completed three hours of CE credits. As they say, the more things change, the more they stay the same. And did I mention that the course was to comply with his state's three-hour ETHICS requirement?

There is an online insurance forum with a discussion called "Any Suggestions on Best Online CE Site?" It has comments such as: "I use XXXXX.com. About $35 for 21 hours of credit. Takes a few hours (maybe two) to finish and is open book." My tongue-in-cheek response (recalling my agent friend's experience a few months earlier) was, "I hope it wasn't an ethics course!" The poster's response: "Huh? I guess you think each hour of CE should take an hour? Unless it's a LIVE CE class... CE courses don't take that long.
I get unlimited CE from [provider's name] for $39.95 per year... including a 16-hour Ethics CE course... that takes me about 15 minutes to complete. And, yes, they are open-book courses, too."

On another discussion board, someone was touting a "Fast, Easy, and Affordable Continuing Education" website. There was no mention of the quality or relevance of the course material or whether any actual learning is involved. The site proudly proclaims a passing ratio of "over 98%."

What would regulators do if the passing ratio of their licensing exams were more than 98%? I suspect they'd insist that the exams be made a little tougher. Is any exam a legitimate test of learning if the passing ratio approaches 100%? Then why do regulators allow online CE programs that take a half hour to earn 20 hours or more of CE credit and include exams with passing ratios near 100%?

The web site in question has 91 reviews... NONE of them mentions whether the reviewer actually learned anything. (If you're actually looking to learn, the best place to start looking is your own agent association, which has a vested interest in providing you with the best education possible.)

So what do you think? Am I just a grumpy old man? Should anything be done about the diploma mills that have proliferated? If so, what? If not, why not?

Debunking 'Opt-Out' Myths (Part 1)

Some myths are based on misunderstanding -- some on misinformation spread by those with a vested interest in preserving a flawed system.

Those who believe in the current workers' compensation system share objectives with those who believe that companies should have the ability to "opt out." We all want quality care for injured workers, better medical outcomes, fewer disputes, a fair profit for insurance companies and the lowest possible costs to employers. However, supporters of "options" to workers' compensation object to a one-size-fits-all approach to achieving these objectives. They want to be able either to subscribe to the current workers' comp system or to provide coverage to workers through other means.

The Texas nonsubscriber option has proven beneficial for injured workers, employers and insurance carriers for more than 20 years. The Oklahoma Option has been in effect for one year and is delivering promised results for injured workers and employers, including lower workers' compensation costs. Legislation to provide for options in Tennessee and South Carolina was introduced earlier this year.

New laws need to be studied carefully. They take time to develop, understand and implement. Injury claims also take time to properly process and evaluate. That is part of the challenge. It takes time to develop the facts of every claim and to hear everyone's story. The true test of whether a law or new system works is the outcomes it produces over time. Option opponents should take some time to review the results being achieved now in Texas and Oklahoma, and the fact that the Tennessee and South Carolina options are built upon the exact same principles that have led to happier employees and substantial economic development.

To cover the issues related to workers' comp options, I am writing an eight-part, weekly series. This overview is Part 1. The remaining seven will be:

Part 2: Low-Hanging Fruit. Dispelling some of the most common myths about workers' comp options. Sometimes, these myths arise simply from misunderstandings.
Sometimes, they are outright lies in a desperate attempt to maintain the status quo for workers' compensation programs that are championed only by a subset of interested insurance carriers, regulators and trial lawyers.

Part 3: Homework and Uninformed Hostility. Everyone complains about the inefficiencies, poor medical outcomes, cost shifting and expense of workers' compensation systems until a viable, proven solution is presented. Then, suddenly, everyone loves workers' comp? It's time to take a breath and look at some homework.

Part 4: Option Impact on Workers' Compensation Systems and Small Business. Does an option force employers to do anything? Does an option force changes to the workers' compensation system? Are all workers' compensation carriers opposed to options? Should past workers' compensation reforms just be given more time to take hold? Do options hurt the state system by depopulating it of good risks? Do options increase workers' comp premiums for small business? Is the option just for big companies, and do they all elect it?

Part 5: Litigation Uncertainties. Are Texas negligence liability claims out of control? Should Oklahoma Option litigation delay other state legislatures? Should Oklahoma Option litigation further delay employers from electing the option? Does an option create animosity between business and labor?

Part 6: Option Program Transparency and Other "Checks and Balances." Are immediate injury reporting requirements unfair? Are option benefits simply paid at the discretion of the employer? Are option programs "secretive," providing no "transparency"? Are there other "checks and balances"?

Part 7: Option Program Benefit Levels and Liability Exposures. Are option benefits less than workers' compensation benefits? Are option benefits less than workers' compensation because of taxes? Where do the savings come from?

Part 8: Impact on State and Federal Governments. Do option programs shift more cost to state and federal governments? Do option programs increase state and federal regulatory costs? Do option programs give up state sovereignty over workers' compensation?

Bill Minick

Bill Minick is the president of PartnerSource, a consulting firm that has helped deliver better benefits and improved outcomes for tens of thousands of injured workers and billions of dollars in economic development through "options" to workers' compensation over the past 20 years.