Innovation Spreads in U.S. State by State

Innovation can start simple. It does not always have to start with a big bang. We all know that doing anything in insurance across 50 states is often complex, expensive and slow, but it is possible to begin with an idea for a new model, product, channel or line of business. Decide which state(s) to start with, and then run with it. What a great way to launch and experience innovation.

This past year, we have seen new distribution models springing up on both coasts — auto in California and life in Massachusetts. Let your imagination take over, and you can envision innovation spreading across our industry… across every state in our nation… and beyond.

We have two great examples of East Coast and West Coast success stories. The West Coast example is Google Compare. Google is essentially acting as a partner for insurers — helping them become more customer-centric by taking distribution capabilities to where the customers are, on the platforms that customers are using. With a history of being very accurate at predicting what customers want, Google is bringing that strength to the insurance industry. Google Compare started with personal auto, in the state of California, and with only a few insurers — with the goal of adding more states, more lines and more insurer partners.

The East Coast reference to innovation in Massachusetts involves Haven Life. Haven Life offers a fully online customer experience for buying term life insurance. In addition to near-instant quoting, Haven Life offers information and tools that take the complexity and the complications out of buying life insurance — making the process much quicker and the decision simpler for the customer.

The East/West examples demonstrate how creative thinking and novel approaches are delivering new, profitable and productive business models that will reshape tomorrow — by starting simple.

At SMA, we regularly track solution investment trends and the progress of maturing and emerging technology. There isn’t a day that goes by that I don’t see or hear about something exciting and inventive taking place in our industry. That wouldn’t have been true five years ago. This is a critical time for our industry — a time of great opportunities. It is a period when the possibilities are endless — thanks to a formidable combination of technology capabilities and changing customer expectations and behaviors.

Where Have the Hurricanes Gone?

Last year’s hurricane season passed off relatively quietly. Gonzalo, a Category 2 hurricane, hit Bermuda in October 2014, briefly making the world’s headlines, but it did relatively little damage, apart from uprooting trees and knocking out power temporarily to most of the island’s inhabitants.

It is now approaching 10 years since a major hurricane hit the U.S., when four powerful hurricanes — Dennis, Katrina, Rita and Wilma — slammed into the country in the space of a few months in 2005.

It shouldn’t be so quiet. Why? Put simply, the warmer the Atlantic Ocean is, the more potential there is for storms to develop. The temperatures in the Atlantic basin (the expanse of water where hurricanes form, encompassing the North Atlantic Ocean, the Gulf of Mexico and the Caribbean Sea) have been relatively high for roughly the past decade, meaning that there should have been plenty of hurricanes.

There have been a number of reasons put forward for why there has been a succession of seasons when no major storms have hit the U.S. They include: a much drier atmosphere in the Atlantic basin because of large amounts of dust blowing off the Sahara Desert; the El Niño effect; and warmer sea surface temperatures causing hurricanes to form further east in the Atlantic, meaning they stay out at sea rather than hitting land.

Although this is by far the longest run in recent times of no big storms hitting the U.S., it isn’t abnormal to go several years without a big hurricane. “From 2000 to 2003, there were no major land-falling hurricanes,” says Richard Dixon, group head of catastrophe research at Hiscox. “Indeed, there was only one between 1997 and 2003: Bret, a Category 3 hurricane that hit Texas in 1999.”

There then came two of the most devastating hurricane seasons on record in 2004 and 2005, during which seven powerful storms struck the U.S.

The quiet before the storm

An almost eerie calm has followed these very turbulent seasons. Could it be that we are entering a new, more unpredictable era when long periods of quiet are punctuated by intense bouts of violent storms?

“Not necessarily,” Dixon says. “Neither should we be lulled into a false sense of security just because no major hurricanes — that is Category 3 or higher — have hit the U.S. coast.”

There have, in fact, been plenty of hurricanes in recent years — it’s just that very few of them have hit the U.S. Those that have — Irene in 2011 and Sandy in 2012 — had only Category 1 hurricane wind speeds by the time they reached the U.S. mainland, although both still caused plenty of damage.

The number of hurricanes that formed in the Atlantic basin each year between 2006 and 2013 has been generally in line with the average for the period since 1995, during which ocean temperatures have been elevated relative to the “cold phase” that stretched from the early 1960s to the mid-1990s.

On average, around seven hurricanes have formed each season in the period 2006-2013, roughly three of which have been major storms. “So, although we haven’t seen the big land-falling hurricanes, the potential for them has been there,” Dixon says.

Why the big storms that have brewed have not hit the U.S. comes down to a mix of complicated climate factors — such as atmospheric pressure over the Atlantic, which dictates the direction, speed and intensity of hurricanes, and wind shear, which can tear a hurricane apart.

There have been several near misses: Hurricane Ike, which hit Texas in 2008, was close to being a Category 3, while Hurricane Dean, which hit Mexico in 2007, was a Category 5 — the most powerful category of storm, with winds in excess of 155 miles per hour.

That’s not to say there isn’t plenty of curiosity about why there have recently been no powerful U.S. land-falling hurricanes. This desire to understand exactly what’s going on has prompted new academic research. For example, Hiscox is sponsoring postdoctoral research at Reading University into the atmospheric troughs known as African easterly waves. Although it is known that many hurricanes originate from these waves, there is currently little understanding of how the intensity and location of the waves change from year to year and what impact they might have on hurricane activity.

Breezy optimism?

The dearth of big land-falling hurricanes has both helped and hurt the insurance industry. Years without any large bills to pay from hurricanes have helped the global reinsurance industry’s overall capital to reach a record level of $575 billion by January 2015, according to data from Aon Benfield.

But, as a result, competition for business is intense, and prices for catastrophe cover have been falling, a trend that continued at the latest Jan. 1 renewals.

Meanwhile, the values at risk from an intense hurricane are rising fast. Florida — perhaps the most hurricane-prone state in the U.S. — is experiencing a building boom. In 2013, permissions to build $18.2 billion of new residential property were granted in Florida, the second-highest amount in the country behind California, according to U.S. government statistics.

“The increasing risk resulting from greater building density in Florida has been offset by the bigger capital buffer the insurance industry has built up,” says Mike Palmer, head of analytics and research at Hiscox Re. But, he adds: “It will still be interesting to see how the situation pans out if there’s a major hurricane.”

Of course, a storm doesn’t need to be a powerful hurricane to create enormous damage. Sandy was downgraded from a hurricane to a post-tropical cyclone before making landfall along the southern New Jersey coast in October 2012, but it wreaked havoc as it churned up the northeastern U.S. coast. The estimated overall bill has been put at $68.5 billion by Munich Re, of which around $29.5 billion was picked up by insurers.

Although Dixon acknowledges that the current barren spell of major land-falling hurricanes is unusually long, he remains cautious. “It would be dangerous to assume there has been a step change in major-land-falling hurricane behavior.”

Scientists predict that climate change will lead to more powerful hurricanes in coming years. If global warming does produce warmer sea surface temperatures, the evidence suggests that big storms will tend to grow in intensity.

Even without the effects of climate change, the factors are still in place for there to be some intense hurricane seasons for at least the next couple of years, Dixon argues. “The hurricane activity in the Atlantic basin in recent years suggests to me that we’re still in a warm phase of sea surface temperatures — a more active hurricane period, in other words. So we certainly shouldn’t think that 2015 will necessarily be as quiet as the past few have been.”

Storm warning

Predictions of hurricanes are made on a range of timescales, and the skill involved in these varies dramatically. On short timescales (from days to as much as a week), forecasts of hurricane tracks are now routinely made with impressive results. For example, Hurricane Gonzalo was forecast to pass very close to Bermuda more than a week before it hit the island, giving its inhabitants a chance to prepare. Such advances in weather forecasting have been helped by vast increases in computing power and by “dynamical models” of the atmosphere.

These models divide all or part of the globe into a grid and work out climatic factors, such as sea surface temperature and atmospheric conditions, for each grid square. Using this information and a range of equations, they are then able to forecast the behavior of the atmosphere over the coming days, including the direction and strength of tropical storms.
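
To make the grid idea concrete, here is a minimal sketch in Python of how a single atmospheric field can be stepped forward in time on a coarse grid under an assumed steering wind. It is an illustration only, not code from any operational forecasting system; the grid spacing, wind speed and time step are all invented for the example.

```python
# Minimal sketch of stepping one atmospheric field forward on a coarse grid.
# Real dynamical models solve the full equations of the atmosphere for many
# coupled variables; everything here (grid spacing, wind, time step) is an
# assumed, simplified stand-in for illustration only.
import numpy as np

NLAT, NLON = 60, 120     # coarse grid covering part of the globe
DX = 100_000.0           # assumed grid spacing in meters (roughly 1 degree)
DT = 600.0               # time step in seconds (10 minutes)

def step(field, u, v):
    """Advance the field one time step with a simple upwind scheme."""
    dfdx = (field - np.roll(field, 1, axis=1)) / DX   # gradient in x
    dfdy = (field - np.roll(field, 1, axis=0)) / DX   # gradient in y
    return field - DT * (u * dfdx + v * dfdy)

# Initial condition: a single disturbance sitting in the middle of the grid.
field = np.zeros((NLAT, NLON))
field[28:32, 58:62] = 1.0

# Assumed steady eastward steering wind of 10 m/s, no north-south wind.
u = np.full((NLAT, NLON), 10.0)
v = np.zeros((NLAT, NLON))

for _ in range(144):     # one simulated day in 10-minute steps
    field = step(field, u, v)

# The disturbance has drifted east by roughly (10 m/s * 1 day) / DX grid cells.
print("disturbance now centered near column", int(field.max(axis=0).argmax()))
```

Even in this toy version, the limitation discussed below is apparent: anything smaller than a grid square, such as an individual thunderstorm, simply cannot be represented.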

But even though computing power has improved massively in recent years, each of the grid squares in the dynamical models typically corresponds to an area of many square miles, so it’s impossible to take into account every cloud or thunderstorm in that grid that would contribute to a hurricane’s strength. This, combined with the fact that it is impossible to know the condition of the atmosphere everywhere, means there will always be an element of uncertainty in the forecast. And while these models can do very well at predicting a hurricane’s track, they currently struggle to do as good a job with storm intensity.

Pre-season forecasts

Recent years have seen the advent of forecasts aimed at predicting the general character of the coming hurricane season some months in advance. These seasonal forecasts have been attracting increasing media fanfare and go as far as forecasting the number of named storms, of powerful hurricanes and even of land-falling hurricanes.

Most are not based on complicated dynamical models (although these do exist) but tend to rely on statistical models that link historical data on hurricanes with atmospheric variables, such as El Niño. But as Richard Dixon, Hiscox’s group head of catastrophe research, says: “There is a range of factors that can affect the coming hurricane season, and these statistical schemes only account for some of them. As a result, they don’t tend to be very skillful, although they are often able to do better than simply basing your prediction on the historical average.”
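
As a rough illustration of what such a statistical scheme looks like, the sketch below (in Python, with invented numbers) fits a single-predictor relationship between an El Niño index and seasonal hurricane counts and uses it as a pre-season forecast. Real schemes use several predictors and far more careful validation; this only shows the general idea.

```python
# A minimal sketch of a statistical seasonal forecast: relate Atlantic
# hurricane counts to a single predictor (an El Nino index) and use the
# fitted relationship as a pre-season forecast. The history below is made
# up for illustration only.
import numpy as np

# Hypothetical past seasons: ENSO index (negative = La Nina) and hurricane counts.
enso   = np.array([-1.2, -0.5, 0.3, 1.1, -0.8, 0.0, 1.5, -1.0, 0.6, -0.3])
counts = np.array([   9,    8,   6,   4,    8,   7,   3,    9,   5,    7])

# Fit a simple linear model: expected count = a * ENSO index + b.
a, b = np.polyfit(enso, counts, 1)

def forecast(enso_forecast):
    """Predicted hurricane count for a given pre-season ENSO forecast."""
    return a * enso_forecast + b

print(f"Climatological average: {counts.mean():.1f} hurricanes")
print(f"Forecast for a moderate El Nino (index +1.0): {forecast(1.0):.1f}")
```

When the predictor carries genuine signal, a scheme like this can beat the raw historical average, which is essentially the modest level of skill Dixon describes.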

It would be great if the information contained in seasonal forecasts could be used to help inform catastrophe risk underwriting, but as Mike Palmer, head of analytics and research for Hiscox Re, explains, this is a difficult proposition. “Let’s say, for example, that a seasonal forecast predicts an inactive hurricane season, with only one named storm compared with an average of five. It would be tempting to write more insurance and reinsurance on the basis of that forecast. However, even if it turns out to be true, if the single storm that occurs is a Category 5 hurricane that hits Miami, the downside would be huge.”

Catastrophe models

That’s not to say that there is no useful information about hurricane frequency that underwriters can use to inform their underwriting. Catastrophe models provide the framework to allow them to do just that. These models have become the dominant tools by which insurers try to predict the likely frequency and severity of natural disasters. “A cat model won’t tell you what will happen precisely in the coming year, but it will let you know what the range of possible outcomes may be,” Dixon says.
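
To show what “the range of possible outcomes” means in practice, here is a minimal Monte Carlo sketch of the cat-model idea: simulate many hypothetical years of hurricane losses and count how often annual losses exceed various levels. The frequency, severity and loss figures are invented assumptions, not numbers from any actual catastrophe model.

```python
# Minimal sketch of a catastrophe-model loss simulation. The Poisson event
# frequency and lognormal severity below are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
N_YEARS = 100_000                      # number of simulated years

# Assumed number of loss-causing hurricanes per year, and a loss per event.
events_per_year = rng.poisson(lam=1.8, size=N_YEARS)
annual_loss = np.array([
    rng.lognormal(mean=20.0, sigma=1.5, size=n).sum()   # sum of event losses
    for n in events_per_year
])

# Exceedance probabilities: how often simulated annual losses exceed a level.
for threshold in (1e9, 5e9, 2e10):
    prob = (annual_loss > threshold).mean()
    print(f"P(annual loss > ${threshold:,.0f}) ~ {prob:.3f}")
```

An underwriter would read the output as exceedance probabilities — say, a roughly 1-in-50-year chance of losses above a given level — rather than as a prediction of what next season will bring.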

The danger comes if you blindly follow the numbers, Palmer says. That’s because although the models will provide a number for the estimated cost, for example, of the Category 5 hurricane hitting Miami, that figure masks an enormous number of assumptions, such as the expected damage to a wooden house as opposed to a brick apartment building.

These variables can cause actual losses to differ significantly from the model estimates. As a result, many reinsurers are increasingly using cat models as a starting point to working out their own risk, rather than using an off-the-shelf version to provide the final answer.

Terrorism Risk: A Constant Reminder

With just months to go until the year-end 2014 expiration of the government-backed Terrorism Risk Insurance Program Reauthorization Act (TRIPRA), the debate between industry and government over terrorism risk is intensifying.

The discussion comes in a year that marks the one-year anniversary of the Boston Marathon bombing—the first successful terrorist attack on U.S. soil in more than a decade. The April 15, 2013, attack left three dead and 264 injured.

Industry data shows that the proportion of businesses buying property terrorism insurance (the take-up rate for terrorism coverage) has increased since the enactment of the Terrorism Risk Insurance Act (TRIA) in 2002, and for the last five years has held steady at around 60% as businesses across the U.S. have had the opportunity to purchase terrorism coverage, usually at a reasonable cost.

However, should TRIPRA not be extended, brokers have warned that the availability of terrorism insurance would be greatly reduced in areas of the U.S. that have the most need for coverage, such as central business districts. Uncertainty around TRIPRA’s future is already creating capacity and pricing issues for insurance buyers in early 2014, reports suggest.

New Aon data show that retail and transportation sectors face the highest risk of terrorist attack in 2014. Both sectors were significantly affected in 2013, as highlighted by the Sept. 21, 2013, attack by gunmen on the upscale Westgate shopping mall in Nairobi, Kenya, as well as the Boston bombing.

The vulnerability of the energy sector to a potential terrorist attack has also been highlighted by an April 2013 assault on a California power substation, in which snipers knocked out 17 transformers at the Silicon Valley facility.

The Boston Marathon attack—twin explosions of pressure cooker bombs occurring within 12 seconds of each other in the Back Bay downtown area—adds to a growing list of international terrorism incidents that have occurred since the terrorist attack of Sept. 11, 2001, and highlights the continuing terrorism threat in the U.S. and abroad.

Following 9/11, the 2002 Bali bombings, the 2004 Russian aircraft and Madrid train bombings, the 2005 London transportation bombings and the 2008 Mumbai attacks all had a profound influence on the decade from 2001 to 2010. Then came 2011, a landmark year that saw both the death of al-Qaida founder Osama bin Laden and the 10-year anniversary of the Sept. 11 attacks.

While the loss of bin Laden and other key al-Qaida figures put the network on a path of decline that is difficult to reverse, the State Department warned that al-Qaida, its affiliates and adherents remained adaptable and resilient and constitute “an enduring and serious threat to our national security.”

A recently published RAND study finds that terrorism remains a real—albeit uncertain—national security threat, with the most likely scenarios involving arson or explosives being used to damage property or conventional explosives or firearms used to kill and injure civilians.

The Boston bombing serves as an important reminder that countries also face homegrown terrorist threats from radical individuals who may be inspired by al-Qaida and others, but have little or no actual connection to known militant groups.

In a recent briefing, catastrophe modeler RMS assesses that the U.S. terrorist threat will increasingly come from such homegrown extremists, who, because of the highly decentralized structure of such “groups,” are difficult to identify and apprehend.

Until the Boston bombing, attempted attacks of this kind had been thwarted, including the 2010 attempted car bombing in New York City’s Times Square and Najibullah Zazi’s 2009 plot to bomb the New York subway system.

Other thwarted attacks against passenger and cargo aircraft indicate the continuing risk to aviation infrastructure. The investigation into the March 7, 2014, disappearance over the South China Sea of Malaysia Airlines flight 370, with 239 people on board, has raised many concerns over the vulnerability of aircraft to terrorism.

[Table: Recently thwarted terrorist attack attempts in the U.S. Source: Federal Bureau of Investigation (FBI); various news reports; Insurance Information Institute]

Counterterrorism success in 2011 came as a number of countries across the Middle East and North Africa saw political demonstrations and social unrest. The movement known as the Arab Spring was triggered initially by an uprising in Tunisia that began in December 2010. Unrest and instability in the region continue in 2014 and have spread to other parts of the world, with violent protests seen most recently in Ukraine, Venezuela and Thailand.

Another evolving threat is cyber terrorism. The threat both to national security and the economy posed by cyber terrorism is a growing concern for governments and businesses around the world, with critical infrastructure, such as nuclear power plants, transportation and utilities, at risk.

All these factors suggest that terrorism risk will be a constant, evolving and potentially expanding threat for the foreseeable future.

The FIO Report on Insurance Regulation

The December 2013 issuance of the Federal Insurance Office (FIO) report, How to Modernize and Improve the System of Insurance Regulation in the United States, may in hindsight be regarded as more momentous an occasion for the industry and its regulation than the muted initial reaction might suggest. History’s verdict most likely will depend on the effectiveness of the follow-up to the report by both the executive and legislative branches, but current trends in financial services regulation may serve to increase the importance and influence over time of the FIO even in the face of inaction in Washington.

Insurance regulation has traditionally been the near-exclusive province of the states, a right jealously guarded by the states and secured by Congress in 1945 after the Supreme Court ruled insurance could be regulated by the federal government under the Commerce Clause of the Constitution.

Any fear that the FIO report would call for an end to state regulation proved unfounded, but industry members might be well-advised to prepare for the eventualities that may result as the FIO uses both the soft power of the bully pulpit and the harder power of the federal government to achieve its aims. As the designated U.S. insurance representative in international forums that more and more mold financial services regulation, and as an arbiter of standards that could be imposed on the states, the FIO and this report should not be ignored.

Having met with the FIO’s leadership team, we believe there are concerns that uniformity at the state level cannot be achieved without federal involvement. We further believe the FIO plans to work to translate its potential into an actual impact in the near future, making a clear-eyed understanding of the report and what it may herald for insurers a prudent and necessary step in regulatory risk management.

The concerns

The biggest surprise about the FIO report may well have been that there were no surprises. There were no strident calls for a wholesale revamp of the regulatory system, and praise for the state regulatory system was liberally mingled among the criticisms.

The lack of any real blockbusters in the details of the FIO report may seem to lend implicit support to those who foresee a continuation of the status quo in insurance regulation. But, taken as a whole, this report and the regulatory atmosphere in which it has been released should be considered a subtle warning of changes that may yet come.

The report may quietly help to usher in an acceleration of the current evolution of insurance regulation. The result could be a regulatory climate that offers more consistency and clarity for insurers and reduces the cost of regulation. The result could also be a regulatory climate that imposes more stringent requirements and increases both the cost of compliance and capital requirements. Most likely, the result will be a hybrid of the two.

Either way, preparing to influence and cope with any possible changes portended in the report would be preferable to ignoring the portents.

Part of the disconnect between the short-term reception and the long-term impact of this report may stem from the report’s implicit recognition that the political will needed to force any real changes in current U.S. insurance regulation is lacking, most especially for changes that would require increased expenditures or personnel at the federal level. In our current economic and political environment, plugging gaps in state regulation with measures that would require federal dollars may quite reasonably be construed to be off the table.

But the difference between identified problems and feasible solutions may offer an opportunity. States, industry and other stakeholders could act together to bring needed reform to the insurance regulatory system in a way that adds uniform national standards to regulation, reduces the possibility of regulatory arbitrage and maintains the national system of state-based regulation, all while recognizing the industry’s strengths and needs and not burdening the industry with unnecessary, onerous regulation.

There is much to praise in the current state regulatory system. A generally complimentary federal report on the insurance industry and the fiscal crisis of the past decade noted, “The effects of the financial crisis on insurers and policyholders were generally limited, with a few exceptions…The crisis had a generally minor effect on policyholders…Actions by state and federal regulators and the National Association of Insurance Commissioners (NAIC), among other factors, helped limit the effects of the crisis.”

While the financial crisis demonstrated the effectiveness of the current insurance regulation in the U.S., it is also evident that, as in any enterprise, there are areas for improvement. There are niches within the industry – financial guaranty, title and mortgage insurance come to mind – where regulatory standards and practices have proven less than optimal.

There are also national concerns that affect the industry. The lack of consistent disciplinary and enforcement standards across the states for agents, brokers, insurers and reinsurers is one obvious concern. Similarly, the inconsistent use of permitted practices and other solvency-related regulatory options could lead to regulatory arbitrage. At a time when insurance regulators in the U.S. call for a level playing field with rivals internationally, these regulatory differences represent an example of possible unlevel playing fields at home that deserve regulatory attention and correction.

A Bloomberg News story in January 2014, for example, quoted one insurer as planning to switch its legal domicile from one state to another because the change would allow, according to a spokeswoman for the company, a level playing field with rivals related to reserves, accounting and reinsurance rules.

For insurers operating within the national system of state-based regulation, one would hope that that level playing field would cross domiciles, and no insurer would be disadvantaged because of its domicile in any of the 56 jurisdictions.

But perhaps one of the greatest challenges to the state-based system of regulation is the added cost of that regulation, partly engendered by duplicative requests for information and regulatory structures that have not been harmonized among states. How to respond to that may represent the biggest gap in the FIO report. It may also be the biggest opportunity for both insurers and regulators to rationalize the current regulatory system and ensure the future of state-based regulation.

Cost

The FIO report notes that the cost per dollar of premium of the state-based insurance regulatory system “is approximately 6.8 times greater for an insurer operating in the United States than for an insurer operating in the United Kingdom.” It quotes research estimating that our state-based system increases costs for property-casualty insurers by $7.2 billion annually and for life insurers by $5.7 billion annually.

According to the report, “regulation at the federal level would improve uniformity, efficiency and consistency, and it would address concerns with uniform supervision of insurance firms with national and global activities.”

Yet the report does not recommend the replacement of state-based regulation with federal regulation, but with a hybrid system of regulation that may remain primarily state-based, but does include some federal involvement.

At least one rationale for this is clearly admitted in the report. As it says, “establishing a new federal agency to regulate all or part of the $7.3 trillion insurance sector would be a significant undertaking … (that) would, of necessity, require an unequivocal commitment from the legislative and executive branches of the U.S. government.”

The result of that limitation is a significant difference between diagnosis and prescription in the FIO report. Having diagnosed the cost of the state-based regulatory system as an unnecessary $13 billion burden on policyholders, the FIO offers policy recommendations that may be characterized, for the most part, as the policy equivalent of “take two aspirin and call me in the morning.”

Still, as the Dodd-Frank Act showed, even Congress can muster the will to impose regulatory solutions if a crisis becomes acute enough and broad enough. Unlikely as that may now seem, the threat of radical federal surgery should not be what it takes for states to move toward addressing the recommendations of the FIO report.

Indeed, actions of the NAIC over the past few years have addressed much of what is in the FIO report. Now the NAIC, industry and other stakeholders can take the opportunity provided by the report to work to resolve some of the issues identified in it. The possible outcome of an even greater federal reluctance to become involved in insurance regulation would only be a side benefit. The real goal should be a regulatory system that is more streamlined, less duplicative, more responsive, more cost-efficient and more supportive of innovation.

Kevin Bingham has shared this article on behalf of the authors of the white paper on which it is based: Gary Shaw, George Hanley, Howard Mills, Richard Godfrey, Steve Foster, Tim Cercelle, Andrew N. Mais and David Sherwood. They can be reached through him. The white paper can be downloaded here.

Minority-Contracting Compliance — Three Risks

On Jan. 13, 2014, the Department of Justice announced that two former executives of Schuylkill Products had been sentenced to two years in federal prison and forced to pay $119 million in restitution because of their role in what the FBI called the largest fraud involving the Department of Transportation’s Disadvantaged Business Enterprise (DBE) Program. A third individual, the owner of Marikina Construction, the firm that was used as a “front” in the scheme, received a prison sentence of nearly three years.

The sentencing of these individuals is not the result of an isolated incident. In recent years, federal prosecutors and the DOT inspector general have significantly stepped up enforcement of the DBE program and have brought several cases resulting in civil penalties and jail time. Some involved well-known international construction firms and their executives.

Here are three reasons why every contractor dealing with a federal, state or local minority contracting program needs to have proper compliance policies and procedures in place:

1. Jail Time and Civil Fines

Contractors that do not comply with the DBE program’s rules and regulations face the very real threat of jail time and civil fines. According to the DOT, DBE fraud now represents more than one-third of the DOT inspector general’s open cases. From Oct. 1, 2003, through Sept. 30, 2008, investigations of DBE fraud allegations resulted in 49 indictments, 43 convictions, nearly $42 million in recoveries and fines and 419 months of jail sentences. From 2009 to 2010, the number of open investigations related to DBE fraud increased by almost 70%. The number of investigations shows no signs of slowing, as the DOT is aggressively hiring additional investigative agents.

Under several legal doctrines, a defendant can be held liable when the evidence shows that the defendant intentionally avoided confirming certain facts and learning the truth.

2. Whistleblower Lawsuits

Under the Federal False Claims Act, every disgruntled employee is a potential bounty hunter. The act authorizes private individuals to bring a civil claim in the name of the U.S. against anyone who fraudulently obtained money or property from the government. The person who brings the action is entitled to a share of up to 30% of the amount recovered for the government.

Contractors can become the target of a False Claims Act case if they submit payment applications to the government that falsely certify that a certain percentage of work was performed by DBE firms. As in the criminal context, a contractor can be liable even without actual knowledge of the DBE fraud; reckless disregard for the truth or deliberate ignorance is sufficient.

3. Bid Rejections and Challenges

Strict minority set-asides or quotas are almost always unconstitutional. Disadvantaged business contracting programs, like the DOT’s DBE program, are not quotas (a fact that DOT underlines in its regulations). Rather, they are goals that contractors must use “good-faith efforts” to achieve. In fact, many contractors would be surprised to learn that a state transportation agency cannot reject a bid merely because it fails to include a commitment to subcontract work that meets or exceeds the stated DBE goal. However, for a bid to be accepted, the contractor must be able to demonstrate “good-faith efforts” to meet the stated DBE contracting goal. Because most state procurement codes require the award of a contract to the lowest responsible and responsive bidder, failing to document adequate good-faith efforts is grounds for a state transportation agency to reject a bid or for a challenge to be filed by a disgruntled bidder.

The risks that contractors face by not complying with minority contracting programs, particularly the DOT’s DBE program, cannot be ignored. At best, contractors that fail to comply face significant financial ramifications in the form of fines, expensive lawsuits and lost projects. At worst, executives and employees can wind up in jail.