
A Crucial Role for Annuity 'Structures'

Given the tax-exempt status and safety of structured annuity settlements, is any post-injury financial plan complete without one?

Every year millions of injured Americans confront critically important financial decisions as their personal injury litigation draws to a close. In planning the path forward and beyond their injuries, the stability and security of ongoing, lifelong income from their settlement, judgment or award proceeds becomes paramount. The money simply needs to last. Only one post-injury investment option -- structured settlement annuities, or "structures" -- provides a continuing tax exemption for the growth of such benefits. If the injured individual agrees to a lump sum settlement, the tax exemption for lifelong income disappears.

Since 1983, the Periodic Payment Settlement Act (see also Internal Revenue Code Sec. 104(a)(2)) has made all income from a structured settlement annuity over the lifetime of that individual entirely, unequivocally and absolutely tax-free. In contrast to a lump sum payment, in which only the initial payment is tax-free and all subsequent earnings are subject to all applicable forms of state and federal taxation, the structured settlement is treated as an insurance policy for payments rather than an asset to be taxed as it grows. This view, accepted by Congress in that 1983 Act, makes the value of a structure staggering. For example, if the injured individual deposits the lump sum settlement proceeds in a bank account, any interest earned is taxed accordingly. If the injured individual invests the money in taxable bonds or stocks, the interest and dividends are likewise classified as taxable income. With structured settlement annuity payments, however, neither the growing COLA payments, nor the scheduled lump sum payments, nor the payments beyond life expectancy are ever taxable. If the injured party were to decide on an annuity payout after receiving the funds, the tax benefit would be lost because the funds were accepted separately from the settlement. The critical element is that the structure must be accepted as the payout vehicle from the start.

The Tremendous Value of Tax-Free Status

The tax-free effect is quite dramatic. Consider an injured individual in the 28% tax bracket with a 2% fee for a traditional, market-based investment portfolio. In addition to the risk of a significant reduction or entire loss of funds, any income the investment does produce will face federal taxes that can reduce actual net income by 30%, before accounting for state and local taxes that could tack on another 5% reduction. None of these risks or reductions exists with income from a structured settlement. For an individual in the 10% tax bracket, a tax-free 4% return is equivalent to a pre-tax return of 4.44%; in the 15% bracket, the equivalent pre-tax return is 4.7%. While a peripheral advantage, the tax-free nature of the structure payout also means the recipient is not required to deal with the timing and accounting issues associated with paying estimated taxes on this money. With a taxable event, the taxes would be the quarterly responsibility of the recipient, and an error in dealing with the estimated taxes could create recurring tax problems. Structures, therefore, not only safeguard the injured individual from the volatility of the stock market; they provide continuing income that one can count on down to the penny and to the day. No wild, market-swing surprises. No reductions in income for taxes. No tax filings and accounting homework.
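The bracket comparison above follows the standard taxable-equivalent yield arithmetic: pre-tax yield = tax-free yield / (1 - marginal tax rate). A minimal sketch, with illustrative brackets only (the printed figures round to the 4.44% and 4.7% cited above):

```python
# Taxable-equivalent yield: the pre-tax return a taxable investment must earn to
# match a tax-free return, i.e., pre_tax = tax_free / (1 - marginal_rate).
def taxable_equivalent_yield(tax_free_yield: float, marginal_rate: float) -> float:
    return tax_free_yield / (1.0 - marginal_rate)

for bracket in (0.10, 0.15, 0.28):
    equivalent = taxable_equivalent_yield(0.04, bracket)
    print(f"{bracket:.0%} bracket: a tax-free 4% pays like a taxable {equivalent:.2%}")
```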
The Gift That Keeps Giving (and Gives in Other Ways)

In addition to the tax-free opportunity, there are other critical reasons to value the structured settlement for the injured individual. First and foremost, the structure enables the individual to couple the tax advantages with the capacity to schedule weekly income and significant payouts for future expenses like college tuition, wedding costs or retirement needs, to the day and to the penny, without any worry about market or 401(k) performance. In addition, because structures are considered a policy for payment rather than an asset, such proceeds generally do not affect eligibility for needs-based public assistance programs like AFDC or Medicaid, as lump sums do. Even if the injured individual is not on Medicaid at the time of the injury, eligibility for many programs -- in-facility care, for example -- often requires the absence of any significant assets. As a policy rather than an asset, structure income would be immune from eligibility consideration. Lump sums sitting in an investment account or a bank account are highly likely to be considered assets that must be eliminated for Medicaid eligibility. Quite simply, structures may very well be the best way to make sure that the money is peace-of-mind predictable, maximizes other income and benefits and lasts for a lifetime. However, with only 5% of eligible dollars placed in structured settlement annuities, billions in tax exemptions -- as well as the opportunity for continuing income security -- are squandered every year.

Is a Post-Injury Financial Portfolio "Balanced" Without a Structured Settlement?

While frequently considered an all-or-nothing option, the structured settlement annuity can be used for whatever portion of the settlement, judgment or award the injured individual chooses. As with all responsible portfolio plans, balance is a critical value. With a structure, an injured individual can tailor and fund her entire financial future. In addition to continuing payments, what is scheduled today will be there, exactly as needed, for a lifetime of tomorrows. It is possible, for example, to establish a college fund as part of the settlement that would both schedule and quantify tuition -- all tax-free. Given its value, security and stability, is any post-injury financial plan truly "balanced" without taking advantage of structured settlements? As a highly unusual, tax-free, benefits-exempted gift from the U.S. Congress to the nation's injured individuals, structures should be a critical feature in securing their financial futures.

Ken Paradis

Ken Paradis is CEO of Chronovo, a new force in the structured settlement annuity marketplace. Unwavering in his conviction that critical performance metrics, innovative product design, evangelical customers and inspired employees all fuel each other, Paradis has founded and led high-growth, niche-defining companies in the property and casualty industry.

3 Ways for Video to Reinvent Claims Work

Real-time video is making an impact, but video doesn't have to be live to make claims processing more efficient and satisfying to customers.

There are lots of great technologies and innovative products being developed that can help redefine the future of the claims process. Most recently, real-time video has been taking its place among industry disrupters such as drones, the Internet of Things and telematics. We've seen Esurance, USAA and Erie Insurance adopting various video technologies. Yet the use of video is still very narrow, focusing on real-time applications. In fact, there are three types of video capabilities, not just one, that can deliver a powerful opportunity to redefine the claims process.

Live or Real-time Video Streaming With the Insured

Live video streaming and video collaboration are among the most critical pieces in acquiring a quick visual of the claim from the hands of the policyholder at first notice of loss (FNOL) or in any subsequent conversation. This technology has proven to drive significant efficiency savings by accelerating the collection of claim information, improving triage and even enabling claims to be estimated and settled remotely. The largest impact of real-time video on the claims process comes from enabling quick resolution of small claims. Each organization sets its own threshold for what constitutes a small claim, but typically any claim above a certain threshold will still trigger a traditional field loss inspection. Yet insureds who are communicating from an area of poor connectivity, or who may not be comfortable using video streaming technology to settle their claims, will still require a field inspection. This is where there is an opportunity to apply video in another way to help streamline the field inspection process.

Field Video Claim Documentation

Unlike live video interaction, which is designed to help an inside claims professional see a transmission of what the policyholder is pointing at with a mobile device's camera, video documentation focuses on a different problem – how to improve the field documentation process and accelerate the collection, delivery and preparation of the claim report. A deeper look into the field operations shows that a field claims professional is overloaded with many responsibilities - traveling to the loss location, documenting the loss with pictures and preparing a report. With multiple assignments back to back, it becomes an almost impossible task to document and prepare a report one claim at a time. Instead, many claims are inspected with pictures and notes quickly taken on-site, and all the reports are prepared together once every day, every two days or even once a week. This approach delays the delivery of timely field information and hence delays getting the claims to closure. Claim cycle time is critical in ensuring high quality of customer satisfaction. Video claim documentation breaks up the claims handling process in two. It allows field claim professionals to focus on getting to the customer and conducting quality on-site inspections. Meanwhile, the inside claim teams can focus on processing the claim as soon as video content is delivered. To enable this process, the field claims professionals simply document the claim in video rather than pictures, speaking freely as they capture the video of the claim. Think of it as “visual voicemail” for claims. You may think that is nothing new. Everyone can take videos using smartphones. The challenge, however, is not whether video can be captured. The challenge is how the video content can be delivered into the business in a uniform and timely business process. This is where the right technological solution is needed to provide the means for field claims professionals to conveniently capture video in the field and deliver it to the inside teams. This means providing support for handling large video files, synchronizing video content, alerting about the arrival of new information and being able to support well-connected, low-bandwidth and “offline,” unconnected environments. What if the customer has already captured the claim on video? This scenario identifies the next workflow – customer self-service.

Customer Self-Service Videos

Studies have found that customers who participate in self-service during a well-handled issue report higher rates of satisfaction. They feel that they have been a direct, significant contributor to the positive resolution. Hence, allowing the customer to deliver video claim information to the insurance company is a big opportunity to increase customer satisfaction. Most customers capture loss information. Some take pictures. Others prefer to take video. Yet most organizations do not have a convenient way to acquire large files from the customer. Typically, pictures are acceptable as long as they can be sent over email or uploaded online or through a mobile app. Acquiring a large video file from the customer frequently runs into technical limitations and requires a different approach. To address customer self-service demands, it is important to account for two types of scenarios. First, imagine the customer calling to report the claim for the first time, before having recorded any visual information. In this scenario, the customer is instructed how to most effectively record the data and how to make the video available to the claims handling team. The second scenario provides the insured the ability to conveniently upload video of the claim to the organization after it has already been captured.

Takeaway

There are numerous additional workflows that can benefit from applying video capabilities, like underwriting, supplemental claims and contractor quality review. The key is that the right technological platform needs to be able to support all three key video workflows to cover the main scenarios that are encountered in the claims process. This includes not just providing the mobile technology to help capture or deliver videos to the inside claims teams, but also a convenient way for the inside claim handlers to receive, access and review the video content to complete the reports and settle the claims.

Alex Polyakov

Alex Polyakov is CEO of Livegenic, which delivers real-time video solutions to help organizations reduce costs, improve customer satisfaction and mitigate risks. Polyakov has more than 15 years designing enterprise solutions in many industries, including IT, government, insurance, pharmaceuticals and talent management.

Stocks: The Many Faces of Volatility

Amid all the volatility in prices of financial assets, it's important to understand the long-term trends -- which could favor stocks.

The current year has been characterized by increasing daily volatility in financial asset prices. This is occurring in bonds as well as stocks. In fact, through the first six months of this year, the major equity markets have been trading within a narrow price band, back and forth, back and forth. Enough to induce seasickness among the investment community. The S&P 500 ended 2014 at 2058. On June 30, 2015, the S&P closed at 2063. In other words, the S&P spent six months going up all of five points, or 0.2%. Yet if we look at the daily change in the S&P price, the S&P actually traveled 1,544 points, daily closing price to daily closing price, in the first six months of the year. Dramamine, anyone?
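The gap between the tiny net move and the much larger cumulative distance traveled is easy to see with a short sketch; the closing prices below are made-up placeholders, not actual S&P 500 data.

```python
# Net change vs. cumulative points traveled, daily close to daily close.
# These closing prices are illustrative placeholders, not actual S&P 500 data.
closes = [2058, 2090, 2045, 2110, 2040, 2063]

net_change = closes[-1] - closes[0]
points_traveled = sum(abs(later - earlier) for earlier, later in zip(closes, closes[1:]))

print(f"Net change: {net_change} points")            # a handful of points, point to point
print(f"Points traveled: {points_traveled} points")  # a far larger cumulative swing
```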
Price volatility seems to have increased, but point-to-point percentage price moves have actually been very small. When looked at within the context of an entire bull market cycle, a 3.5% price move in either direction is close to a rounding error. This is the face of volatility we have experienced over the first half of 2015. Not quite as scary as is portrayed in the media, right?

In one sense, what we have really experienced this year is what is termed a "sideways correction." Financial markets can correct in any number of ways. We usually think of a correction in prices as a meaningful drop. That is certainly one form of a correction, and never much fun. Markets can also correct in sideways fashion. In a sideways correction, the markets go back and forth, often waiting for fundamentals of the economy and corporate earnings to catch up with prices that have already moved. The markets are digesting prior gains. Time for a "time out." At least so far, this is what appears to be occurring this year. Make no mistake about it, sideways corrections heighten the perception of price volatility. That's why it is so important to step away from the day to day and look at longer-term market character. A key danger for investors is allowing day-to-day price volatility to influence emotions, and heightened emotions to influence investment decision making.

Two issues we do believe to be very important at this stage of the market cycle are safety and liquidity. We live in a world where central banks are openly debasing their currencies, where government balance sheets are deteriorating, where governments (to greater or lesser degrees) are increasing the hunt for taxes and where cash left in certain banking systems is being charged a fee (negative interest rates) just to sit. None of these actions is friendly to capital, which is why we see so much global capital on the move. It's simply seeking safety and liquidity. Is that too much to ask?

To understand where the money may go, it's important to look at the size and character of major global asset classes. In the chart below, we look at real estate and bond (credit) and stock markets. We've additionally shown the global money supply and gold. One of the key takeaways from this data is that the global credit/bond market is about 2.5 times as large as the global equity market. We have expressed our longer-term concern over bonds, especially government bonds. After 35 years of a bull market in bonds, will we have another 35 years of such good fortune? Not a chance. With interest rates at generational lows, the 35-year bond bull market isn't in the final innings; it's already in extra innings, thanks to the money printing antics of global central banks. So as we think ahead, we need to contemplate a very important question. What happens to this $160 trillion-plus investment in the global bond market when the 35-year bond bull market breathes its last and the downside begins? One answer is that some of this capital will go to what is termed "money heaven." It will never be seen again; it will simply be lost. Another possible outcome is that the money reallocates to an alternative asset class. Could 5% of the total bond market move to gold? Probably not, as this is a sum larger than total global gold holdings. Will it move to real estate? Potentially, but real estate is already the largest asset class in nominal dollar size globally. Could it reallocate to stocks? This is another potential outcome.
Think about pension funds that are not only underfunded but have specific rate-of-return mandates. Can they stand there and watch their bond holdings decline? Never. They will be forced to sell bonds and reallocate the proceeds. The question is where. Other large institutional investors face the same issue. Equities may be a key repository in a world where global capital is seeking safety and liquidity. Again, this is only a potential outcome. We simply need to watch the movement of global capital and how that is expressed in the forward price of these key global asset classes. Watching where the S&P ultimately moves out of the tight trading range seen this year will be very important. It will be a signal as to where global capital is moving at the margin among the major global asset classes. Checking our emotions at the door is essential. Not getting caught up in, or emotionally influenced by, the up and down of day-to-day price movement is essential. Putting price volatility and market movement into much broader perspective allows us to step back and see the larger global picture of capital movement. These are the important issues, not where the S&P closes tomorrow, or the next day. Or, for that matter, the day after that.

Brian Pretti

Brian Pretti is a partner and chief investment officer at Capital Planning Advisors. He has been an investment management professional for more than three decades. He served as senior vice president and chief investment officer for Mechanics Bank Wealth Management, where he was instrumental in growing assets under management from $150 million to more than $1.4 billion.

How to Think About Marijuana and Work

The short answer is: Just say no -- in a zero-tolerance policy, because safety must be paramount even as states legalize recreational marijuana.

With a flip of the calendar, on July 1, Oregon became the fourth state in which recreational marijuana use became legal. For many Oregon employers, this status change from illegal to legal wasn't a big deal. Medical marijuana is already legal in 24 states, including the Beaver State, and possessing less than an ounce was decriminalized in Oregon 40 years ago. Recreational marijuana is just a new twist on an old story. All it really means is you can't go to jail (or be fined) for smoking pot recreationally. However, this "non-event" has made risk managers ponder the ramifications of recreational use, especially for their employees who work in the manufacturing industry. Manufacturers have strict policies to ensure a safe work environment. It goes without saying that people who are under the influence at work in a manufacturing or an industrial setting are far more likely to be injured on the job. Being stoned at work should be treated no differently than being under the influence of alcohol or prescription medication. You certainly can't show up drunk for work. The employer is responsible for that employee as soon as he walks onto the job. Any drug use that affects an employee's ability to perform the job should be a genuine concern for the employer. The difficulty for employers is that there is no scientific method to determine a marijuana intoxication level, unlike a blood-alcohol level. Until there is definitive scientific evidence, employers are being advised to err on the side of safety and forbid employees from being under the influence of marijuana. To do that, the employer needs a crystal-clear, zero-tolerance policy. Unless the employer has been living in a cave the past 50 years, it already has such a policy. But it should be updated to specifically address marijuana use, both on the job and recreationally, because it could affect the employee's job performance. It is predicted that in 2016 -- the third election cycle in which marijuana legalization measures will be on ballots across the country -- as many as seven more states could allow recreational use of marijuana. As each state approves the recreational use of marijuana, there looms in the background the knowledge that, under federal law, its use remains illegal. Whether that will eventually force the feds to take a stand remains to be seen. Right now, the feds have just rolled over to let you scratch their belly. But as each state joins the ranks of approving pot use recreationally, what was a minor irritant to the feds could grow too large for them to ignore. The bottom line is that a stoned CPA might drop a number or two, but a stoned assembly line worker might drop a few fingers. It doesn't matter if the cause is pot, alcohol or prescription medication. Smoke cannabis at work -- or show up stoned -- and you'll be disciplined. It's not about a worker's rights; it's about workplace safety.

Daniel Holden

Dan Holden is the manager of corporate risk and insurance for Daimler Trucks North America (formerly Freightliner), a multinational truck manufacturer with total annual revenue of $15 billion. Holden has been in the insurance field for more than 30 years.

Would a Formulary Help in California?

A detailed analysis of the closed drug formulary in Texas suggests that the benefits that proponents claim would not materialize in California.

Introducing a closed pharmaceutical formulary into California workers’ compensation could produce two main benefits. The first is to further lower the cost of pharmaceuticals by either restricting or eliminating certain medications. The second is to reduce the possibility of drug addiction. An October 2014 California Workers’ Compensation Institute (“CWCI”) report titled, “Are Formularies a Viable Solution for Controlling Prescription Drug Utilization and Cost in California Workers’ Compensation” states that pharmaceutical costs could be reduced by 12%, or $124 million, by introducing the Texas workers’ compensation pharmaceutical formulary. To achieve the second benefit, an assembly member introduced AB1124 to establish an evidence-based medication formulary and wrote, “The central purpose of our workers’ comp system is to ensure injured workers regain health and get back to work. When workers get addicted to dangerous medications, goals of the program are not met. An evidence-based formulary has proven to be an effective tool in other states and should be considered in California.” To confirm whether these benefits could be achieved through the introduction of the Texas formulary, a review of the CWCI study and the opioid medications available under the Texas formulary was conducted. The findings, summarized below, suggest that the answer is no. Although California does not restrict or limit medications in treating injured workers, it does limit the prices paid and provides an opportunity to question prescribed medications that appear to be out of the ordinary. Medi-Cal prices (California’s Medicaid health care program) are used for establishing the maximum prices for workers’ compensation medications, in contrast to states such as Texas, which use the average wholesale price (AWP). A review of two cost-saving examples that referenced specific medications calculated projected savings based on CWCI’s ICIS payment data for prescriptions paid between Jan. 1, 2012 and June 30, 2013. The first example compared 50mg Tramadol prices from five different suppliers. The highest was $190, followed by $23, $18, $12 and $8 per script. Here, CWCI suggested that the manufacturer of the highest-priced script be removed from the California formulary. From mid 2009 through 2013, however, the unit price for 50mg Tramadol from the supplier of brand name Ultram and at least 10 other suppliers in California was nine cents, so the AWP for a script was $2. So, overpaying for medications is an issue even if the $190 supplier is removed. The Workers’ Compensation Research Institute (WCRI) also reported that California claims administrators paid a unit price of 35 cents for 5mg Cyclobenzaprine and 70 cents for 10mg while the unit price from Californian suppliers was 10 cents for 10mg and 15 cents for 5mg. Again, the prices suggest that California claims administrators were paying more than the maximum prices. Based on randomly selected manufacturers and strengths of the top 20 medications identified in the 2013 NCCI prescription drug study, California’s prices were on average 20% lower than the AWP and in some cases as little as 1/24th the cost. California prices were found to be at the lowest retail price range compared with those published on goodrx.com. Pharmacies located in Los Angeles, Miami and Dallas were used for comparison. Findings suggested employers in California workers’ compensation are paying no more than the general public for medications, whereas in Texas employers are paying more by using the AWP.
The second example compared script prices of seven opioid agonists, including Tramadol and Oxymorphone. Oxymorphone was the highest-priced script at $600 and Tramadol the lowest at $60 per script, suggesting a saving of as much as $540 if Tramadol were to be prescribed instead of Oxymorphone. But prescribing oxymorphone when tramadol could suffice, or vice versa, could be regarded as an act of gross negligence by the physician. On the World Health Organization (WHO) analgesic ladder, tramadol and codeine are weak opioids regarded as "step two," while acetaminophen and NSAIDs are "step one." "Step three" opioids include medications such as morphine, oxycodone and oxymorphone, which all differ in their pharmacodynamics and pharmacokinetics, so choosing one or more to treat pain becomes a balance between possible adverse effects and the desired analgesic effect. Oxymorphone (stronger than morphine or oxycodone) is recommended for use only when a person has not responded to or cannot tolerate morphine or other analgesics to control their pain. A list of opioid medications published by Purdue Pharma was used to identify which opioids were excluded from the Texas formulary. The list of more than 1,000 opioid analgesics was prepared by Purdue to comply with Vermont law 33 V.S.A. Section 2005a, which requires pharmaceutical manufacturers to provide physicians with a list of all drugs available in the same therapeutic class. Being in the same class, however, does not necessarily mean the drugs are interchangeable or have the same efficacy or safety. The list showed available strengths and included (1) immediate- and extended-release forms, (2) agonists such as fentanyl, oxycodone, hydrocodone, oxymorphone, tramadol, codeine, hydromorphone, methadone, morphine, tapentadol and levorphanol and (3) combinations such as acetaminophen with codeine, oxycodone with acetaminophen, oxycodone with aspirin, oxycodone with ibuprofen, hydrocodone with acetaminophen, hydrocodone with ibuprofen, acetaminophen-caffeine with dihydrocodeine, aspirin-caffeine with dihydrocodeine and tramadol with acetaminophen. It appears that extended-release medications used for around-the-clock treatment of severe chronic pain have been excluded or are not listed in the Texas formulary, with a few exceptions. For example, 80mg OxyContin (oxycodone) ER 12-hour (AWP $18, Medi-Cal $15) is excluded. 120mg Hysingla (hydrocodone) ER 24-hour (AWP $41, Medi-Cal $34) is not listed. However, 200mg MS Contin (morphine) ER 12-hour (AWP $31, Medi-Cal $26) and the 100mcg fentanyl 72-hour transdermal patch, in both brand name and generic forms, are approved under the Texas formulary. Immediate-release generic medications such as oxycodone, hydromorphone and hydrocodone with acetaminophen in all strengths are approved, but immediate-release hydrocodone with ibuprofen, and oxymorphone in either immediate or extended release, are excluded. Would the objective of AB1124 be achieved by utilizing the Texas formulary? The above review suggests it would not. All the opioid medications available through the Texas formulary have the potential to cause addiction and be abused, possibly leading to death, either accidental or intentional. As an example, the executive director of the Medical Board of California has filed accusations against Dr. Henri Eugene Montandon for unprofessional conduct, including gross negligence. His patient was found dead with three 100mcg fentanyl patches on his upper chest.
The autopsy revealed he potentially had toxic levels of fentanyl, codeine and morphine in his bloodstream at the time of death. These three opioids are available under the Texas formulary. An article published on the website www.startribune.com described the challenges in treating soldiers returning from combat duty. The article discusses Zach Williams, decorated with two Purple Hearts, who was found dead in his home from a fatal combination of fentanyl and venlafaxine, an antidepressant. Venlafaxine in both immediate- and extended-release forms is approved in the Texas formulary. In addition, the following statement was made in a 2011 CWCI study into fentanyl: "Of the schedule II opioids included in the Institute's study, the most potent is fentanyl, which is 75 to 100 times more powerful than oral morphine." The top 20 medications identified by the 2013 NCCI prescription drug study were also compared with the Texas formulary, and six medications were found to be excluded, including three extended-release opioids: OxyContin (oxycodone), Opana ER (oxymorphone) and the once-daily Kadian ER (morphine). The twice-daily, extended-release morphine MS Contin, however, was approved. Flector, a non-steroidal anti-inflammatory transdermal patch used for acute pain from minor strains and sprains, was excluded, as was carisoprodol, a muscle relaxant classified by the DEA as a Schedule IV medication (the same as tramadol). The lidocaine transdermal patch, a local anesthetic available in both brand name and generic forms, was also excluded. Lidocaine patches have been found to assist in controlling pain associated with carpal tunnel syndrome, lower back pain and sore muscles. Apart from carisoprodol, it would appear the remaining five were excluded from the Texas formulary because of their high price rather than concerns about their safety or potential for abuse.
The U.S. Food and Drug Administration (FDA) is responsible for the approval of all medications in the U.S. Its approved list is the U.S. pharmacy formulary (or closed formulary). California workers’ compensation uses this list for treatment and the Medi-Cal formulary for medication pricing. In comparison, Texas workers’ compensation uses its own formulary, which is a restricted list of FDA-approved medications, and pays a higher price for approved medications than California's system does. Implementing an evidence-based formulary, such as in Texas, may result in an injured worker's not having the same choice of medications as a patient being treated for pain under California’s Medicaid healthcare program. How can this be morally justified? Will we see injured workers paying out-of-pocket to receive the medications necessary to control their pain? Claims administrators can greatly reduce pharmaceutical costs through their own initiatives by (1) ensuring that they pay no more than the Department of Industrial Relations (DIR) published price for a medication, (2) ensuring that physicians within their medical provider network (MPN) treat pain using the established pharmacological frameworks such as the WHO analgesic ladder, (3) ensuring that quantities and medication strengths are monitored, along with how a person has responded to analgesics, (4) ensuring that, when controlling pain with opioids, there is a heightened awareness for potential abuse, misuse and addiction, (5) establishing a multimodal pain management regimen including non-pharmacological therapies such as acupuncture, aerobics, pilates, chiropractic and physical therapy tailored to a person’s medical condition and, (6) for chronic pain, considering introducing an Internet-delivered pain management program based on the principles of cognitive behavioral therapy. The progress of many of these initiatives can be automatically monitored through a claims administrator’s technology solution, where a yellow or red flag is raised when prices paid exceed the legislated maximum amounts, when a pharmacological step therapy or progressive plan has been breached or when non-pharmacological therapy goals have not been achieved. Using these initiatives, as opposed to restricting specific manufacturers or medications through a closed formulary, will undoubtedly yield a far better outcome for the injured worker and lower the cost to the employer, benefiting all involved.
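As a rough illustration of the automated flagging just described, the sketch below raises a yellow or red flag when a paid price exceeds a fee-schedule maximum; the field names and the 10% tolerance are hypothetical assumptions for illustration, not part of any regulation or vendor system.

```python
# Hypothetical price-monitoring rule for a claims administrator's system:
# flag any payment above the legislated maximum price for a medication.
# The 10% "yellow" tolerance and all values below are illustrative assumptions.
def price_flag(paid: float, legislated_max: float, yellow_tolerance: float = 0.10) -> str:
    if paid <= legislated_max:
        return "ok"
    if paid <= legislated_max * (1 + yellow_tolerance):
        return "yellow"  # modest overpayment -- queue for review
    return "red"         # well above the maximum -- investigate immediately

print(price_flag(paid=2.10, legislated_max=2.00))    # yellow
print(price_flag(paid=190.00, legislated_max=2.00))  # red
```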

John Bobik

John Bobik has actively participated in establishing disability insurance operations during an insurance career spanning 35 years, with emphasis on workers' compensation in the U.S., Argentina, Hong Kong, Australia and New Zealand.

Credit Reports Are Just the Beginning

Monitoring your credit report is an important way to detect identity theft, but it is reactive, not proactive. Protection needs to be more aggressive.

Credit reports are a vital part of identity theft discovery and resolution, from revealing fraudulent transactions to removing errors and repairing a victim's reputation. While that makes monitoring your credit reports for suspicious activity a valuable exercise, credit monitoring is reactive, not proactive. People who are considering using an identity theft protection service should make sure that the plan includes, but is not limited to, credit features, and they should pay attention to how credit reports are involved. What is a credit report? A credit report is a record of credit history that includes inquiries and information on things like credit cards, loans, bank accounts and payment history. It also includes personal information like addresses, employment history and Social Security information. Matters of public record may also appear on reports (e.g., liens or child support). Why is a credit report important? Recorded data is used to create a credit score, which speaks to a person's creditworthiness and level of financial responsibility. The higher the credit score, the better, and it all boils down to what's on the credit report. For example, a history of consistent, timely payments boosts a person's score, whereas a delinquent account or bankruptcy lowers it. To top it all off, one in four people find errors or inaccuracies on their credit reports that are hurting their credit scores. Bottom line: Reviewing reports will help people figure out what they need to fix financially to improve their credit score, so they can get a better mortgage rate, raise their line of credit, etc. How do you get a credit report? Every person is entitled to one free credit report every 12 months from each of the major credit bureaus (Experian, TransUnion and Equifax). People can request reports directly from the bureaus' websites or through annualcreditreport.com. What does a credit report have to do with identity theft protection? Credit reports play a major role in discovering and resolving identity theft; for instance, someone might review a credit report and see a line item describing a bank account or line of credit that he didn't open or authorize. The reported data could then be used to contact creditors and file fraud claims, dispute the information with credit reporting agencies and prevent further misfortunes by instituting a credit freeze and an extended seven-year fraud alert. Why should credit monitoring be included in an identity theft protection plan? Given the involvement of credit reports in finding and fixing fraudulent activity, credit monitoring is a vital feature. Credit monitoring will help catch problems faster than victims can on their own. On average, one in three identity theft cases involves financial or credit fraud, and not many people make a habit of checking their reports to see if they've been affected. In fact, one in four people have never reviewed their credit reports. Considering that more than nine million people become victims each year, just imagine how many of those people don't even know they're in danger. How do services incorporate credit features? Essentially, most products include two major features for credit monitoring. First is pulling credit reports from one or all three of the major credit reporting agencies. How often members can view or get an updated version of their credit report depends on the product: some are every 60 days, each quarter, once a year, etc.
Keep in mind that getting all three reports through a protection plan generally costs more for the member, and it’s often limited to once a year. The second part is monitoring changes to your credit report(s), as well as monitoring credit activity that eventually ends up on the credit report. Members receive alerts for anything suspicious. Credit features may or may not include assistance from a Certified Credit Report Reviewer (CCRR), Certified Credit Counselor (CCC), Certified Identity Theft Risk Management Specialist or the like, as well as resources like letter templates for security freezes, victim statements, disputes and claim forms. What to be aware of… Credit monitoring is reactive, not proactive. If fraudulent activity shows up in a credit report, it means the fraudster already has that person’s information and is using it. Reviewing credit reports is just one part of finding and fixing fraud. There are so many other steps that must be taken once evidence of identity theft has been found – so much that a protection plan can’t be limited to annually reviewing credit reports. What's the answer? The answer is a comprehensive identity theft protection plan (such as we provide) that includes credit monitoring with real-time alerts, access to personal advisers and a secure personal dashboard with the member’s credit report and score. Members should be able to see their dashboards 24/7 and refresh their credit reports every 60 days. Features should include monitoring high-risk transactions involving a member’s Social Security Number, monitoring personal identifiable information and monitoring credit and debit card numbers in online and national databases. Available resources should include lost purse/wallet assistance, an emergency response kit, forms and letter templates.

Brad Barron

Brad Barron founded CLC in 1986 as a manufacturer of various types of legal and financial benefit programs. CLC's programs have become the legal, identity-protection and financial assistance component for approximately 150 employee-assistance programs and their more than 15,000 employer groups.

Digital Is Not Enough; Nor Is Paperless

Don't think about just using insurance technology to connect to your customers. Think about connecting your risk management team.

The service of risk management within insurance companies needs to innovate. Today, a small fraction of commercial customers take advantage of risk management services provided by insurance agencies. And insurance companies are fine with this, as they have a limited supply of people who can provide risk management services. But what if the same high level of risk management services could be offered to all customers of an insurance company? How would an insurance company go about offering widespread, and high-quality, risk management services?

The Solution to Better Risk Management Is Your People (Plus Technology)

Insurance agencies currently engaged in risk management services have a distinct advantage: the accumulated knowledge of their people who provide contract reviews for customers. I had this epiphany as I was reading through a slide deck titled "Innovation is almost impossible for older companies," which states: "People have acquired skills that, at moments, have given significant advantages to companies in order to prosper." Insurance agencies now must figure out how to harness the risk management skills of their people in new ways. The alternative is scary for my insurance professional friends, because someone else -- someone with new technology and a new supply of risk management knowledge -- will figure it out instead. Insurance companies could quickly be out-innovated, as happened to the taxi industry. For some time, the taxi industry had skills that allowed it to prosper. Taxi companies used technology and money to set up phone numbers that could be called to request a ride; these companies also stockpiled just enough cars and drivers to meet the minimum level of demand. But then Uber came along and created a better technology that connected riders to a different (and bigger) pool of drivers. The taxi industry got out-innovated. Insurance agencies are composed of people who have acquired risk management skills. My friends in the industry can review contracts with the best of them. But each of them has a limited capacity to complete contract reviews based on hours in the day. So not all customers get risk management services (either because they don't know about them or don't want to pay for them). A technology will come along that will expand the supply of risk management services. One insurance consultant thinks that technology will be a computer avatar that analyzes and predicts risks independently. I think the idea of an independently functioning risk management avatar is misguided. I am reminded of a quote from Zero to One, written by PayPal founder Peter Thiel: "Better technology in law, medicine and education won't replace professionals; it will allow them to do even more."

Better Technology Will Allow Insurance Professionals to Do More

I continue to be drawn to the word "collaboration" as I envision the future of insurance technology. Recently, I spent time evaluating software solutions in the insurance industry. All of the solutions I reviewed are focused on step one, what I call "Make it Digital." Only within the last five to 10 years have insurance carriers and agencies gone paperless, and the insurance software companies are filling this need. Digital is not enough. Paperless is not enough. Insurance technology must connect people and the knowledge that they create. Don't think about just connecting to your customers. Think about connecting your team.
Imagine if your entire risk management team could work as a living, breathing entity to assess and evaluate risk. When Agent Jim in Kansas City has a question about liquidated damages in Texas, he should be able to quickly identify work completed by Agent Bob in Dallas dealing with this exact issue. He can then evaluate the work and bring Bob in on any follow-up questions. I have yet to find an insurance carrier or agency that has figured this out. This is where the opportunity lies in insurance technology: collaboration.

How Quote Data Can Optimize Pricing

Insurers must follow the lead of airlines and retailers and use quote data to fine-tune prices and features based on each customer's situation.

Retailers do it. Auto dealers do it. From wholesale parts suppliers to craigslist sellers and kids with lemonade stands, everyone knows that if you are going to take the trouble to sell something you should sell it for its full value. Many insurers, however, are stuck within semi-fixed pricing models that don’t allow them to capture the most profit they can from each policy. Today, insurers can change that because they have the ideal vehicle to help them optimize pricing and improve their margin — quote data. Quote data, when analyzed and tested on a continual basis and kept within the boundaries of the rate filing, can yield dramatic insights into purchase patterns and price tolerance. Plus, optimizing price with quote data is an analytics concept that will excite nearly everyone in the organization. Why should insurers consider using quote data to modify pricing or products? Insurers have actuarial models and underwriters who understand the market, plus they have rate plans that have already been filed for specific products. Quote data is ripe with excellent, relevant insights. The reason we see Google, Overstock and Amazon dipping into insurance quoting is because they grasp the potential in marrying purchase pattern data with price testing. For insurers, quote data tested against purchase patterns is a gold mine waiting to be tapped. What do insurers have to gain?
  • New data yields new insights and can result in new decisions. (The ability to analyze multiple risk factors, even at the quote stage, is improving.)
  • Insurers can decide to charge more based on what they learn.
  • Insurers can decide to go after lower-margin, high-quality business.
  • They can go after low-margin, high-efficiency business.
  • They can identify business that they don’t really want.
  • They can answer the competitive threats of new entrants that are poised to capture an increasing share of the market.
Is optimization the right way to make decisions? For the most part, the days of “from the gut” decisions are over. Human brains are predictable enough that they can be mined for decision data and yield well-patterned insights across similar individuals with similar decision patterns. Amazon, Pandora and Google can effortlessly predict a consumer’s next areas of interest and likely purchases without the individual ever telling them anything. The messages we receive from nearly everywhere are “optimized” because they are proven to most likely produce a positive reaction from us. Optimization is data science that works. Pricing is the second step of optimization; it concerns itself with how much a certain type of prospect will pay at that point in time through that particular channel. As an example, consider a couple purchasing a boat two days before Memorial Day weekend. They are in the showroom using a quote aggregator on her mobile phone. They may be willing to pay more for insurance because of the need to move through underwriting quickly. Quote data over time may also prove that two boater certification questions need to be added to the quote process for first-time boat purchasers to keep the product profitable, either through adjusting price or filtering out applicants. Insurers have a leg up on traditional online retailers because prospects do tell us something about themselves before they purchase, to get an accurate price. This kind of pricing optimization isn’t limited to online purchases. It can be done through agency channels and even through traditional direct mail. But the best data accessibility and ability to test is through website and mobile channel metadata. How insurers optimize price — finding opportunities among the limits. There are several areas for insurers to consider when optimizing through quote metrics. First, insurers should be tracking every bit of data and metadata surrounding the application. Every submission document has the bits of a consumer story to tell. For example, how many days is it until renewal? Is a client making a last-ditch effort to get better auto pricing with you before turning elsewhere? Is a prospect shopping around in the last week before her home policy auto-renews? How many apps are coming through a particular channel in a particular day? All of these questions and many more could lead to pricing revisions based upon consumer behavior in the application process. Next, insurers should become highly adept at A/B testing. Consider variables as levers and raise and lower them to reach their limits, then continue monitoring and adapting. For example, begin with quote take-up rates on all submissions. Insurers should consider testing the limits available to the market. Do take-up rates improve when limits are raised? Website metadata can be informative in this regard, as well. What pages do consumers visit and when? Is there a standard path for the person who seems to rush through shopping, quoting and purchasing? Can the insurer raise the price for those who seem to decide quickly in their first visit and lower it for someone who has come back to the site repeatedly, conceivably price shopping? There are hurdles, however. Price testing must be done within the boundaries of the filing and the specific products. Some pricing changes may be able to be implemented immediately, but many will need to go back through the filing process. 
Pricing always has to happen within the regulatory box, so what is possible in testing may not always be feasible in pricing. But pricing optimization is only one part of the A/B testing equation when it comes to quoting. Quoting data can also be used to more finely tune risk factors and their relationship to take-up rates and claims. This kind of profit optimization is just as critical as pricing optimization, and it requires no regulatory refiling. It is data that can be fed back into actuarial models and may ultimately be useful when used in conjunction with mobile telematics data and a host of other data sources. Even if an insurer planned no immediate repricing of products, the ability to understand price tolerance based upon other quote factors (e.g. age, income, take-up rates, property value) would be helpful in the development of new products. The nuts and bolts of pricing optimization will vary with each insurer’s unique quote process and current market. But the promise it holds is not only a better overall margin per policy, but also the potential to grow volume through unexplored insights and the opportunities to deeply understand individuals, groups and their motivations to purchase insurance. Consumer data analytics is here to stay. The value in quote data is continuing to grow.
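To make the A/B-testing idea concrete, here is a minimal sketch that compares quote take-up rates between two price variants with a simple two-proportion z-test; the counts are invented for illustration, and any real test would still have to stay inside the filed rate plan.

```python
import math

# Hypothetical A/B test on quote take-up rates: did price variant B convert
# differently than control variant A? All counts are invented for illustration.
quotes_a, sales_a = 5000, 400   # control: quotes issued / policies bound
quotes_b, sales_b = 5000, 455   # test variant: quotes issued / policies bound

p_a, p_b = sales_a / quotes_a, sales_b / quotes_b
p_pool = (sales_a + sales_b) / (quotes_a + quotes_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / quotes_a + 1 / quotes_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided normal approximation

print(f"Take-up A: {p_a:.1%}, take-up B: {p_b:.1%}, z = {z:.2f}, p-value = {p_value:.3f}")
```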

John Johansen

John Johansen is a senior vice president at Majesco. He leads the company's data strategy and business intelligence consulting practice areas. Johansen consults to the insurance industry on the effective use of advanced analytics, data warehousing, business intelligence and strategic application architectures.

How to Captivate Customers (Part 4)

When insurers get things right and captivate customers, they see a 34% increase in customer retention and a 37% rise in satisfaction.

ITL Editor-in-Chief Paul Carroll recently hosted a webinar on "Captivating Customers With All-Channel Experiences," featuring experts from Capgemini and Salesforce.com and the former chief customer experience officer at AIG. To view or listen to the webinar, click here. For the slides, click here. To see how important it is to provide a seamless, multi-channel experience that will captivate customers, look at our experience with a large North American property and casualty company. Revenue was falling. Too many customers were leaving. Customer service and the overall customer experience were lacking. Antiquated systems -- both those facing the customer and the back-end, legacy infrastructure -- needed to be modernized. The company began a multi-year transformation, starting with its auto insurance business unit, and then expanded to other areas. With our help, the company designed and deployed a "Quote to Card" capability across multiple channels. The solution provides real-time information by integrating internal and third-party systems. The insurer is now able to complete the "end-to-end" quoting process (build/rate/bind a quote) for both the direct-to-customer channel and the agent channel in a much more efficient and elegant manner. The insurer incorporated a rich analytics component. As a result, it can perform robust online analytics, capturing information such as time spent by a prospect on the site, analyzing when and why a prospect is abandoning the quote process, etc. The insurer can also personalize the user experience, using results from the analytics platform coupled with advanced techniques such as caching and multivariate testing. Subsequently, the insurer added self-service capabilities for customers to conduct billing activities such as reviewing their account summary, paying bills, viewing payment history and updating personal profiles and other information. As a result of these initiatives, the insurer is now able to create a 360-degree view of its customers across sales and service. There has been a 34% increase in customer retention and a 37% increase in customer satisfaction. Meanwhile, costs are dropping. Average times for handling issues are dropping at call centers. Less time is needed to train agents, and their productivity is up 40%. More customers are using self-service channels. Fraud is also declining because the insurer can, for instance, see when people are trying to game the process by fiddling with numbers to get a better quote. Additional capabilities are still being added as part of the multi-year transformation road map. This is the fourth in a series of four articles adapted from the Capgemini white paper "Cloud-Enabled Transformation in Insurance: Accelerating the Ability to Deliver Exceptional Customer Experiences." The other articles are here, here and here. For the full white paper, click here.

Bhuvan Thakur

Bhuvan Thakur is a vice president within the Enterprise Cloud Services business for Capgemini in North America, UK and Asia-Pacific. Thakur has more than 18 years of consulting experience, primarily in the customer relationship management (CRM) and customer experience domain.


Jeffery To

Jeff To is the insurance leader for Salesforce. He has led strategic innovation projects in insurance as part of Salesforce's Ignite program. Before that, To was a Lean Six Sigma black belt leading process transformation and software projects for IBM and PwC's financial services vertical.

Are Market Cycles Finally Ending?

Market cycles are diminishing greatly because sophisticated analytics let insurers price risks individually, not based on market psychology.

The property/casualty industry has been characterized by its market cycles since… well, forever. These cycles are multi-year affairs, where loss ratios rise and fall in step with rising and falling prices. In a hard market, as prices are rising, carriers are opportunistic and try to "make hay while the sun shines" – increasing prices wherever the market will let them. In a soft market, as prices are declining, carriers often face the opposite choice – how low will they let prices go before throwing in the towel and letting a lower-priced competitor take a good account? Many assume that the market cycles are a result of prices moving in reaction to changes in loss ratio. For example, losses start trending up, so the market reacts with higher prices. But the market overreacts, increasing price too much, which results in very low loss ratios, increased competition and price decreases into a softening market. Lather, rinse, repeat. But is that what’s really happening?

What’s Driving the Cycles?

Raj Bohra at Willis Re does great work every year looking at market cycles by line of business. In one of his recent studies, a graph of past workers' compensation market cycles was particularly intriguing. This is an aggregate view of the work comp industry results. The blue line is accident year loss ratio, 1987 to present. See the volatility? Loss ratio is bouncing up and down between 60% and 100%. Now look at the red line. This is the price line. We see volatility in price, as well, and this makes sense. But what's the driver here? Is price reacting to loss ratio, or are movements in loss ratio a result of changes in price? To find the answer, look at the green line. This is the historic loss rate per dollar of payroll. Surprisingly, this line is totally flat from 1995 to the present. In other words, on an aggregate basis, there has been no fundamental change in loss rate for the past 20 years. All of the cycles in the market are the result of just one thing: price movement. Unfortunately, it appears we have done this to ourselves.
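A stylized calculation (using invented numbers, not the Willis Re data) shows how a perfectly flat loss rate per dollar of payroll still produces a cycling loss ratio once the charged rate starts to move:

```python
# Stylized illustration: hold the loss rate per $100 of payroll constant (the flat
# "green line") and let the charged rate cycle (the "red line"); the loss ratio
# (the "blue line") then cycles purely because of price. All numbers are invented.
loss_rate_per_100 = 1.50
charged_rate_per_100 = [2.50, 2.20, 1.90, 1.70, 1.90, 2.30]

for year, rate in enumerate(charged_rate_per_100, start=1):
    loss_ratio = loss_rate_per_100 / rate
    print(f"Year {year}: charged rate {rate:.2f} -> loss ratio {loss_ratio:.0%}")
```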

Breaking the Cycle

As carriers move to more sophisticated pricing using predictive analytics, can we hope for an end to market cycles? Robert Hartwig, economist and president of the Insurance Information Institute, thinks so. “You’re not going to see the vast swings you did 10 or 15 years ago, where one year it’s up 30% and two years later it’s down 20%,” he says. The reason is that “pricing is basically stable…the industry has gotten just more educated about the risk that they’re pricing.” In other words, Hartwig is telling us that more sophisticated pricing is putting an end to extreme market cycles. The “what goes up must come down” mentality of market cycles is becoming obsolete. We see now that market cycles are fed by pricing inefficiency, and more carriers are making pricing decisions based on individual risks, rather than reacting to broader market trends. Of course, when we use the terms “sophisticated pricing” and “individual risk,” what we’re really talking about is the effective use of predictive analytics in risk selection and pricing.

Predictive Analytics – Opportunity and Vulnerability in the Cycle

Market cycles aren't going to ever truly die. There will still be shock industry events, or changes in trends that will drive price changes. In "the old days," these were the catalysts that got the pendulum to start swinging. With the move to increased usage of predictive analytics, these events will expose the winners and losers when it comes to pricing sophistication. When carriers know what they insure, they can make the rational pricing decisions at the account level, regardless of the price direction in the larger market. In a hard market, when prices are rising, they accumulate the best new business by (correctly) offering them quotes below the market. In a soft market, when prices are declining, they will shed the worst renewal business to their naïve competitors, which are unwittingly offering up unprofitable quotes. Surprisingly, for carriers using predictive analytics, market cycles present an opportunity to increase profitability, regardless of cycle direction. For the unfortunate carriers not using predictive analytics, the onset of each new cycle phase presents a new threat to portfolio profitability. Simply accepting that profitability will wax and wane with market cycles isn't keeping up with the times. Though the length and intensity may change, markets will continue to cycle. Sophisticated carriers know that these cycles present not a threat to profits, but new opportunities for differentiation. Modern approaches to policy acquisition and retention are much more focused on individual risk pricing and selection that incorporate data analytics. The good news is that these data-driven carriers are much more in control of their own destiny, and less subject to market fluctuations as a result.

Bret Shroyer

Bret Shroyer is the solutions architect at Valen Analytics, a provider of proprietary data, analytics and predictive modeling to help all insurance carriers manage and drive underwriting profitability. Bret identifies practical solutions for client success and opportunities to bring tangible benefits from technical modeling.