Cyber Insurance at Inflection Point

What happens next will depend on how clearly underwriters, brokers and insurance buyers commit to building resilience.

KEY TAKEAWAYS:

--Irresponsible competition, often driven by a desire to boost market share, is forcing prices down and softening terms and conditions for cyber policies. A softening market seems like good news for insurance buyers but inevitably leads to volatility in insurance rates and constrictions in coverage. This kind of rubber-band effect, with pricing that stretches and snaps back, destabilizes the market and removes risk transfer options for buyers and their risk advisers.

--What buyers, as well as carriers and brokers, should work toward is stability in rates and certainty on coverage, through a focus on improving cyber hygiene and increasing resilience.

----------

The impact of supply and demand on product pricing is a well-established economic principle – when supplies are high and demand is reduced, prices tend to fall. When it comes to cyber insurance coverage, this principle also applies, but there are good reasons that it shouldn’t.

Irresponsible competition, often driven by a desire to boost market share, is forcing prices down and softening terms and conditions for cyber policies. This is classic behavior that causes global market cycles in property and casualty insurance, and it has played out repeatedly in the past three decades. But this behavior ignores a bigger problem: Cyber is not a cyclical risk.

Businesses and the insurance industry find themselves at a turning point in the evolution of cyber risk management. What happens next will depend on how clearly underwriters, brokers and insurance buyers around the world see the risk that cyber events pose, and how committed they are to building resilience against this threat, thus ensuring a stable supply of coverage for the long term.

Why this turning point matters now

A softening market, in which prices fall and coverage terms relax, seems like good news for insurance buyers. This kind of market is especially welcomed by organizations that have experienced a market correction, which occurred in cyber insurance in 2020 and 2021 as ransomware attacks surged and loss ratios soared. Rate relief and easy capacity after a few years of steep increases can seem like a gift to buyers.

Unfortunately, the joy of short-term gain is almost always followed by longer-term pain. A soft market ultimately hurts policyholders because it inevitably leads to volatility in insurance rates and constrictions in coverage. This kind of rubber-band effect, with pricing that stretches and snaps back, destabilizes the market and removes risk transfer options for buyers and their risk advisers. It also isn’t limited to only one geography; this cyclical activity occurs in the U.S., Canada, the United Kingdom and across Europe.

Insurance pricing is intended to reflect the risks insurers assume in offering coverage. When risk is accurately priced, buyers gain valuable protection and insurers can achieve profit, which helps to keep the marketplace stable. It’s difficult for risk managers and cybersecurity professionals to explain to their executive teams why insurance costs and availability go up and down, and even more challenging to budget for that volatility.

In a world of cyber risk, stability and certainty are better for everyone. But irresponsible pricing and a lack of underwriting discipline undermine stability. Cyber risk remains intense, as the NetDiligence Cyber Claims Study 2022 and Resilience’s own 2022 Claims Report demonstrate. Since 2018, NetDiligence has found that the average recovery expense following a ransomware or malware attack has steadily increased for both small and medium-size enterprises (SMEs) as well as large companies.

An analysis of claims received by Resilience shows three major trends carrying forward from 2022 into 2023: the resurgence of ransomware; inadequate attention to common critical points of failure that lead to loss, such as phishing; and an increased focus on financial transfer fraud and third-party vendors instead of extortion-based cybercrime. In fact, Resilience saw a 300% increase in ransomware claims from the last two quarters of 2022 to the first quarter of 2023. 

If cyber risk is not declining, why should underwriters weaken their pricing, terms and conditions? The risk landscape in cyber suggests they should be doing the opposite.

See also: Cybersecurity Standards for Insureds Are a Must

What the industry should do next

When insurance underwriters, brokers and the customers they serve arrive at an inflection point, they face a choice: think and act strategically, or opt for short-term results that probably won't last. What the industry should do next, therefore, is take the following steps:

  • Reassess cyber risks and exposures. Some organizations have greatly improved their cybersecurity and thus enhanced their risk profile, so they might well merit a reduction in rates or access to greater coverage limits.
  • Maintain responsible pricing, terms and conditions that align with the customer’s risk. This approach puts the client’s interest ahead of short-term gains, which can lead to strong, long-term business relationships.
  • Focus on building cyber resilience. Effective cyber resilience requires quantifying an organization’s cyber risk and then implementing a combination of good cyber hygiene, protection and insurance that aligns to the risk. Connecting organizational silos in finance and security is foundational to building effective long-term resilience to cyber threats.
  • Change the mindset about cyber exposure. The cyber insurance marketplace has the tools, talent and data to shift its mindset from “price and pay” incident claims to “predict and prevent” cyber events. Resilience’s 2022 Claims Report found that despite reports of new threat actors and vulnerabilities, practicing cybersecurity fundamentals with cyber resilience as an investment strategy leads to significantly better outcomes for organizations and their insurers.
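Quantifying cyber risk, as the resilience step above calls for, is often approached through annualized loss expectancy. The following is a minimal sketch of that calculation; the dollar figures and frequency are illustrative assumptions, not data from any insurer.

```python
# Annualized loss expectancy (ALE) sketch: a common starting point for
# quantifying cyber risk. All figures below are illustrative assumptions.

def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected annual loss from one threat scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Example: a ransomware event estimated to cost $500,000 per incident,
# expected roughly once every four years (ARO = 0.25).
ale = annualized_loss_expectancy(500_000, 0.25)
print(f"Estimated annual exposure: ${ale:,.0f}")  # $125,000
```

An ALE figure like this gives finance and security teams a shared number for deciding how much to spend on controls versus how much risk to transfer through insurance.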

The current inflection point in cyber doesn’t have to destabilize the risk transfer market. Instead, it can be a turning point for greater partnerships – especially cooperation and collaboration between government and private-sector entities. It can be an opportunity to improve customer engagement and value and ease capacity restraints that deprive organizations of adequate coverage.

Most of all, this turning point can lead to a deeper commitment to cyber resilience.


Mario Vitale


Mario Vitale is president of Resilience, a cyber risk solution company.

Resilience was founded in 2016 by experts from across the highest tiers of the U.S. military and intelligence communities and augmented by prominent leaders and innovators from the insurance and technology industries. 

The Crisis in Flood Insurance

We may finally see consumers start to change their behaviors, either leaving risky areas or fortifying their homes and businesses.  


The flood insurance crisis in the U.S., which has been described as a "slow-moving hurricane," has made landfall, hitting Louisiana especially hard.

Rising premiums, to reflect soaring claims from natural catastrophes, are now hitting consumers hard enough that Louisiana and nine other states have sued to block increases in national flood insurance rates. Those increases are limited to "only" 18% a year but could eventually total more than 700% for many homeowners and businesses and cause an exodus from southern Louisiana, according to testimony at a hearing last week. 

Politicians can be expected to use regulation to protect consumers -- also known as voters -- as long as possible, but, beyond some short-term issues, insurers can't be forced to lose money. Government officials may also decide to subsidize homeowners' insurance policies, but that isn't a long-term strategy, either. Those taxpayers whose flood insurance premiums stay the same or even decline will resist subsidizing those who choose to live with greater risks.

Something has to give. We may finally see consumers start to change their behaviors, either leaving risky areas or fortifying their homes and businesses.  

The stats show why push has finally come to shove. Swiss Re reports, "From 2017 onwards, average annual insured losses from natural catastrophes have been over USD 110 billion, more than double the average of USD 52 billion over the previous five-year period.... In the coming decade, hazard intensification will likely play a bigger role as well as higher loss frequency and severity due to climate change." The number of $1 billion natural disasters (adjusted for inflation) has increased steadily each decade, from 3.3 a year in the 1980s, to 5.7 in the 1990s, 6.7 in the 2000s, 13.1 in the 2010s and 20 a year thus far in the 2020s. 

As a result, according to the New Orleans Times-Picayune, "The average Louisiana community is projected to see 134% increases for single-family homes, but communities especially prone to flooding will see much steeper hikes. One ZIP code in Plaquemines Parish is projected to see the highest increase in the nation, at 1,098%."

Those kinds of increases will get your attention.

An article in the Atlantic does a nice job of describing the tension that results. The headline is: "What Your Insurer Is Trying to Tell You About Climate Change." The subhead is even more to the point: "Insurers are trying to send a message. The government is trying to suppress it."

The article says the federal government generally discourages using its aid "to fundamentally alter how individuals behave, let alone how local and state governments function. In addition, after the largest disasters, Congress will typically approve multibillion-dollar relief funds, as it recently did after Hurricane Ian in Florida.... Protecting people in harm’s way is, I would argue, an essential part of the government’s job. But public officials are also shirking their responsibility to not leave communities vulnerable again and again."

In the short to medium term, insurers will catch loads of grief for raising premiums and canceling policies, and regulators and legislators may not be a lot of help. In California, for instance, legislators recently tried to tackle two obvious problems that have caused many insurers to stop writing homeowners insurance in the state -- insurers aren't allowed to include the cost of the reinsurance they purchase and can only use historical data for underwriting even as natural catastrophes increase in frequency and intensity -- but couldn't agree on a solution before adjourning until January.

In the long run, I think the real consequences will be borne by consumers and, to a lesser extent, taxpayers (groups that obviously overlap quite a bit). Insurers have to be able to price policies based on the risk involved, so consumers either need to reduce the risk or government needs to subsidize those risks if premiums are to remain affordable.

Consumers, in particular, can do a lot to reduce risk -- but not inexpensively. With new construction, it's fairly easy to avoid high-risk areas or to build on higher ground, to raise the elevation of the living quarters in a house, to build with materials that resist wildfire and high winds, to keep flammable landscaping farther from structures and so on. But the vast majority of homes, apartments and office buildings aren't new. They're decades old, often many decades old. So retrofitting involves a complicated calculation based on the cost and on the benefits from the reduction in risk.
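The retrofit calculation described above can be sketched as a simple present-value comparison. The numbers below are illustrative assumptions, not actuarial data: the point is the structure of the trade-off, upfront cost against discounted avoided losses.

```python
# Illustrative retrofit cost-benefit sketch (all numbers are assumptions):
# compare the upfront cost of fortifying a home against the expected
# reduction in annual flood losses over a planning horizon.

def retrofit_net_benefit(retrofit_cost: float,
                         annual_loss_before: float,
                         annual_loss_after: float,
                         years: int,
                         discount_rate: float = 0.05) -> float:
    """Present value of avoided expected losses minus the retrofit cost."""
    avoided = annual_loss_before - annual_loss_after
    pv = sum(avoided / (1 + discount_rate) ** t for t in range(1, years + 1))
    return pv - retrofit_cost

# Example: a $40,000 elevation project cuts expected annual flood loss
# from $5,000 to $1,000, evaluated over 30 years at a 5% discount rate.
net = retrofit_net_benefit(40_000, 5_000, 1_000, 30)
print(f"Net benefit: ${net:,.0f}")
```

A positive net benefit suggests the retrofit pays for itself over the horizon, before even counting any premium reduction an insurer might offer.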

Where insurers can help is by providing information. Insurers know a lot about risk, but, at the moment, they give policyholders pretty blunt feedback. We offer to renew, or we won't renew. We will renew, but with a premium increase of XX%. 

To the extent possible, insurers should tell policyholders, "We aren't renewing because...." Better yet, "Your premium is increasing XX% because... but you can reduce that increase if you do X, Y and Z." 

The lead example in the piece in the Atlantic has expertise in environmental issues and took an extensive series of measures that meant her house near Yosemite National Park passed "defensible space" inspections recommended by the state fire department -- but Allstate still canceled her policy and didn't tell her why. 

Imagine if insurers could coach homeowners, and their communities, about specific things they could do to reduce their risk and, in the process, lower premiums.

That wouldn't be a panacea. There will still be loads of short-term issues as consumers and their protectors in government try to minimize increases in premiums and as insurers pull out of markets or decline to renew policies, based on prudent business discipline. Some of the recommendations would cost more than consumers or taxpayers are willing to pay. But at least we could help consumers and governments understand the realities that we're all facing because of climate change and could help define a realistic future. 

Cheers,

Paul

 

A Secret Weapon Against Claims Inflation

An active, efficient accident management program can save hundreds of dollars per claim and potentially cut days off a claim’s cycle time.

KEY TAKEAWAY:

--An accident management expert can help expedite the collision claims management process to mitigate financial impacts, such as storage fees, secondary tows, rental costs and more.

----------

Managing the costs associated with an accident claim has never been more important, considering today’s challenging economic climate. Amid inflation, supply-chain disruptions and a labor shortage, auto insurers also face pressure from surging storage costs and an increasingly complex process of matching vehicles needing repair with facilities that have both the capacity and the capability to fix them. These challenges result in higher costs for insurance carriers and lengthier wait times for policyholders. 

The unfortunate truth is that accident frequency and severity have been increasing over the past several years, driving up loss costs. The cost impact from this trend is even greater when policyholders delay reporting the loss to their carrier, yet only 9% to 13% of policyholders report first notice of loss (FNOL) from the accident scene, according to Agero.

While some costs are unavoidable, ensuring that carriers have an active and efficient accident management program can help them save hundreds of dollars per claim and potentially cut days off a claim’s cycle time. Leveraging the assistance of specialized accident management experts can provide insight into how to improve efficiency and performance to reduce (or even eliminate) costs coming down the pike.

Let’s take a look at how an accident management expert can help improve claims management in the face of myriad industry challenges.

Navigating pitfalls and macro challenges 

First and foremost, an accident management expert can help expedite the collision claims management process to mitigate financial impacts, such as storage fees, secondary tows, rental costs and more. To lessen these impacts, an accident management expert focuses on the following three areas: 

1. Recovering the vehicle from the accident scene

Recovering the vehicle from the accident scene is critical to minimizing loss costs and expediting claims cycle times. However, reporting an accident to an insurer from the scene can be incredibly difficult for policyholders. The time immediately following an accident is particularly challenging, as drivers may have to manage multiple high-stress situations simultaneously. These include assessing damage to their vehicle and any other vehicles involved, triaging potential injuries and navigating what might be a dangerous situation. Given the potentially chaotic nature of the moments following accidents, it is not surprising that most drivers fail to notify their insurance carriers while still at the accident scene. The delay costs their carriers an extra $800 to $1,025, on average, per claim, according to Agero’s analysis of secondary tow costs from the first half of 2023.  

See also: 5 Ways Generative AI Will Transform Claims

2. Mitigating downstream costs

Unfortunately, capturing vehicles at the accident scene is not always an option, despite best efforts. As a result, insurers and policyholders assume the additional costs of storage and secondary tows that can increase claims by hundreds of dollars above the cost of simply performing one tow from the accident scene to the desired repair shop or salvage yard. Mitigating these downstream costs is where an accident management expert can make a meaningful difference.

An expert can recommend steps to streamline the FNOL process, such as by identifying and recommending a digital FNOL option for immediate accident reporting. Digital solutions can benefit both the insurer and the policyholder. For instance, mobile telematics can automatically detect and report a crash, saving insurers hundreds of dollars in loss costs and helping policyholders report their losses from the accident scene.
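The crash-detection capability mentioned above can be illustrated with a minimal sketch. The 4g threshold and the data format are assumptions for illustration, not Agero's actual telematics algorithm: production systems combine many more signals (speed, GPS, sound) to avoid false positives.

```python
# Minimal crash-detection sketch of the kind mobile telematics apps use.
# The threshold and sample format are illustrative assumptions.

def detect_crash(accel_samples_g, threshold_g=4.0):
    """Return True if any accelerometer sample exceeds the crash threshold.

    accel_samples_g: iterable of (x, y, z) readings in g-forces.
    """
    for x, y, z in accel_samples_g:
        magnitude = (x * x + y * y + z * z) ** 0.5
        if magnitude >= threshold_g:
            return True  # would trigger an automatic FNOL report
    return False

# Normal driving vs. a hard impact:
print(detect_crash([(0.1, 0.2, 1.0), (0.3, 0.1, 1.1)]))  # False
print(detect_crash([(0.1, 0.2, 1.0), (5.2, 1.4, 0.8)]))  # True
```

When a spike like this is detected, the app can prompt the driver to confirm the crash and file FNOL on the spot, capturing the vehicle before storage and secondary-tow costs accrue.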

An accident management expert will manage their own curated network of towers to deliver quality service at a reasonable cost, especially when compared with inflated retail or police tow rates. As a bonus, an accident management expert can provide real-time updates so the policyholder isn’t left in the dark about when the tow truck will arrive.

3. Combating body shop refusals

Body shop refusals have been a growing problem since the start of the pandemic, increasing by 187% in 2022 over the prior year, and are on track to increase an additional 34% in 2023 per Agero research. These refusals have resulted in additional tows because, once a vehicle is towed to a shop and refused, it must then be towed back to storage until another shop has availability. Each refusal results in additional tows back to storage and to the new repair destination. This leads to extended storage time and costs and increases the time that a policyholder is without their vehicle. 

An accident management expert can provide critical relief by working with insurers on how they interoperate with their body shops to mitigate these challenges and break the cycle of refusals. For instance, an accident management expert can help insurers analyze body shop refusal rates, identify regional trends and potentially manage direct repair program shops that are refusing vehicles in violation of their carrier agreements.

Findings may show that a few pre-dispatch confirmations by an insurance associate can help avoid a refusal. This information can inform best practices for confirming a shop’s capacity and determining which jobs they’re capable of repairing. As a result, agents can select the best body shop for the job and reduce the chances of a vehicle being refused, resulting in quicker repairs for the policyholder. 
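The refusal-rate analysis described above boils down to grouping dispatch outcomes by region. Here is a small sketch of that computation; the field names and records are hypothetical, not an actual Agero data schema.

```python
# Hypothetical sketch of refusal-rate analysis: compute body shop
# refusal rates by region from dispatch records. Field names and
# data are illustrative assumptions.

from collections import defaultdict

def refusal_rates_by_region(dispatches):
    """dispatches: iterable of dicts with 'region' and 'refused' keys.

    Returns a {region: refusal_rate} mapping.
    """
    totals = defaultdict(int)
    refusals = defaultdict(int)
    for d in dispatches:
        totals[d["region"]] += 1
        if d["refused"]:
            refusals[d["region"]] += 1
    return {r: refusals[r] / totals[r] for r in totals}

records = [
    {"region": "Northeast", "refused": True},
    {"region": "Northeast", "refused": False},
    {"region": "Southwest", "refused": False},
    {"region": "Southwest", "refused": False},
]
print(refusal_rates_by_region(records))  # {'Northeast': 0.5, 'Southwest': 0.0}
```

Sorting regions by refusal rate points dispatchers toward where pre-dispatch capacity confirmations would prevent the most wasted tows.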

See also: Insurtech Is at an Inflection Point

There’s never been a better time to partner with an accident management expert

An accident management expert can serve as a vital resource by helping to streamline the process, identify unknowns (such as the impact of rising primary tow rates) and foster collaboration among all parties.

The confluence of industry challenges from inflation and rising costs, labor and parts shortages and an increased volume of accidents makes it incredibly difficult for insurers to independently address the impact these issues have on collision claims. However, an accident management expert can help keep costs down and wait times low, making these events more seamless for all parties involved.


Ben Zatlin


Ben Zatlin serves as vice president and general manager of Agero's accident management business, a role he began in September 2021.

Previously, Zatlin led Agero's digital transformation to Swoop, its next-generation dispatching platform. Prior to Agero, Zatlin was a management consultant at professional services firm Deloitte and an operations engineer at life sciences company Abbott Laboratories.

He holds a bachelor’s in biomedical engineering from the University of Southern California and an MBA from Harvard Business School.

A Rush of Cyber Attacks in Australia

The attacks show how the nation is broadly unprepared -- but also how a series of tools could have prevented or at least mitigated the attacks. 


Over the past year, Australia has been the target of numerous successful cyber attacks. These attacks have affected a significant percentage of the country’s population of 24 million people -- with some individuals affected in multiple breaches. According to the Australian Cyber Security Centre’s Annual Cyber Threat Report 2021-2022, there were a staggering 76,000 cybercrime reports from July 2021 to June 2022 -- a 13% increase from the previous financial year.

In September 2022, Australian telecommunications giant Optus was hit in one of the largest data breaches in Australian history. The Optus attack constituted the first incident in a series of devastating, large-scale cyber attacks that exposed significant flaws in Australia’s national cyber resilience. 

On Sept. 23, 2022, Optus released a statement on its website and social media confirming a “significant” cyberattack against its systems. Personally Identifiable Information attributed to approximately 10 million current and former Optus customers -- around 40% of Australia's population -- was compromised, including names, birth dates, home addresses, phone numbers, emails, passport numbers and driver’s license numbers. 

The breach sent shockwaves through the nation, and the circumstances surrounding it quickly became a subject of debate. A company insider claimed human error had accidentally exposed its application programming interface (API) on a test network, providing the entry point that caused the attack. Optus rejected this claim, asserting that a highly complex and sophisticated attack had occurred, where the attacker used advanced techniques to scrape a portion of the company’s consumer database, leaving open questions about the motives and depth of the breach. On Oct. 6, the Australian federal government announced the implementation of an emergency regulation that would allow Optus to share customer information with banks and government agencies to detect and prevent identity fraud in the aftermath of the attack. 

With headlines surrounding the Optus attack still dominating the Australian news cycle, days later, on Oct. 13, another public statement regarding a second potential cyber attack shocked the nation once again. Medibank, Australia’s largest private health insurance provider, alerted the Australian Securities Exchange (ASX) that it had detected “unusual activity” on its networks, emphasizing that there was no evidence that sensitive data, including customer information, had been compromised. Medibank retracted this claim one week later, confirming that customer data had indeed been compromised in the attack.

See also: Say Goodbye to Cyber's 'Dating Profile'

On Oct. 26, Medibank revealed the scope of the customer data compromised, admitting that hackers had full access to three primary customer data categories -- AHM customer data, international customer data and Medibank customer data.‍ On Nov. 7, Medibank announced that 9.7 million customers were likely to be affected. Customers were informed that Medibank would not be paying the US$10 million ransom, despite the hackers’ threats to publish the stolen data on the dark web. The investigations that followed the breach revealed that Medibank’s systems had been accessed as a result of a compromised login credential (username and password) used by an unnamed third-party IT services provider.

In March of this year, malicious actors once again leveraged compromised credentials from a third-party vendor to breach the systems of Latitude, an Australian financial services provider. Data from 14 million customers in Australia and New Zealand was stolen. Again, this data included names, addresses, emails, phone numbers, birth dates, driver’s license numbers and passport numbers. Some records dated back to 2005, drawing scrutiny over why the company kept customer records beyond the required seven years. The company is now under investigation to determine if it took sufficient measures to prevent the attack, and there is a class-action lawsuit against Latitude for its failure to protect customer data.

The Latitude breach was swiftly followed by another attack. This time, HWL Ebsworth was the victim. In April, Russian-linked ransomware gang Alphv attacked the major Australian law firm, publishing 1.1 terabytes of the 3.5 terabytes stolen from HWL Ebsworth’s systems on its dedicated dark web leak site. At least four Australian banks were implicated by the breach -- with Westpac, NAB, the Commonwealth Bank and ANZ among the many public and private sector entities that may have had data stolen. Further, an estimated 60 departments or government agencies have used HWL Ebsworth’s services, including the Defence Department, Home Affairs, the Australian federal police, prime minister and cabinet, Services Australia and the Fair Work ombudsman.

Many other attacks have made the headlines, targeting schools and universities, hospitals and healthcare providers, government entities (including the Tasmanian government) and more. This series of large-scale attacks has led to sharp criticism of Australian government officials for their lack of cohesive cybersecurity policy. As a result, Australian Cyber Security Minister Clare O’Neil made public admissions that Australia had been in “a cyber slumber,” falling at least five years behind other developed nations regarding cybersecurity and data privacy. O’Neil, who is overseeing the overhaul of the national cybersecurity strategy, said the high-profile Optus, Medibank, Latitude and HWL data breaches are only the “tip of the iceberg” of the cyber threats facing Australia. She has invited Australians to join the “whole-of-nation effort” to bolster the country’s cyber resilience. 

Potential Causes of Concern

Several factors have been cited as contributing to Australia's relative cyber unreadiness compared with other countries.

  1. Lack of appropriate regulations and mandatory cybersecurity standards for companies holding large amounts of personal data. Unlike Europe, Australia has no overarching data protection or privacy laws with strict security and breach response requirements. The existing regulations set minimum standards that companies can meet without necessarily achieving strong security. This allows some organizations to underinvest in their cybersecurity programs and infrastructure. There are also no cybersecurity licensing requirements or mandatory external assessments of controls to encourage best practice. Furthermore, the enforcement of existing frameworks, such as the Notifiable Data Breaches (NDB) scheme, is perceived as lax, with few consequences for noncompliance, while critical infrastructure operators face limited oversight and have discretion over how they meet security obligations. Experts argue that prescriptive security-focused laws, properly enforced through auditing and penalties, are urgently needed to lift industry standards across the board in Australia. 
  2. Underinvestment in cyber defenses. Budgets allocated to cybersecurity programs by both government agencies and private organizations have fallen short of what experts recommend based on evolving threats and expanding attack surfaces. This underfunding has resulted in insufficient resources dedicated to basic but critical defensive controls like encryption, multi-factor authentication, regular security testing, patching and logging/monitoring. Australia has struggled to meet its own cybersecurity strategy target of investing 2% of GDP in cyber defenses due to inadequate budget appropriations over time. This target itself is also considered insufficient, as comparable nations spend significantly more. This wide-scale underinvestment has created shortages in defensive capabilities that are ripe for adversarial exploitation.
  3. Shortage of cybersecurity skills and talent. Despite the rapid escalation of global cyber threats, Australia has failed to produce enough skilled professionals to match the growing demand across both government and private sector organizations. Cybersecurity occupations are consistently listed in national skilled occupation shortages, yet efforts to boost the talent pool through education and training have been insufficient. Those universities and vocational programs that do offer cyber courses struggle to attract students due to a lack of industry engagement and the perceptions of limited career opportunities in Australia. Immigration pathways for global talent have also been limited, preventing firms and agencies from easily supplementing the domestic cyber workforce. 
  4. Widespread use of outdated legacy IT systems. Many large organizations, and government agencies in particular, still rely on digital infrastructures and systems that are decades old, using obsolete software and technologies no longer supported by vendors. These legacy architectures were not built with security as a primary consideration, relying on outdated protocols and lacking basic security controls. Upgrading such sprawling legacy estates is an immense logistical and budgetary challenge for organizations, due to the complex interfacing of old and new. Delaying these upgrades, however, leaves serious security vulnerabilities and exposures that attackers can readily exploit through unpatched backdoors.
  5. A misplaced focus on data sovereignty. Australia’s focus on data localization (requiring data to be stored in Australia) has discouraged offshore cloud adoption where security is generally stronger. These local data storage requirements have placed significant cost burdens on enterprises, taking funding away from cybersecurity programs and skills development. In reality, the most significant attacks typically target people/processes rather than infrastructure or location. Accordingly, these overly protectionist policies provided a false sense of security while slowing digital transformation, leaving some organizations with outdated legacy systems that are hard to defend. In today's connected digital ecosystems, where organizations increasingly leverage multiple cloud platforms for flexibility and resilience, true data sovereignty is impossible. Rather than mandating unachievable data storage models, priority should be placed on establishing robust encryption, access controls and response obligations wherever Australian data is accessed or processed.

Earlier this year, O’Neil stated that Australia must prepare for a “dystopian future” in which increasingly digitally connected cities may be “held hostage through interference in everything from traffic lights to surgery schedules.” When addressing the Sydney Dialogue conference in April 2023, she said that Australia “faced a scale and intensity in the threat landscape that far outstrips the recent cases we have seen.” 

O’Neil called out state-sponsored attackers, financially motivated cyber actors and extortionists as public enemy number one. To combat these nefarious groups and individuals, she put together a new cyber strategy, including a series of national exercises focused on protecting critical infrastructure, and aims to make Australia “the world’s most cyber-secure country by 2030.” 

Boosting Australian Cyber Resilience With Cyber Threat Intelligence Solutions 

A crucial part of O’Neil’s strategy is building a team of 100 cybersecurity specialists who will be “permanently focused on hunting down people seeking to hack our systems, and hacking back.” As with any organization’s threat-hunting efforts, rich cyber threat intelligence (CTI) that sheds light on threat actors’ activities and targets across millions of deep and dark web sites and forums will be paramount to Australia’s mission. Armed with such intelligence, the Australian government and business community can understand threat actors’ tactics, techniques and procedures (TTPs) and receive early warning at the very first indications of potential risk -- before an attack materializes. By monitoring their attack surface and preemptively implementing defensive measures to block cybercriminal efforts, Australian companies will be better equipped to reduce their overall threat exposure and protect their systems from attack.

How these high-profile attacks could have been prevented

Optus: Although the cause of the attack remains disputed, for this purpose, we will examine the incident based on the assumption that an unsecured API was the source of the breach. In this case, a solution such as External Attack Surface Management (EASM) could have helped detect and mitigate this exposure before it was weaponized.

EASM solutions continuously discover an organization's digital assets and footprint across external-facing surfaces such as public IP addresses, domains and APIs. EASM involves performing scans from an external perspective to understand how attackers view, and could potentially access, an organization's systems through exposed assets connected to its network.

Had Optus implemented EASM:

  • The API exposed to the open internet would have been discovered during external scans.
  • Its configuration without proper authentication or encryption would have been identified as a security weakness ripe for cybercriminal exploitation.
  • Optus could then have corrected the issue by requiring authenticated access and enforcing HTTPS on the API, reducing the attack surface.
  • EASM monitoring would ensure any new APIs deployed externally were also appropriately protected.
  • Valuable metadata about Optus' digital properties and dependencies would be collected, helping to discover additional high-risk vulnerabilities and exposures.

By mapping their external attack surface and identifying misconfigurations, EASM gives organizations visibility into gaps that threat actors could exploit from the internet before attacks occur. This could have helped Optus avoid such a significant breach.
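The first step of such a scan -- classifying discovered assets by whether they require authentication and encrypt traffic -- can be sketched as follows. The asset fields and example hostnames are illustrative assumptions, not any vendor's data model.

```python
def flag_exposures(assets):
    """Flag externally discovered assets with weak configurations.

    Each asset is a dict describing what an outside scanner can observe:
    'url', 'requires_auth' (bool) and 'uses_https' (bool).
    """
    findings = []
    for asset in assets:
        issues = []
        if not asset.get("requires_auth"):
            issues.append("no authentication")
        if not asset.get("uses_https"):
            issues.append("unencrypted transport")
        if issues:
            findings.append({"url": asset["url"], "issues": issues})
    return findings

discovered = [  # hypothetical scan results
    {"url": "https://api.example.com/v1/customers", "requires_auth": False, "uses_https": True},
    {"url": "http://legacy.example.com/export", "requires_auth": True, "uses_https": False},
    {"url": "https://portal.example.com/login", "requires_auth": True, "uses_https": True},
]

for finding in flag_exposures(discovered):
    print(finding["url"], "->", ", ".join(finding["issues"]))
```

An unauthenticated API like the one suspected in the Optus case would surface in the first line of output of a sweep like this.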

Medibank: The Medibank breach was the result of compromised credentials used by a trusted third-party IT services provider. Real-time cyber threat intelligence from the deep and dark web could have helped to identify this exposure and prevent the attack.

  • Initial access brokers actively trade stolen access credentials (usernames and passwords, remote desktop protocol access, etc.) on dedicated deep and dark web forums and markets.
  • Real-time deep and dark web cyber threat intelligence continuously monitors these underground platforms to identify compromised credentials the moment they are listed for sale.
  • Had Medibank harnessed cyber threat intelligence from initial access broker markets, it likely would have detected the third party's admin credentials being leaked/sold soon after theft occurred.
  • Most initial access trading happens within days or weeks of a breach. Faster detection is possible through combining Attack Surface Management solutions with CTI to receive immediate alerts of potentially compromised organizational access.
  • Once alerted, Medibank could have rapidly contacted the third party to validate, check login logs, reset credentials and reduce organizational exposure.
  • With the admin credentials changed before the attacker could purchase, leverage and weaponize the compromised access, data exfiltration may have been stopped or limited.

Early warnings of credential compromise through deep and dark web monitoring of organizational assets provide a critical window to contain breaches before significant damage occurs. By monitoring the organizational attack surface in real time across the deep and dark web -- in particular, across initial access broker marketplaces -- Medibank may have been able to detect this exposure and prevent its weaponization before cybercriminals were able to exfiltrate sensitive data belonging to approximately 10 million Medibank customers.
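The monitoring loop described above reduces, at its core, to matching broker listings against an organization's domains. The feed format, field names and domains below are invented for illustration and do not reflect any particular CTI provider's API.

```python
def match_leaked_credentials(listings, org_domains):
    """Return access-broker listings whose credentials belong to monitored domains.

    Each listing mimics an entry scraped from an underground forum:
    a 'username' (email address) and the 'listed_at' date it appeared for sale.
    """
    alerts = []
    for item in listings:
        domain = item["username"].rsplit("@", 1)[-1].lower()
        if domain in org_domains:
            alerts.append(item)
    return alerts

monitored = {"medibank.example", "it-provider.example"}  # hypothetical domains
feed = [
    {"username": "admin@it-provider.example", "listed_at": "2022-10-01"},
    {"username": "user@unrelated.example", "listed_at": "2022-10-02"},
]
for alert in match_leaked_credentials(feed, monitored):
    print("ALERT:", alert["username"], "listed", alert["listed_at"])
```

In this toy run, the third-party administrator account would trigger an alert the day it was listed, opening the response window described above.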

Latitude: The source of the Latitude payroll data breach has not yet been confirmed publicly. While official investigations continue, cybersecurity experts analyzing the case reportedly believe the attacker(s) gained initial access either through credential theft via a phishing attack targeting Latitude employees or by exploiting an unpatched vulnerability in an internet-facing Latitude application or service. If this were a case of compromised credentials, the steps Medibank could have taken would also apply here. If the cause of the breach was the exploitation of an unpatched vulnerability, vulnerability exploit intelligence would likely have equipped Latitude with the necessary insight to prioritize treatment before the exposure had been weaponized in an attack.

Had Latitude implemented vulnerability exploit intelligence:

  • Continuous scoping and discovery of their organizational attack surface, coupled with CPE-CVE matching, would have alerted Latitude to an unpatched, exposed vulnerability within their asset inventory.
  • Effective vulnerability exploit intelligence would then have helped determine the real-time risk of exploitation, considering critical factors such as the availability of exploit kits and POCs, instances of exploitation in the wild and heightened cybercriminal discussions surrounding the vulnerability.
  • With insight into cybercriminal discourse and activity across the deep, dark and clear web, and a real-time understanding of the likelihood of exploitation, Latitude would have been equipped with the early warning they needed to recognize this as an urgent, high-risk threat to their organization.
  • This preemptive intelligence would have allowed Latitude to accurately prioritize treatment, immediately patching the vulnerability or isolating the unpatched asset to mitigate the damage of exploitation before the vulnerability could be weaponized in an attack.

Armed with comprehensive visibility into their organizational threat exposure, Latitude could have likely uncovered and addressed the vulnerability much sooner -- before data theft occurred.
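The CPE-CVE matching and exploit-signal scoring described in these steps can be sketched minimally as below. The advisory fields and the toy scoring weights are invented for illustration; real exploit intelligence products weight these signals very differently.

```python
def prioritize_vulns(inventory, advisories):
    """Match asset CPEs against CVE advisories and rank by exploitation risk.

    Advisory fields are illustrative stand-ins for exploit intelligence:
    'exploit_kit' and 'in_the_wild' flags, plus a 0-10 'chatter' level
    measuring cybercriminal discussion volume.
    """
    matches = []
    for adv in advisories:
        if adv["cpe"] in inventory:  # CPE-CVE matching against the asset list
            score = adv["chatter"]
            if adv["exploit_kit"]:
                score += 10
            if adv["in_the_wild"]:
                score += 20
            matches.append({"cve": adv["cve"], "score": score})
    return sorted(matches, key=lambda m: m["score"], reverse=True)

assets = {"cpe:2.3:a:vendor:webapp:1.4"}  # hypothetical asset inventory
advisories = [
    {"cpe": "cpe:2.3:a:vendor:webapp:1.4", "cve": "CVE-0000-0001",
     "exploit_kit": True, "in_the_wild": True, "chatter": 8},
    {"cpe": "cpe:2.3:a:other:tool:2.0", "cve": "CVE-0000-0002",
     "exploit_kit": False, "in_the_wild": False, "chatter": 2},
]
print(prioritize_vulns(assets, advisories))  # -> [{'cve': 'CVE-0000-0001', 'score': 38}]
```

The highest-scoring match is the one with a live exploit kit and in-the-wild exploitation, which is exactly the "urgent, high-risk threat" the bullets above describe.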

HWL Ebsworth: The cause of the HWL Ebsworth data breach has not yet been officially confirmed publicly. However, the usual modus operandi of notorious ransomware gang Alphv, which claimed responsibility for the attack and leaked data from it, suggests that Alphv infiltrated the law firm's network via a targeted phishing email campaign. Alphv is known to use personalized phishing lures containing malware payloads disguised as legitimate files or links. The goal of these phishing emails is to install info-stealing malware on corporate devices to extract login credentials and other initial access vectors -- similar to the Medibank case.

See also: Partners and Cyber: To Affinity and Beyond!

As discussed, cyber threat intelligence can detect stolen corporate credentials offered for sale on initial access broker sites, providing early warning of exposure before the access vector is purchased and weaponized. Cyber threat intelligence can also help organizations preemptively block info-stealing malware before it has infected a corporate endpoint and compromised access to the network. 

  • Initial access broker listings typically note the stealer that was used to compromise the machine. Continuous monitoring of these and other deep and dark web sources can provide critical insight into the indicators of compromise (IOCs) associated with credential theft malware.
  • By integrating real-time, context-rich IOC intel into their security tools, HWL Ebsworth could have preemptively blocked indicators associated with known access compromise threats at the network/endpoint level before user exposure via phishing lures.
  • Intelligence on keyloggers, info-stealers, remote access Trojans and other post-intrusion tools advertised for sale on the cybercriminal underground -- including contextual attributes such as source, threat actor, malware family and confidence score -- delivers critical insight into attacker techniques to identify blind spots and harden the attack surface before exploitation.

Timely integration of a comprehensive and continuously updated feed of indicators of compromise from both open and underground sources into HWL Ebsworth's security infrastructure would have enabled the firm to preemptively block known access compromise threats, denying the vectors before phishing exposure.
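As a rough sketch, distilling such a feed into a blocklist suitable for firewall or endpoint ingestion might look like the following. The IOC fields and confidence threshold are illustrative assumptions, not a specific provider's schema.

```python
def build_blocklist(iocs, min_confidence=70):
    """Distill a context-rich IOC feed into indicators worth blocking automatically.

    Each IOC carries illustrative attributes: 'value', 'type'
    ('domain' | 'ip' | 'hash'), 'malware_family' and 'confidence' (0-100).
    Only network indicators above the confidence threshold are kept.
    """
    return sorted(
        ioc["value"]
        for ioc in iocs
        if ioc["confidence"] >= min_confidence and ioc["type"] in {"domain", "ip"}
    )

feed = [
    {"value": "stealer-c2.example", "type": "domain",
     "malware_family": "infostealer", "confidence": 95},
    {"value": "203.0.113.7", "type": "ip",
     "malware_family": "rat", "confidence": 60},  # below threshold, skipped
]
print(build_blocklist(feed))  # -> ['stealer-c2.example']
```

The confidence cutoff is the design choice that matters: too low and the blocklist disrupts legitimate traffic; too high and known stealer infrastructure slips through.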

Conclusion

The series of high-profile cyber attacks over the past year has shaken confidence in Australia's cyber resilience, highlighting the need to reassess security strategies across all sectors. However, these incidents have also provided important lessons for improvement.

Moving forward, Australia must reevaluate the outdated focus on data sovereignty, recognizing the borderless nature of the cyber threat landscape. A comprehensive, nationwide cybersecurity strategy that embraces innovation is critical, and a paradigm shift in the way that Australia conceptualizes cybersecurity is central to success. Taking inspiration from its allies in the U.S., Australia must mandate minimum security standards for companies and critical infrastructure, regularly assess compliance and strictly enforce consequences for breaches. Cybersecurity budgets must be significantly boosted to address workforce gaps and equip security teams with the tools they need to defend their systems in the face of increasingly sophisticated cybercriminals. Cyber threat intelligence and attack surface management solutions should be adopted to preemptively hunt down threats and identify weaknesses before they are exploited.

Equipped with insight into the epicenter of cybercriminal activities and discourse, security teams can confidently bolster their defenses based on a real-time understanding of threat actors, their tactics, tools, techniques and procedures and likely vectors for attack. With the right skills, resources and oversight in place, Australian businesses and government entities can substantially reduce their risk of becoming the next headline cyber incident. Most importantly, they will be better able to safeguard Australians' personal data and digital security.

By learning from these events and taking a preemptive, intelligence-led approach, Australia has a chance to emerge stronger. Now is the time for decisive action that constructs a robust security architecture for the country -- one that can withstand the cyber challenges of tomorrow.


Delilah Schwartz

Delilah Schwartz is Cybersixgill's cybersecurity strategist.

She boasts expertise in the fields of extremism, internet-enabled radicalization and the cybercriminal underground.

Innovative In-House Legal Software

Modern legal software can identify risks, categorize them based on severity and relevance and propose tailored mitigation strategies.

A desktop computer with code on it and on top of a white desk with a keyboard and pens

In-house legal software is blending cutting-edge technology with legal expertise to solve complex challenges.

The legal focus has shifted over the years, from intellectual property protection and contract drafting to data privacy, cybersecurity and fintech regulations. Risk management methods need to adapt to these changes, and our focus needs to shift from mere mitigation to strategic anticipation and navigation.

While traditional risk management methods remain relevant for issues like operational or financial risks, the legal dimension requires a more preemptive and tailored approach — and modern legal software offers solutions that are not just reactive but anticipatory.

Key Developments in Legal Software

The landscape of legal technology has undergone a seismic shift, particularly in the last year. This transformation can be attributed to several key developments that have elevated legal software from a mere tool to an indispensable asset for risk management.

  • Generative AI has become a cornerstone of modern legal technology. Its applications range from automating routine administrative tasks to sophisticated data and trend analysis. AI reduces the risk of human error by automating labor-intensive processes such as contract management. This allows legal professionals to focus their expertise on more complex, high-stakes tasks, effectively shifting the paradigm from reactive to proactive risk management.
  • Modern knowledge management (KM) software aggregates information from various departments into a single, searchable database. This collective expertise not only enhances the firm's capabilities but also provides a robust foundation for risk assessment and mitigation strategies. Leveraging the aggregated data requires serious software muscle power.
  • As mergers become more frequent, specialized technologies have emerged to manage the intricate and time-consuming process of data management during these mergers.
  • Enterprise risk management has expanded its horizons to include not just financial governance but also security, IT, third-party relationships and governance, risk and compliance (GRC) procedures. Modern legal software is increasingly becoming part of a broader technology stack that manages a diverse array of risks. 
  • The technological revolution in the legal sector has given rise to new roles such as legal technologists, legal process analysts and legal data scientists. Traditional law firms are finding their existing structures increasingly obsolete and are having to redefine their services and integrate technological capabilities.

See also: Cybersecurity Standards for Insureds Are a Must

The Power of In-house Legal Software in Risk Management

Traditional risk management tools often rely on manual inputs, static databases and lengthy processes. In contrast, in-house legal software offers automation, real-time data analysis and adaptive algorithms. 

By leveraging AI and machine learning capabilities, in-house legal software can identify risks, categorize them based on severity and relevance and propose tailored mitigation strategies. 

What does this mean for legal teams? Quicker response times, better overview of potential threats and more time to focus on strategic decision-making rather than sifting through data.

Imagine a global corporation with complex contractual obligations spanning numerous jurisdictions. Staying compliant and tracking liabilities would be time-consuming and prone to error. However, with advanced in-house legal software, not only could they maintain real-time oversight, but they could also receive alerts for potential compliance breaches and devise a proper response.

Or imagine a startup navigating the intricate web of intellectual property rights. It could use legal software to automatically scan for potential infringements of intellectual property, ensuring the startup remains on solid legal ground as it grows. 
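As a toy illustration of the "identify, categorize by severity, propose action" pattern described above -- a real product would rely on trained language models rather than the hypothetical phrase rules sketched here:

```python
# Hypothetical severity rules; a production tool would use trained models
# and a far richer risk taxonomy.
RISK_RULES = {
    "unlimited liability": "high",
    "automatic renewal": "medium",
    "exclusive license": "high",
}

def categorize_clauses(clauses):
    """Flag contract clauses and assign a severity from simple phrase rules."""
    findings = []
    for clause in clauses:
        lowered = clause.lower()
        for phrase, severity in RISK_RULES.items():
            if phrase in lowered:
                findings.append({"clause": clause, "severity": severity})
    return findings

sample = [
    "The vendor accepts unlimited liability for data loss.",
    "This agreement is subject to automatic renewal each year.",
]
for finding in categorize_clauses(sample):
    print(finding["severity"].upper(), "->", finding["clause"])
```

Even this crude rule-based pass shows the workflow shift: the software surfaces and ranks candidate risks, and the lawyer's time goes to judging them rather than finding them.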

Leveraging In-house Legal Software for Insurance

The integration of in-house legal software in the complex ecosystem of insurance serves as a potent catalyst for savings and precision in many different tasks:

  • It transforms insurance policy reviews: Historically, insurance policy reviews have been exhaustive, requiring meticulous examination of documents and terms. Today, companies can use legal software to automatically scan policies and flag discrepancies, outdated terms or potential compliance issues. This leads to quicker reviews and assurance of policy accuracy.
  • It enhances compliance policies: By constantly updating their database with the latest regulations, modern legal solutions ensure that insurers remain compliant at all times. They can alert teams of impending regulatory changes, enabling them to adapt in advance.
  • It revolutionizes contract management: Legal software aids in organizing, tracking and analyzing different contracts. Automation can identify unfavorable terms, potential areas of dispute or clauses that are misaligned with a company's standard practices.
  • It elevates insurance transactions: Compared with traditional methods, in-house legal software enhances the efficiency and reliability of insurance transactions. From automated underwriting processes to real-time claims assessments, these tools cut down the duration of transaction cycles and minimize human error.

Legal automation saves time and money and leads to more accuracy in all of your processes.

Overcoming Implementation Challenges

Adopting new tech solutions inevitably comes with a specific set of challenges.

A common obstacle is our innate resistance to change. There are many approaches to change management. What we have seen work best are workshops and case studies that illustrate the tangible benefits of the new system — with an emphasis on how these tools can complement, rather than replace, existing expertise. 

Other common obstacles are integration problems and data security concerns. Both can be addressed by working closely with your IT department to vet and choose a reliable legal software vendor that holds the right security certifications and offers a modular solution that can be tailored to fit within existing frameworks.

Getting past all of those obstacles sets you on the path to success. But you need to be able to measure that success to continue on the same path. 

This is why you need to establish clear metrics from the outset, such as response times to risks, reduction in compliance breaches or efficiency gains in legal processes. Remember to take your baseline numbers before you start implementing new workflows and digital solutions so you can calculate potential improvements. 
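Computing those before-and-after gains is simple arithmetic; a small helper makes the convention explicit for metrics where lower is better (response times) versus higher is better (throughput). The example figures are invented.

```python
def improvement(baseline, current, lower_is_better=True):
    """Percentage improvement of a metric against its pre-rollout baseline."""
    change = (baseline - current) if lower_is_better else (current - baseline)
    return round(change / baseline * 100, 1)

# e.g. average response time to a flagged risk dropped from 48 to 36 hours,
# while contracts reviewed per week rose from 40 to 52:
print(improvement(48, 36))                         # -> 25.0
print(improvement(40, 52, lower_is_better=False))  # -> 30.0
```

A negative result signals regression, which is exactly the early-warning sign these metrics exist to catch.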

See also: 3 Practical Uses for AI in Risk Management

Parting thoughts

The symbiosis of risk management and innovative in-house legal software opens up a new dimension: the ability to "future-proof" our businesses. However, this only comes as a consequence of the shift in our perspective — from seeing risk as a threat to be avoided, to viewing it as a challenge that can be turned into an advantage.

After all, in an age of unpredictability, the best defense is a strategic offense.


Evan Wong

Evan Wong is the CEO and co-founder of Checkbox, a multi-award-winning no-code workflow automation platform.

Wong, who is on the Forbes "30 Under 30" list, has worked with many legal teams globally on their digital transformation projects by leveraging the power of no-code automation. He has helped redefine how lawyers conduct intake and triage, generate documents, provide advice and facilitate workflows.

Always Be Data Segmenting

Firms must always be segmenting and improving experiences for personalized risk transfer, safety, prevention and loss management.

Three people sitting at a table and one person standing, and they're all talking; there are papers and a laptop on the table and behind the people is a black wall with post-it notes covering it

When it comes to integrity, money and shame seem to be the motivators that matter for people and businesses. With enough greed or fear, lapses in behavior and judgment may arise. 

When it comes to data integrity, accurate and identifiable context matters most. It helps remove the risk of relying on fluctuations in behaviors and motivations. Verified is a virtue.

You might believe everything you read on the internet, HA! You might believe every promise made, HA! You might believe there are no bad uses for good data, HA! But the world of risk transfer is built on staid provenance and reliability, with a "rule of law" expectation. The only funny thing about misuse of data is that it’s not illegal until there is a law that's been broken, so trust is all we have.

In a business built on promises, only those “made and kept” create trust and value.

Saving time and money with trusted services and providers is a mutual goal for policyholders, agents and insurers. Each needs data to be exchanged and used for specific problems and use cases to achieve those goals. So eliminating any questions of integrity oils the path for friction-free trust – and governed, actionable, accurate, contextual data is that oil.

For consumers, they have a bottom line at home. They seek value for their money and appreciate good service experience with a minimum of friction and time-wasting. Above all, customers with integrity have little to hide and are ready to engage with valuable, empathetic and interesting propositions that bring peace of mind and make it easier to interact day-to-day as well as in crisis-and-recovery mode. Convenience, continuous access and customer-preference awareness are the new expectations that consumers have with the connected world around them.

See also: Don’t Neglect the Politics of Analytics

For businesses, they have a bottom line at work. They know not all customers are equal and not all cars, homes, phones, drivers, routes, territories, businesses, buildings, etc. are the same. Yet often frustratingly, businesses cannot capture the value of that knowledge. Frequently, better data becomes available, but it may be device-specific, so the cost of collecting new data may be prohibitive, or it may be talent-enabled, so a capability gap needs to be overcome. Turning data into products for scalable customer interaction and business system integration frequently takes "oil pipelines" (a.k.a. application programming interfaces, or APIs) and new ways of working. Prioritizing these and shepherding them is a continuous improvement journey.

The best-in-class businesses are run by people who are customers, too, and think in a customer-centric fashion. They always look for better data for making better decisions as trusted advisers and guard against unscrupulous exploitation, unintended consequences, disparate impact and lack of fairness even across their supply chain of vendors and networks of suppliers. They consistently audit data quality and the cost of data (paid or collected) and innovate on things like eliminating steps and tasks for things they substantively can know (e.g. pre-fill everything) while always looking for new levels of analysis and new data features to improve customer and situational understanding (e.g. recursive segmentation). This search occurs for rating and risk factors alike in a progressive fashion of rate-to-risk fit.
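The "recursive segmentation" idea mentioned above can be pictured as repeated partitioning of a book of business by successive rating factors, segmenting within segments. The policy records and factor names below are invented for illustration.

```python
def segment(policies, factors):
    """Recursively partition a book of business by each rating factor in turn."""
    if not factors:
        return policies  # leaf: the policies in this micro-segment
    head, *rest = factors
    groups = {}
    for policy in policies:
        groups.setdefault(policy[head], []).append(policy)
    return {value: segment(group, rest) for value, group in groups.items()}

book = [  # hypothetical policy records
    {"territory": "urban", "vehicle": "sedan", "premium": 900},
    {"territory": "urban", "vehicle": "truck", "premium": 1200},
    {"territory": "rural", "vehicle": "sedan", "premium": 700},
]
tree = segment(book, ["territory", "vehicle"])
print(sorted(tree["urban"].keys()))  # -> ['sedan', 'truck']
```

Each added factor splits every existing segment again, which is why rate-to-risk fit improves progressively: the finer the leaves, the less the business has to assume all customers are equal.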

Insurance is a “cost plus” business, and it is mostly a compulsory product, which is a good reason for all the regulatory structure. When costs rise, consumers don’t expect service to suffer. They expect that their data should maximize their value from a company, and they want the same level of service as they get when costs flatten or go down. And they never want to feel that their trust has been misplaced. They will forget the price changes eventually, but they will remember how you made them feel about those changes. Double that for data skullduggery – blatant or inadvertent or accidental.

The interplay of customers and businesses creates a market dynamic often laden with mistrust – where what you don’t know you don’t know is a blind spot you must avoid. The other boxes in that Johari Window of shared/unshared knowledge explain the dynamic more implicitly.

Customers with motivations (fear and greed) to avoid transparency are different than those who simply don’t trust what will become of their data. The latter get information about businesses from friends, family, advertising, the internet, agents and the companies they frequent. When their "radar" creates anxiety that their money or data is being exploited, their concerns and outrage cause them to churn. So, too, when their data is NOT being used to improve their experience: Concerns and outrage cause them to churn. 

It seems that in exchange for their data, they expect it to be used as intended, for their greater good or not at all. Increasingly, this last option is dissipating as companies and products wrap “always and forever” expansive tendrils on any and all current or future "blue sky" ownership rights to everything that might be data and any inferences that those might create. This lingering infringing specter is a work in progress as is the pushback, oversight, audit, regulation and practice of law.

As for businesses, the examples of firms monetizing data in all sorts of freemium, premium and pay-per-use avenues continue to astound investors while stoking the venture forge fires. The short list of BFOs (blinding flashes of the obvious) are amazing in both the commonplace uses as well as the novel and exotic. But innovations aren’t real if no one buys them.

In a hard market, cost takeouts, "good customer" retention and focus on risk appetite get more play on the profitability jukebox than the soft market tune of "just get growth." When companies fail to understand the customer at new business, or the changes in the customer over time for renewal, they dig their own grave by assuming all customers are equal – they are not.

What happens to brand and reputation if you make promises and assurances and then abandon them? Will customers learn they cannot rely on you and your data practices? Or will you commit to a way of working to always be segmenting and improving experiences for personalized risk transfer, safety, prevention and loss management?

The cost of doing nothing with data is your future. Handling your data poorly accelerates your ex-date.

A Breakthrough for Smart Homes

An industry standard has taken hold that will let smart-home devices talk to each other seamlessly, setting the stage for a wave of innovation.

Smart House on Phone

Technology standards are never sexy, but they can be awfully important. You've likely never heard of 802.11b, but it unleashed a revolution in how we and all our devices communicate when the Institute of Electrical and Electronics Engineers released it in the late 1990s. You know it as Wi-Fi.

An industry group has now rallied the major players involved in smart homes and produced a software standard that lets all their devices talk to each other as effortlessly as your computer links to the cloud. You will now be able to see and use your devices woven into a network rather than as a series of individual devices.

The real innovation begins now.  

The rollout of the standard, called Matter, has been in the works for a while. Here is me writing about it when it was announced a year and a half ago: "Smart Homes Are Finally Getting Smarter." What's new is that devices supporting the standard have been steadily introduced for almost a year and that there has been a flurry in recent weeks, suggesting that Matter has achieved liftoff. According to the Connectivity Standards Alliance, the group that developed Matter, more than 1,800 applications and devices have been tested and approved to support the standard.

A headline in the Wall Street Journal last week declared, "It's Finally Time to Add Some Smart Tech to Your Dumb Home." The article begins: "Up to now, the defining feature of most smart-home technology has been that no normal person should buy it.... But finally, we are at what promises to be a breakthrough moment."

Whether that breakthrough happens depends on the creativity of those who develop and deploy the technology -- and the insurance industry has been playing an important role because of the potential for smart homes to prevent damage from water leaks and fires and to head off burglars. 

On the carrier side, State Farm has been the most aggressive. It announced a year ago that it had purchased 15% of ADT, the home security company, for $1.2 billion and was going to put $300 million into an "opportunity fund" for further innovation.

State Farm also announced early this year that it was offering homeowner clients a free sensor called Ting, from Whisker Labs. The device plugs into a wall socket and monitors the electricity flows in the whole house or apartment, spotting anomalies that indicate danger of a fire. In the announcement with State Farm, Whisker Labs said it was mitigating or preventing 250 home fires a month -- and the rollout has just begun.

On the insurtech side, in addition to Whisker Labs, Roost offers a robust set of sensors that detect water leaks, fires and home intruders. Pepper/Notion is another leader, providing inexpensive devices about the size of hockey pucks that can be distributed through a home, apartment or building in spots where leaks are most likely to occur.

(If you're interested in more detail on what the industry is doing, check out a recent article we published, "How Smart Homes Are Changing Insurance" or the less recent but still very informative "Connected Insurance Comes of Age.")

What changes now for insurers and their customers is that everything becomes more flexible and easier with devices that are built to the new standard or that can be retrofitted with the Matter software. You could have window sensors from ADT to detect break-ins, a Ring front-door camera from Amazon, a fire sensor from Whisker Labs and a water sensor from Pepper/Notion, all integrated into a single app that monitors your home from your phone. You would no longer need the sort of monitoring panel that Roost, ADT and other home-security firms install in homes. And you won't have to worry about disabling your app if you switch from an Apple phone to an Android phone, because both will support the Matter standard. 

From that same app on your phone, you'll also be able to control other aspects of your home. You'll be able to adjust your Nest thermostat. You'll be able to remotely control lights, in case you want to make it appear that you're home even though you and your family are on vacation. You'll be able to unlock the front door if you want a neighbor to check something for you while you're gone. 
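Conceptually, a shared standard means the app needs only one command interface, whatever the vendor. The toy model below illustrates that idea only; it does not use the real Matter SDK, whose APIs differ, and the device names are invented.

```python
class HomeApp:
    """Toy model of one app controlling devices from many vendors.

    With a common standard, pairing any compliant device reduces to
    registering it under one uniform command interface.
    """
    def __init__(self):
        self._devices = {}

    def pair(self, name, handler):
        """Register a device; 'handler' stands in for its standard interface."""
        self._devices[name] = handler

    def command(self, name, action):
        """Dispatch an action to a paired device, vendor unknown to the caller."""
        return self._devices[name](action)

app = HomeApp()
app.pair("thermostat", lambda a: f"thermostat ack: {a}")  # e.g. a Nest
app.pair("front_door", lambda a: f"front_door ack: {a}")  # e.g. a smart lock
print(app.command("thermostat", "set 21C"))  # -> thermostat ack: set 21C
```

The point of the sketch is the absence of vendor-specific branches: the app never needs to know whose hardware it is talking to, which is what makes mixing ADT, Ring, Whisker Labs and Pepper/Notion devices in one view practical.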

The group that developed Matter designed security into the standard from the get-go, so devices connected using it should be better protected than devices are now. That's the theory, anyway. Hackers are clever and persistent, so we'll have to see what happens.

To date, adoption of smart-home devices has been disappointing. Yes, Ring front-door cameras and Nest thermostats caught the public imagination, but that's about it for hits. Amazon thought its Echo device would make it easy for people to buy more stuff from Amazon, but it turns out that just about any purchase is more complex than, "Alexa, buy me some XYZ." Google's Home device hasn't caught on as a way to manage a home. And when Facebook tried to get users to install its cameras to enable constant video chat -- well, nobody wanted to let Mark Zuckerberg roam around in their homes.

So there definitely needs to be some innovation if the smart-home trend is going to take off.

The good news is that, once a standard is in place, it isn't just the devices that attach to it that improve rapidly. The technology underlying the standard does, too, making it ever more robust.

802.11b was a breakthrough when it offered an inexpensive way to allow wireless communications at 11mbps (megabits per second) in the late 1990s. Some 25 years and dozens of iterations later, 802.11ax allows for speeds almost 1,000 times that fast -- and speed is just one of many vectors along which the technology has improved by orders of magnitude. 

Smart homes should only get smarter from here.

Cheers,

Paul 

A Crisis in 911 Emergency Call Centers

Every day, thousands of callers are put on hold, and in some cities calls go unanswered entirely, because of staff shortages at call centers.


KEY TAKEAWAY:

--Call centers for 911 emergency services are woefully underfunded and understaffed. So are emergency medical technician (EMT) services, which are not considered essential in 39 states and thus must be paid for through community fundraisers. The lack of funding needs to be fixed.

----------

The National Emergency Number Association (NENA) recently released a study reporting that over 80% of 911 call centers across the country are understaffed and underfunded. There are simply not enough operators and dispatchers in the 911 system.

For example, Montgomery County in Maryland, which fields 2,200 calls to 911 every day, has seen staffing levels decline 47% in the past year.

The reasons include long shifts, mandatory overtime, low pay and the abusive behavior dispatchers face.

At the same time, there is a shortage of emergency medical technicians (EMTs), especially in rural America. (See "Fixing the EMT Crisis in Rural America.") The combination puts millions of Americans at risk during a medical emergency, especially in a potential life-or-death situation. The dual crises severely complicate the long-standing overcrowding and inefficient use of emergency medical services that have been wreaking havoc in emergency departments since long before the COVID-19 pandemic further overwhelmed the system.

According to the Centers for Disease Control and Prevention’s (CDC) most recent statistics, there are 131 million emergency room visits annually in the U.S., with 18.6 million, or roughly 14% of those visits, resulting in hospital admissions. Over 28 million people are taken by ambulance each year to the local ER, with two-thirds transported by the local fire department or municipal government. Roughly 3 million people per year who are taken by ambulance are in a critical, or even life-and-death, situation. Tragically, the majority are due to auto accidents involving aggressive and impaired drivers.

See also: What Is 988? Future of Crisis Services

Our healthcare system is the best in the world, but it is simultaneously expensive, wasteful and inefficient, failing to address basic underlying barriers to access to primary healthcare. When the next trillion-dollar budget negotiations begin in Washington, instead of wasting millions on pet projects, the federal government should help state and local governments fund 911 emergency call center operations around the country, with well-paid career opportunities.

And how about funding and training for EMT squads around the country? How about funding for new medical research and technologies, including AI, to diagnose and treat people in emergency medical situations?

I spent my career in employee benefits consulting and have found that virtually everybody understands the need for health insurance, if not for themselves, at least for their families. However, nobody thinks they are going to get in an accident, have a medical emergency or get hurt on the job, even though the CDC reports that one in five Americans will have an emergency room visit each year.

The big picture includes helping address overcrowded ERs and the waste of critical time, money and resources. There are now non-emergency call center operations in some local areas (you dial 311) to free up 911 emergency call center time; these need to be promoted. We need to fund community health centers so people can have access to primary healthcare without going to the local emergency room. Even people with health insurance are having lengthy delays in getting primary healthcare. Recent data show the wait time for new patients is 26 days on average. OB-GYN and orthopedic wait times for new patient visits are even longer.

I am delighted to see the growth of local urgent care facilities, which played a huge role in COVID-19 vaccinations, and the growing use of telemedicine – especially given the long waits for primary care visits. But for heaven’s sake, we need to properly fund the 911 call center operations and first responders and EMT squads around the country.

Now.


Daniel Miller


Dan Miller is president of Daniel R. Miller, MPH Consulting. He specializes in healthcare-cost containment, absence-management best practices (STD, LTD, FMLA and workers' comp), integrated disability management and workers’ compensation managed care.

Interview with Taruja Deshmukh

Paul Carroll, Editor-in-Chief of ITL, and Taruja Deshmukh, InsurTech Solutions Manager at Conner Strong & Buckelew, discuss strategies for advancing innovation and operational efficiency.

An Interview with Taruja Deshmukh

Paul Carroll

To start out, can you tell us a bit about your role at Conner Strong & Buckelew and how it has evolved?

Taruja Deshmukh

I joined the firm about 12 years ago working on the P&C account management side. I started as an account analyst and eventually moved into an account executive role, managing a book of business. About four or five years ago, I raised my hand to get more involved in projects focusing on data and new technology platforms. It was clear to me that there was going to be a lot of opportunity in that space, especially at a brokerage.

From there, I helped to form an internal insurtech practice group. I now lead our “labs. by Conner Strong” initiative, which is focused on improving operational efficiencies, enhancing the customer experience and creating an innovative and inclusive culture. I support the firm's relationships with external partners, both in the insurtech community as well as our initiatives with BrokerTech Ventures.

Paul Carroll

Twelve years is basically a lifetime in the insurtech world. How have you seen things evolve in those 12 years in terms of the technology you're able to use and what you're able to do with it?

Taruja Deshmukh

It's definitely come a long way. Usage has accelerated rapidly.

One of the reasons we decided to help launch BrokerTech Ventures was that, when we saw new technologies rolled out, those solutions were typically geared toward insurance carriers, consumers, clients or risk managers. They weren’t designed for brokers. Over the last couple years, we've been able to participate more to actually create and help scale solutions for brokers.

Paul Carroll

Can you tell us a bit about the work the lab is doing and how you identify and support those insurtechs that are relevant to brokers?

Taruja Deshmukh

With BrokerTech Ventures, which is the first broker-centric, broker-led insurtech hub of scale, we have created an ecosystem across the country—really the globe, considering the international outreach with our Israel and Latin American partners. Within that ecosystem, we attract insurtech start-ups by providing access to forward-thinking brokers, the ability to run POCs [proofs of concept] and pilots with different partners, and hands-on mentorship. The support we provide is both financial and strategic, and the wide distribution that we represent really appeals to insurtechs as they look to scale their companies.

Outside of the BTV accelerator program, we focus heavily on collaboration and employee engagement. We're looking at what the pain points are within different departments and ways to solve them. We are continually evaluating both start-ups and established insurtechs and providing the ability for employees to test out solutions to see what’s going to actually help our operations and our clients, and we engage with our external partners in sharing experiences and feedback on what has and has not worked.

Paul Carroll

What are people finding useful? What's the secret sauce?

Taruja Deshmukh

A big focus for us has been on data. The ability to extract and aggregate data in a scalable way and reduce our reliance on manual and time-consuming processes has had a big impact on our operations. We're better able to leverage unstructured data, including PDFs and handwritten documents, to enhance our analytical capabilities. We've also used RPA [robotic process automation] technology quite a bit over the last couple of years to incorporate more efficiencies into day-to-day processes. When it comes to our clients, we've also been able to elevate their experience by doing things like implementing a digital application platform and connecting them with tech-forward risk management tools like wearable devices and sensors.

Paul Carroll

There's so much unstructured information out there. Eventually, AI will get to the point where you can get a recommendation that is much better for the underwriter or you can handle the claim or whatever. In the meantime, you can certainly amass all the information more efficiently.

Are you doing anything with ChatGPT?

Taruja Deshmukh

We're in the early experimental stages right now. There are some folks who are embracing the technology and others who are more skeptical, especially considering data security issues. But, overall, we understand the potential it has and are testing use cases using non-proprietary and non-confidential information to see how ChatGPT can help teams perform certain tasks quicker with a head start from AI. We're also looking at long-term ways that we can harness the power of generative AI to have an even bigger impact on our operations, including with training and development and accelerated data analysis.

Paul Carroll

With the lab, you find yourself collaborating with other brokerages on technology. That must be a little weird.

Taruja Deshmukh

It actually works very well. One of the main objectives in creating BrokerTech Ventures was that we can have a greater impact on the industry working together rather than any one firm working on its own. The collaboration with like-minded brokers has been invaluable. And the access to insurtechs through the accelerator program has definitely helped us with our innovation objectives. The partners in BTV have been very open. Many of us are trying to solve the same pain points, and it's allowed us to leverage the collective innovative thinking of brokers and carriers from across the country.

Paul Carroll

You're one of the people who deliberately went into insurance. I talk to so many people who say they just fell into it as a career, but you actually studied it at Temple. Could you ever have imagined yourself evolving into this sort of role? Tell me a little bit about that journey.

Taruja Deshmukh

No, I would not have pictured myself where I am today. When I decided to major in risk management and insurance, my main objective was just to get a job when I graduated. I started college in 2007, and then in 2008 the economic crisis happened. Seeing all of those people struggling to find jobs scared me a little. I was in the business school trying to figure out what to major in when I happened upon risk management. I liked the material that we were studying, and Temple University has a really great risk program that had an unheard-of, like 90-something percent, job placement rate upon graduation.

Paul Carroll

People talk a lot about the war for talent, and it seems to me that when you start talking about doing the kinds of things you're doing, that might be a way to get people more revved up than they would have been about the industry.

Taruja Deshmukh

I 100% believe that. I think embracing technology and innovation is critical when it comes to attracting and retaining the next generation of talent. Recruiting for the insurance industry is already a challenge. Not many individuals even know about insurance and all of the opportunities. And once we bring people in, we can't undermine our efforts by not giving them the tools and technology they need to excel in their roles.

Paul Carroll

If we have a conversation five years from today, what accomplishments will you tell me about based on the kind of work you're doing now?

Taruja Deshmukh

A longer-term goal is to change the mindset of the way we're doing business to be a much more data-driven process. That's a challenge. But data is key.

Today, we rely quite a bit on disparate, legacy systems. Ideally, I would like to see a more advanced foundation, where our data is connected, visible and easily accessible, which creates a better framework for standardization, increased automation and the reduction of time-consuming manual work.

If we can do that, then people can instead spend more time focusing on the important things: talking to clients, digging into exposures, building solutions and improving program design.

Paul Carroll

Thanks, Taruja.


About Taruja Deshmukh


Taruja Deshmukh is the InsurTech Solutions Manager for Conner Strong & Buckelew. In this role, Deshmukh leads labs. BY CONNER STRONG, the firm’s centralized innovation hub, focused on improving operational efficiencies, enhancing the customer experience, creating an innovative and inclusive culture, and driving innovation forward in the industry. Deshmukh manages various projects designed to explore, evaluate, and implement insurtech solutions to achieve the firm’s innovation goals through collaboration with leadership across all departments and active engagement with employees. Deshmukh also supports relationships with external partners in the insurtech community, as well as the company’s initiatives within BrokerTech Ventures, the first broker-centric, broker-led insurtech platform.

Deshmukh joined Conner Strong & Buckelew in June 2011, after graduating from Temple University with her undergraduate degree in Risk Management & Insurance and International Business. While at Temple University, Deshmukh was actively involved in the Risk Management & Insurance program and completed internships with Travelers Insurance and Marsh USA. Most recently, Deshmukh earned her AIDA (Associate in Insurance Data Analytics) designation, which focuses on the application of big data in the insurance industry and on embedding innovation in company culture.


Insurance Thought Leadership


Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Nobody Is as Smart as Everybody

Agent and Brokers Commentary: September 2023


Some years ago, a friend and former colleague wrote a book about pioneering companies that mentioned a bakery franchise that had the motto, "Nobody is as smart as everybody." The company's bakeries around the country put that motto into practice by not only sharing recipes but also by sharing insights on, say, how rainy weather affected walk-in traffic. 

I've stolen... er, borrowed... er, cited... that idea frequently. In fact, it's the organizing principle at Insurance Thought Leadership, where we provide a platform for the best thinkers with the best ideas on innovation in risk management and insurance. I know a lot of smart people, but -- repeat after me -- nobody is as smart as everybody.

That motto is the background for my interview this month with Taruja Deshmukh, insurtech solutions manager at Conner Strong & Buckelew, a superregional broker based in Camden, NJ, with offices up and down the East Coast, about $175 million in annual revenue and aggressive goals for growth.

As she explains, Conner Strong has become a leader in BrokerTech Ventures, which has pulled together roughly a dozen brokers and a dozen carriers to take advantage of a variety of technology innovations and help brokers address a variety of pain points, especially those that require so much manual, time-consuming work. 

I think you'll find her approach illuminating.

Nobody is as smart as everybody.

Cheers,
Paul


4 PRINCIPLES OF SUSTAINABLE SELLING

Sustainable business is higher-quality business. It’s achieved by getting clients to not just buy but to also “buy-in.”

THE MGA MARKET BOOM

While MGAs continue to expand and add foundational channels, there is an interesting shift in their approach to insurtech.

EXPLORING THE DUAL ADVANTAGES OF SURETY BONDS

Some insurance professionals mistakenly think providing surety services is neither a competitive advantage nor necessary for their business.

TOP 10 CHALLENGES FOR INSURERS

From emerging technologies to changing consumer expectations, insurers are facing a complex landscape that demands their attention.


Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.