
5 Operational Shifts for Scaling Insurance AI

Insurance AI is shifting from the wow factor of innovation to the how factor of sustaining automation at scale.

Human Responsibility for AI

AI is moving well beyond experimentation and into everyday insurance operations. As this happens, the wow factor of introducing new forms of automation to insurance use cases is giving way to the how factor of sustaining these innovations at scale. Once AI influences underwriting decisions and claims outcomes in a heavily regulated environment, success depends far less on the sophistication of models and far more on the operational systems that support them.

Earlier phases of AI adoption proved that insurers can deploy advanced models. The priority now is to embed those models into the deeply regulated, process-driven realities of underwriting, claims, and distribution without creating new friction or risk. All this must happen while accounting for what is often an outdated back-office tech stack, and with a level of integration that doesn't set up the next looming problem: agent sprawl. Here are five operational trends that are emerging as the differentiators between AI programs that compound value over time and those that stall under complexity:

Treat document intelligence as foundational infrastructure, not a point solution

Document intelligence is a prime focus for AI modernization, yet many organizations still approach it as a tactical automation limited to intake. At scale, this narrow view leaves significant value unrealized. Documents and work items remain central to underwriting, claims adjudication, and compliance. Manual handling introduces delay, inconsistency, and risk at every handoff. As AI adoption matures, document intelligence and rigorous contextualization functions should exist as shared operational infrastructure embedded directly into workflows, rather than bolted on at the edges. This shift reduces cycle times, improves data quality, and strengthens auditability, and it further informs future agentic capabilities stemming from those same work items. That's why insurers that move fastest stop treating document intelligence as an isolated capability and start treating it as a prerequisite for operational scale.
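To make the "shared infrastructure" idea concrete, here is a minimal sketch in which a single document-intelligence service feeds both underwriting and claims intake, so data quality and auditability are consistent across workflows. Everything in it (the `ExtractedDoc` shape, the `extract` stub, the audit IDs) is hypothetical illustration, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class ExtractedDoc:
    doc_type: str
    fields: dict[str, str]
    confidence: float   # can drive downstream human-review routing
    audit_id: str       # every extraction stays traceable for compliance

def extract(raw_text: str, audit_id: str) -> ExtractedDoc:
    """Stand-in for a shared document-intelligence service.
    A real implementation would call an ML model; this stub just
    keys off a marker string to pick a document type."""
    doc_type = "acord_form" if "ACORD" in raw_text else "loss_notice"
    return ExtractedDoc(doc_type, {"summary": raw_text[:40]}, 0.93, audit_id)

# Both workflows consume the SAME service: shared infrastructure,
# consistent data quality, one audit trail.
def underwriting_intake(submission: str) -> ExtractedDoc:
    return extract(submission, audit_id="uw-0001")

def claims_intake(fnol: str) -> ExtractedDoc:
    return extract(fnol, audit_id="clm-0001")
```

The design point is that underwriting and claims call one extraction service rather than each bolting on its own, which is what makes the capability infrastructure instead of a point solution.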

Make AI governance an enterprise operating model

As AI becomes embedded in decision-making, the ability to maintain explainability, accountability, and auditability of AI systems must be designed into processes from the outset, not retrofitted after systems are already in production. At scale, this allows insurers to deploy AI confidently across regions, lines of business, and regulatory regimes without fragmenting their operating model. This enterprise-wide discipline of clear ownership, transparent decision logic, and consistent oversight of machine processes helps position AI governance as a C-suite priority that strengthens risk posture, customer trust, and long-term resilience.

Keep humans in the loop strategically

When human involvement is applied too broadly, productivity gains erode and trust in automation declines. Human-in-the-loop AI is most effective when experienced underwriters or claims professionals are only pulled into cases where their judgment, oversight, and exception handling add the most value in assessing complex risks, edge cases, and decisions with material financial or regulatory impact. Emerging governance models increasingly reinforce this principle. For instance, Singapore's IMDA Model AI Governance Framework on agentic systems describes a spectrum of oversight that includes human-in-the-loop, on-the-loop, and over-the-loop to help selectively scale automation while preserving accountability and control.
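The selective-oversight principle above amounts to a routing rule: straight-through processing for routine cases, escalation to a human when model confidence is low or the stakes are material. A minimal sketch, with purely illustrative threshold values:

```python
def route_case(model_confidence: float, claim_amount: float,
               regulatory_flag: bool,
               conf_floor: float = 0.90,
               materiality_limit: float = 100_000) -> str:
    """Return 'auto' for straight-through processing, or 'human'
    where judgment adds the most value: material financial or
    regulatory impact, or edge cases the model is unsure about."""
    if regulatory_flag or claim_amount >= materiality_limit:
        return "human"   # material financial/regulatory impact
    if model_confidence < conf_floor:
        return "human"   # edge case: low model confidence
    return "auto"

print(route_case(0.97, 12_000, False))   # routine -> auto
print(route_case(0.97, 250_000, False))  # material amount -> human
print(route_case(0.72, 5_000, False))    # low confidence -> human
```

In practice the thresholds would come from the insurer's own risk appetite and regulatory obligations; the point is that escalation is a deliberate policy, not a default.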

Connect underwriting and claims workflows end-to-end

Siloed workflows are increasingly untenable as customer expectations rise and loss events grow more complex and costly. End-to-end visibility from first notice of loss through settlement, or from submission through bind, enables AI to coordinate decisions across the full lifecycle, rather than optimizing individual steps in isolation. This coordination reduces cycle times, improves broker/agent/customer experience, and strengthens risk selection and pricing accuracy. It also provides the transparency needed to support governance, oversight, and continuous improvement. AI delivers its greatest operational value when it serves as a connective layer across workflows, aligning data, decisions, and actions inside of a process.

Modernize legacy integrations iteratively

Best-in-class agents and tools cannot operate in a silo and must take into consideration the complex legacy systems that remain a reality for most insurers. Because large-scale replacements often span multiple years, waiting for perfect conditions before deploying AI is rarely viable; yet fragmented pilots that never scale introduce their own risks. Insurers that maximize their AI investments at scale focus on incremental modernizations that deliver early operational value while progressively addressing data and system complexity. This approach avoids the trap of pilots that prove concepts yet fail to translate into production impact with quantifiable benefit. By modernizing iteratively, insurers can improve workflows, connect disparate systems, and strengthen data foundations without discarding prior investments.

Conclusion

As AI becomes embedded in core insurance operations, the conversation is shifting from capability to durability. Most insurers now understand what AI can do. The more consequential question is whether it can be integrated into underwriting, claims, and compliance in ways that improve performance without eroding trust, operational integrity, or compliance. As such, sustaining AI at scale is a matter of organization-wide discipline. It requires aligning automation with real insurance cycles, protecting scarce expert judgment, and ensuring transparency as non-deterministic agentic-driven decisions expand. Insurers that approach AI through this lens position themselves not just to automate faster, but to operate smarter, more resiliently, and with greater confidence in the outcomes their systems produce.


Jake Sloan


Jake Sloan is vice president, global insurance, at Appian.

He has held senior operations roles with Farmers Insurance, including front-line insurance/licensed field operations, and served as CIO of Aon National Flood Services. 

Sloan volunteers as a mentor to the Global Insurance Accelerator, holds an MBA from Baker University and is a graduate of the Advanced Management Program (AMP) of Harvard Business School.

2026 Commercial Market Outlook

Prepare for Renewals and Manage Costs in a Changing Market


After years of disruption, the commercial insurance market is showing signs of moderation—but risks remain. Catastrophe losses, social inflation, and regulatory scrutiny continue to challenge organizations.

Zywave’s 2026 Outlook breaks down what insurance professionals and business leaders need to know to prepare for renewals, manage costs, and position programs for success.

Key Takeaways for 2026
  • Property Insurance: After years of a hard market, property insurance is stabilizing thanks to improved capacity and reinsurance strength. However, catastrophe losses, valuation scrutiny, and climate risks continue to challenge underwriting. Parametric solutions and resilience measures are gaining traction—organizations with accurate valuations and proactive risk controls will benefit most.
  • Casualty Insurance: Litigation trends and social inflation keep pressure on casualty lines, especially commercial auto and umbrella liability. Nuclear verdicts and expanded litigation funding drive severity, while technologies like telematics and AI safety tools are becoming key differentiators for favorable outcomes.
  • Professional & Executive Liability: Competition is improving, but emerging risks tied to AI adoption and regulatory scrutiny are reshaping underwriting. Cyber events increasingly overlap with management liability, making strong governance and compliance essential for broader coverage and stable pricing.
Access the Full Outlook Today

Get expert insights into market forces and strategies for success. Download the full 77-page report now.


Sponsored by ITL Partner: Zywave



Zywave delivers AI-powered growth engines for the insurance industry, enabling carriers, MGAs, agencies, and brokers to grow profitably, strengthen risk assessment, enhance client relationships, and streamline operations. Its intelligent, AI-driven platform acts as a performance multiplier for more than 160,000 insurance professionals worldwide, across all major segments. By combining automation, data insights, and best practices, Zywave helps organizations stay competitive and efficient in today’s fast-changing risk environment—empowering them to adapt quickly, scale effectively, and achieve sustainable growth.

For more information, visit zywave.com.

Additional Resources

Zywave recognized as a Leader in The Forrester Wave™: Insurance Agency Management Systems, Q4 2025 


A Problem With Renters Insurance

Half of property owners fail to verify active renters insurance, leaving multifamily portfolios exposed to entirely preventable losses.


In a recent survey of real estate investors and property owners, roughly half admitted that they don't verify whether their residents maintain renters insurance throughout the lease term. Those who do rely on a mix of manual checks, carrier notifications, or loosely integrated property management tools to track coverage. Either way, portfolios are left exposed in ways that remain invisible — until a loss turns hidden risk into a real cost.

When a resident without an active policy causes a fire from an unattended candle or a faulty space heater, for example, the exposure falls on the operator. There's no clear recovery path. Just a loss, a dispute and a difficult conversation with ownership about how a lapsed policy went undetected for six months while the portfolio assumed it was covered.

This outcome is entirely preventable. When enforcing renters insurance is treated as a formality rather than an operational safeguard, multifamily owners and operators are exposed to significant risk and potentially expensive repairs. Renters insurance should be managed as part of the portfolio's overall risk strategy with the same consistency and oversight applied to any other source of financial exposure.

When Scale Creates Blind Spots in Protection

Survey data found that landlords with smaller portfolios of one to four units were more likely to require and enforce renters insurance. In contrast, those with larger portfolios of 20 or more units were significantly less likely to do so.

As multifamily portfolios grow, managing renters insurance enforcement becomes complex, and manual audits quickly become a liability. That risk compounds in ways that aren't always apparent until there's a loss or dispute.

At scale, compliance begins to break down in three key ways.

  • Inconsistent compliance enforcement across properties. Liability requirements in a portfolio lose force when individual sites enforce compliance differently. If one site grants exceptions but another follows a much stricter protocol, this inconsistency creates operational confusion. Plus, property staff turnover can create knowledge gaps and process changes. This erosion of compliance discipline increases the likelihood that a preventable lapse will become a reportable event.
  • Documentation gaps. It's not enough to have a renters insurance requirement in the lease. Operators must be able to show how the requirement was explained to the resident and when the resident was notified. In multi-state portfolios, documentation is the difference between defensible policy and avoidable liability.
  • Technology stack drift. As portfolios grow, systems rarely remain uniform. Variations in property management system configurations, workflows and tracking methods across a portfolio make it more likely that policy lapses, missed renewals and incomplete documentation will go unnoticed. Fragmented data also limits oversight. If verification data lives in multiple places — or worse, in email inboxes and spreadsheets — operators can't see portfolio-wide status in real time. During an audit or post-loss review, inconsistent records make it difficult to demonstrate that coverage was consistently monitored.

Inconsistent enforcement, weak documentation and fragmented systems create exposure that is entirely preventable.

Strengthening Oversight at the Portfolio Level

Closing these gaps requires consistent management, but three practical shifts can make all the difference:

  1. Standardize enforcement protocols across every property. Portfolio-wide protection requires portfolio-wide consistency. That means the same requirements, exceptions process and documentation standards must be applied uniformly across every site.
  2. Automate verification and treat it as a continuing process, not just a move-in checkbox. Technology can help operators track the receipt, processing and review of certificates of insurance at the start of a lease. But confirming coverage at lease signing is only the starting point.

    Up to 40% of renters cancel their policies mid-lease, meaning a portfolio that only verifies at move-in is operating with a false sense of protection for a significant portion of its residents. At scale, that volume of documentation and continuing monitoring can only be managed reliably with technology. Automated tools can help operators continuously track coverage, flag lapses and prompt residents to reinstate when needed. Technology removes pressure on on-site staff to catch what falls through the cracks and creates a consistent, auditable record across the portfolio.

  3. Implement a tech-enabled solution to monitor resident coverage and auto-enroll residents with lapsed or canceled policies in a waiver program. Even in well-run portfolios, not all residents will obtain or maintain coverage. A property damage liability waiver program addresses this directly. When a resident's individual policy lapses or was never obtained, auto-enrollment in a waiver program ensures the resident isn't held personally liable for negligently causing certain damage to the unit, and that the property isn't on the line for the cost of the damage, either.

    The strongest programs also monitor certificates of insurance for residents who carry their own policy, processing renewals, flagging lapses and prompting reinstatement before gaps occur. Owners and operators should seek waiver programs that include 24/7 monitoring: a flood caused by an overflowing sink, even one day after a policy lapses, can result in the same costly damage as one that happens six months in. Continuous monitoring may be the difference between a covered loss and a big payout.

    Beyond protection, these programs can also generate revenue. Residents enrolled in a waiver program pay a fee, and operators can retain a portion of that fee after paying for the underlying insurance policy issued to the property and any third-party administrative costs. What starts as a compliance backstop can become a revenue line.
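The verification loop in steps 2 and 3 can be sketched in a few lines of Python. The `Resident` record and the auto-enrollment step are hypothetical simplifications, not a real property-management integration:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Resident:
    name: str
    policy_expires: Optional[date]  # None = proof of coverage never received
    waiver_enrolled: bool = False

def audit_coverage(residents: list, today: date) -> list:
    """Flag residents whose coverage is missing or lapsed and
    auto-enroll them in a (hypothetical) waiver program as a backstop."""
    flagged = []
    for r in residents:
        lapsed = r.policy_expires is None or r.policy_expires < today
        if lapsed and not r.waiver_enrolled:
            r.waiver_enrolled = True  # backstop until proof of reinstatement
            flagged.append(r)
    return flagged

roster = [
    Resident("Unit 101", policy_expires=date(2026, 3, 1)),
    Resident("Unit 102", policy_expires=date(2025, 11, 15)),  # lapsed mid-lease
    Resident("Unit 103", policy_expires=None),                # never verified
]
for r in audit_coverage(roster, today=date(2026, 1, 5)):
    print(f"{r.name}: coverage gap detected, waiver enrollment applied")
```

Run continuously (daily, not just at move-in), a loop like this is what turns verification from a lease-signing checkbox into an auditable, portfolio-wide record.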

Keeping Gaps from Becoming Losses

For growing multifamily portfolios, renters insurance compliance is easy to underestimate. Risk stays quiet until something goes wrong. But once a policy lapses and a loss occurs, the financial and operational impact is immediate. Operators end up making repairs, managing disputes and absorbing costs that could have been avoided.

Real protection requires clear requirements, documented processes and continuing verification to reduce preventable losses and make recovery more predictable. At scale, where small compliance gaps cost real dollars, managing renters insurance compliance intentionally keeps those gaps from becoming losses.


Kelli Stiles


Kelli Stiles is chief legal and insurance officer at Foxen.

Before joining Foxen, she spent eight years at Nationwide Insurance, most recently as AVP, associate general counsel for property & casualty development and distribution legal. Earlier in her career, she practiced for nearly 11 years at Jones Day, representing Fortune 500 companies in complex litigation and regulatory matters.

Enterprise Connectivity Is Becoming Critical

In 2026, fragmented systems and siloed workflows are no longer inefficiencies but competitive liabilities that constrain workforce adaptability.


In 2026, a clear pattern is emerging across industries: disconnected systems, fragmented teams, and siloed workflows are no longer tolerable inefficiencies. They are now competitive liabilities. The next phase of enterprise transformation will not be defined by which company adopts the most tools or deploys the most AI pilots. It will be defined by which organizations can connect people, platforms, and processes into a coherent operating model that actually works at scale.

For years, digital transformation efforts focused on modernization. Cloud migrations, workflow automation, and analytics platforms promised efficiency and speed. Many delivered incremental gains. Yet few addressed the structural problem underneath: enterprises built digital layers on top of operational silos. As a result, employees still bounce between systems, data remains fragmented, and decision-making slows when it should accelerate.

The Hidden Cost of Disconnected Work

Most organizations underestimate how much friction disconnected systems create for their employees. Knowledge workers routinely spend hours each week navigating multiple platforms, re-entering data, and chasing approvals across departments. The result is not just lost productivity. It is cognitive overload. Employees are forced to manage the complexity of systems instead of focusing on higher-value work.

This friction has broader implications for retention and engagement. When work feels unnecessarily complicated, burnout accelerates. High-performing employees expect modern environments that enable them to collaborate seamlessly and move quickly. Organizations that fail to deliver this experience will struggle to attract and keep talent, particularly as younger professionals enter leadership pipelines with higher expectations for digital fluency and workflow simplicity.

Connectivity, in this context, is about removing obstacles between people and outcomes. It is about creating environments where information flows naturally, tasks move forward without constant manual intervention, and teams can operate with clarity.

Why Connectivity Is Now a Leadership Issue

Traditionally, integration efforts sat within IT departments. Leaders approved budgets, but execution was often isolated from business strategy. That approach no longer works.

As enterprises adopt more AI-driven tools, automation platforms, and distributed work models, the complexity of the environment increases. Without intentional orchestration, organizations risk creating ecosystems that are powerful on paper but unusable in practice.

Leaders in 2026 must treat connectivity as a core management responsibility. This means asking different questions: Are workflows designed around how employees actually work? Do teams have a single source of truth for critical data? Can new hires onboard without weeks of system training? Are frontline employees empowered with the same digital capabilities as corporate teams?

These are not technical considerations alone. They are cultural and operational decisions that shape how work happens every day.

Workforce Adaptability Depends on System Design

Adaptability is often framed as a human skill set. We talk about reskilling, upskilling, and continuous learning. While these remain important, adaptability is also shaped by the environment people operate within.

When systems are connected, employees can respond faster to change. They can access real-time information, collaborate across departments, and adjust workflows without waiting for manual handoffs. When systems are fragmented, even the most capable workforce becomes constrained.

In 2026, the most resilient organizations will be those that design their digital infrastructure to support rapid adaptation. This includes enabling cross-functional collaboration, reducing dependency on specialized gatekeepers, and allowing teams to reconfigure processes as business needs evolve.

Adaptability is not about working harder. It is about removing structural barriers that prevent people from working smarter.

Moving From Tool Accumulation to Platform Thinking

One of the biggest mistakes enterprises continue to make is equating progress with tool adoption. New platforms are added to solve specific problems, but rarely integrated into a broader operational framework. Over time, this creates digital sprawl that increases complexity instead of reducing it.

Platform thinking requires a shift in mindset. Rather than asking which tool to buy next, leaders must ask how systems interact, where data flows, and how users experience the entire environment. This approach prioritizes interoperability, standardized workflows, and shared data models.

It also requires governance that balances flexibility with structure. Teams should have autonomy to innovate, but within a connected framework that prevents fragmentation. The goal is not uniformity. It is coherence.

The Human Side of Enterprise Connectivity

Technology alone will not solve connectivity challenges. Organizations must invest in change management, communication, and leadership alignment. Employees need clarity on why new systems are being introduced and how they improve daily work. Managers need training to lead in connected environments where visibility increases and workflows become more transparent.

Trust plays a critical role. When systems are connected, performance data becomes more accessible. Used thoughtfully, this creates accountability and improvement. Used poorly, it creates surveillance and resistance. Leaders must establish norms that prioritize support, not control.

Why Insurance Cannot Afford Fragmentation in 2026

Nowhere is the cost of disconnected systems more visible than in the insurance sector. Carriers and brokers operate across policy administration platforms, claims systems, underwriting tools, CRM environments, and regulatory reporting frameworks that rarely communicate cleanly with one another. The result is delayed claims resolution, inconsistent customer experiences, manual reconciliation work, and increased operational risk.

As insurers adopt AI for fraud detection, pricing optimization, and customer service automation, fragmentation becomes even more dangerous. AI models depend on clean, connected data flows. Without unified infrastructure, insurers risk amplifying errors instead of improving outcomes. In 2026, competitive insurers will be those that connect underwriting, claims, compliance, and customer engagement into a single operational ecosystem that supports speed, transparency, and regulatory confidence at scale.

The Road Ahead

As 2026 unfolds, enterprises face a simple but demanding reality: disconnected operations cannot keep pace with the speed of modern business. Markets move faster. Customer expectations evolve rapidly. Talent demands better work environments.

Connectivity is the foundation for workforce adaptability, operational resilience, and sustainable growth.

Organizations that embrace this will create environments where people can focus on meaningful work instead of navigating complexity. Those that ignore it will find themselves constrained by systems that no longer serve their ambitions.

Insurance Struggles With Digital Friction

Clunky insurance experiences are now a competitive liability, driving customer churn and employee turnover in equal measure.


As usability expectations have been pushed to the max and user experience has become increasingly commoditized, the clunky and confusing experiences that still dominate insurance—both internally and in customer-facing products—have become more noticeable and less acceptable.

Customers feel it, and employees do, too.

Recent research by Insurify shows that one in four younger customers has switched insurance carriers due to frustrating digital interactions. At the same time, seven in 10 young employees say they would consider changing jobs for better workplace technology, according to a study published last year by Adobe.

These trends point to something many insurance organizations are already experiencing firsthand: poor digital experiences are no longer just a usability issue. They affect retention, operational efficiency, and ultimately competitive advantage. In other words, they are a liability.

And yet, despite years of investment in digital transformation, friction still defines many insurance interactions. Why does the industry continue to struggle here?

Why Insurance Experiences Still Lag Behind

Part of the explanation lies in how insurance experiences were created in the first place.

In many cases, what organizations call "digital products" are not truly designed experiences in the way, say, apps like Uber or Slack might be. They are more like digitized manual processes.

Over the past few decades, insurers have gradually translated manual workflows into software—policy administration, quoting, underwriting, claims management—often without fundamentally rethinking how those processes should work in a digital environment. New platforms, integrations, and features have been layered onto existing infrastructure over time, producing systems that reflect the history of the business rather than the needs of modern users.

Insurance also operates within an unusually complex ecosystem. Digital tools frequently need to support multiple audiences simultaneously: customers, agents, brokers, underwriters, customer service representatives, employers, benefits administrators, and third-party partners. Each group interacts with the same underlying systems but with different goals, responsibilities, and expectations.

When digital experiences are built within this environment without a clear design strategy, complexity has a tendency to surface directly in the interface. What should feel like a coherent system instead begins to resemble a collection of disconnected workflows and tools.

In practice, this friction tends to appear in a few clear ways.

Three Patterns of Friction

Across the insurance ecosystem—from consumer apps to broker portals to internal platforms—we frequently see friction emerge in three distinct forms.

These patterns are not necessarily the result of poor decisions or weak design teams. More often, they reflect structural realities within the industry itself. Understanding them won't fix anything on its own, but it does explain why so many digital experiences in insurance feel more difficult to use than they should—and it points toward design solutions that can resolve much of this friction.

Role friction

Insurance systems often serve a wide range of users at once. Customers may use the same platform that agents rely on for quoting or that underwriters use to evaluate submissions. In benefits ecosystems, carriers, employers, brokers, and employees may all interact with overlapping systems.

When experiences fail to account for these differences, it becomes difficult for people to understand what they are responsible for or what actions they are permitted to take. Workflows slow down, ownership becomes ambiguous, and teams begin to rely on manual coordination outside the system—emails, calls, spreadsheets—to move work forward.

Offering friction

A second type of friction emerges when products that are conceptually connected are delivered through disconnected experiences.

Insurance offerings often span multiple policies, services, or programs. A household may purchase auto, renters, and umbrella coverage from the same carrier. A broker may assemble a coverage package across several products. Employees may navigate benefits that combine insurance coverage with wellness programs or leave management services.

Although these offerings are experienced as part of a single relationship, they are frequently delivered through separate systems and workflows. From the user's perspective, what should feel like one cohesive service instead becomes a series of disconnected touchpoints.

Mission friction

A third type of friction arises when organizations themselves are not aligned on what a digital product is meant to accomplish. This is more common than you may think.

Insurance portals and applications often accumulate features over time as different teams add capabilities to support their own goals—sales, servicing, compliance, reporting, relationship management. Without a clear shared vision and objective guiding the experience, these additions can gradually pull the product in competing directions.

For the people using these systems, the result is an experience that feels incoherent. Users may struggle to determine where to begin, which workflows are most relevant, or what the platform is ultimately designed to help them do.

Designing Through Complexity

The complexity that produces these forms of friction is not unique to any single insurer. It is a product of the industry itself. Insurance ecosystems involve multiple stakeholders, layered products, regulatory constraints, and long-standing organizational structures.

Because of this, the goal should not necessarily be to eliminate friction altogether. In many cases, some friction is necessary. Verification steps, disclosures, and safeguards often exist to protect customers and ensure that risk decisions are made responsibly.

The challenge is distinguishing between the kinds of friction that add value to users and the kinds that simply make systems harder to use.

Human-centered design plays an important role here because it shifts the starting point for digital experiences. Rather than organizing systems around internal structures or historical processes, it begins with the people who rely on those systems every day and the tasks they are trying to accomplish.

When digital products are designed with that perspective in mind, complexity does not disappear—but it can be absorbed and structured in ways that make the experience feel far more usable.

Looking More Closely at Friction in Insurance

In a recent report, we at Cake & Arrow took a deeper look at how these patterns of friction show up across B2C, B2B, and B2B2C insurance experiences. The report explores why these dynamics persist, how they shape day-to-day interactions with insurance systems, and how design teams can begin addressing them in practical ways.

The industry will likely never achieve completely frictionless experiences—and there are good reasons for this. But understanding where friction comes from is critical to designing systems that serve the people who depend on them.

For a deeper exploration of these ideas and practical design solutions for reducing friction in digital insurance experiences, download our full report, Tackling Friction in Insurance Through Design.


Emily Smith Cardineau


Emily Smith Cardineau is the Director of Content & Insights at Cake & Arrow, a customer experience agency providing end-to-end digital products and services that help insurance companies redefine customer experience.

An Urgent Need for Post-Quantum Cryptography

Organizations delaying the shift to post-quantum cryptography face major risks, as classical encryption schemes may break.


While researching the Titanic recently, I was struck by something profound: the ship received numerous warning signs that could have prevented the catastrophic disaster of 1912. More than a century later, organizations continue making the same mistake, ignoring blatant warnings about pending disasters.

Today's iceberg? The quantum computing revolution that threatens to render our current cryptography obsolete.

The Warning Signs Are Already Here

Any entity using digital networks to store sensitive data needs to move away from classical cryptography toward post-quantum cryptography (PQC) standards. Organizations that fail to course-correct risk drifting into danger, maintaining classical cryptography instead of implementing the quantum-resistant algorithms that are already available.

This lack of proactive course correction, or what I call "cryptographic drift," creates what is now referred to as cryptographic debt – a burden that builds until it may be too late to avoid disaster. Worse, adversaries are constantly harvesting your data during this drift: slow implementation of PQC algorithms eases the adversarial burden of decrypting that data once a cryptographically relevant quantum computer (CRQC) becomes operationally available. The Titanic didn't sink simply because it drifted off course; it maintained high speed into a known ice field despite numerous warnings that never reached the captain. Everyone was too busy to act.

Sound familiar?

Understanding the Quantum Threat

Quantum computers harness quantum mechanical phenomena, including superposition and entanglement, to process information in fundamentally different ways from classical systems. While classical computers encode data as binary bits (0s and 1s), quantum computers use quantum bits (qubits) that can occupy multiple states at once, potentially delivering exponential speedups for specific problem classes.

Quantum computers using gate-based operations (analogous to classical AND/OR logic gates) have been built with dozens of qubits, though their quality remains inconsistent. Scaling to fully error-corrected systems with logical qubits that can perform substantially more operations likely won't arrive until around 2030. Organizational management needs to understand what lies ahead in the cryptographic space of quantum computing. Advance planning is essential to implement quantum-resistant algorithms before a CRQC arrives on the scene.

The primary organizational risk from quantum computing is that a CRQC could break widely used classical encryption schemes. This threat has prompted formal government action, including OMB Memorandum M-23-02 (Migrating to Post-Quantum Cryptography) and National Security Memorandum 10 (NSM-10, Promoting United States Leadership in Quantum Computing While Mitigating Risk to Vulnerable Cryptographic Systems), which direct federal agencies to take steps toward post-quantum cryptography (PQC) migration. The Department of Defense has issued additional guidance outlining implementation requirements and constraints for PQC adoption across government systems.

Private sector organizations, particularly those working with or seeking to work with government entities, should closely monitor these directives, as compliance will likely become essential for maintaining those relationships.

Planning safeguards your organization against the threat of a CRQC rendering current public-key encryption such as RSA (Rivest, Shamir, and Adleman) and Elliptic Curve Cryptography (ECC) obsolete. It may also mitigate "harvest now, decrypt later" (HNDL) attacks – a continuing threat where adversaries intercept and store encrypted data today, intending to decrypt it once error-correcting quantum computers become capable of breaking today's cryptographic protections.

Recent academic and industry publications have accelerated the timeline for operational CRQCs to on or before 2030, exponentially increasing risk in three critical areas:

  • Business operations disruption
  • Data exposure and breaches
  • Cost of emergency transition

Most forward-thinking organizations are already shifting their encryption ahead of 2030, anticipating moderate impacts to these areas.

Organizations experiencing cryptographic drift will continue operating normally, creating a dangerous illusion of security while adversaries capture encrypted data today for future exploitation. A crypto-agile approach maintains operational continuity while moving to quantum-resistant algorithms that protect data in transit. As shown in the figure, cryptographic debt accumulates over time and can become overwhelming, or even irreversible, as organizations scale, eventually leading to loss of operational functionality and relevance under government mandates and guidance. Wholesale replacement of IT infrastructure is neither practical nor cost-effective for achieving quantum resistance. Instead, crypto-agility enables seamless migration from obsolete encryption to quantum-resistant standards, positioning organizations for future competitiveness through reduced costs, accelerated transition timelines, minimized risk of data compromise, and uninterrupted operations.

The Time to Act Is Now

My advice is simple: start changing course now.

The National Institute of Standards and Technology (NIST) has released its quantum-resistant PQC algorithm standards:

  • FIPS 203 (ML-KEM) - key encapsulation
  • FIPS 204 (ML-DSA) - digital signatures
  • FIPS 205 (SLH-DSA) - stateless hash-based signatures

These standards form the foundation of the post-quantum cryptography migration mandated by government directives like OMB M-23-02 and NSM-10.

Start by inventorying your assets to understand what encryption is currently in use across the enterprise. Focus first on migrating the most operationally critical assets (high value or high impact) to the standardized quantum-resistant algorithms, as they most likely carry the bulk of your sensitive data. For now, the HNDL threat applies chiefly to data in transit, not so much to data in use or data at rest.
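
As a minimal sketch of this inventory-and-prioritize step, the snippet below (asset names, field names, and the priority rule are all hypothetical illustrations, not a prescribed methodology) triages a cryptographic inventory by quantum vulnerability and migration priority:

```python
# Public-key schemes a CRQC is expected to break (via Shor's algorithm)
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256"}

# NIST-standardized post-quantum replacements (FIPS 203/204/205)
QUANTUM_RESISTANT = {"ML-KEM-768", "ML-DSA-65", "SLH-DSA-128s"}

def triage(inventory):
    """Sort assets into migration buckets, highest-impact first."""
    to_migrate = [a for a in inventory if a["algorithm"] in QUANTUM_VULNERABLE]
    already_safe = [a for a in inventory if a["algorithm"] in QUANTUM_RESISTANT]
    # Migrate high-impact, data-in-transit assets first (the HNDL exposure)
    to_migrate.sort(key=lambda a: (a["impact"] != "high", not a["in_transit"]))
    return to_migrate, already_safe

# Hypothetical inventory entries
inventory = [
    {"name": "vpn-gateway", "algorithm": "RSA-2048", "impact": "high", "in_transit": True},
    {"name": "code-signing", "algorithm": "ECDSA-P256", "impact": "low", "in_transit": False},
    {"name": "new-api", "algorithm": "ML-KEM-768", "impact": "high", "in_transit": True},
]

to_migrate, already_safe = triage(inventory)
```

Even a spreadsheet-level version of this triage makes the scope of cryptographic debt visible and gives the migration a defensible ordering.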

Additionally, migrating from TLS 1.2 to TLS 1.3 prepares you to counter a CRQC, because PQC algorithms integrate far more naturally into the TLS 1.3 architecture. This is available now!
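
As one concrete step in that direction, Python's standard `ssl` module can pin TLS 1.3 as the floor for outbound connections today (a minimal sketch; hybrid post-quantum key exchange itself depends on your TLS library and peer support):

```python
import ssl

# Require TLS 1.3 or newer for outbound connections. TLS 1.3's
# handshake is where hybrid post-quantum key exchange is being
# deployed, so setting this floor is a concrete first step.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse TLS 1.2 and below
```

Any connection attempted with this context will now fail rather than silently negotiate down to TLS 1.2.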

Reactive Planning

Migrating only after it's too late and your cryptography has been rendered void by an error-correcting/fault-tolerant quantum computer will dramatically increase the risk of your organization ending up like the Titanic.

Side Note

It took 73 years to find the wreckage, and to date, the Titanic has never been fully recovered from the ocean floor. Let's try not to have that happen to your organization.

The warnings are here. The danger is real. The timeline is shorter than you think. There are mitigations out there now that can be implemented within your organization.

Don't be too busy to change course. Pay attention to the warnings.


Garfield Jones


Dr. Garfield Jones is senior vice president of research and technology for QuSecure. 

Dr. Jones previously served as the associate chief of strategic technology for the Cybersecurity and Infrastructure Security Agency (CISA), DHS, where he led the agency’s post-quantum cryptography (PQC) initiative. Prior to joining DHS, Dr. Jones worked as a systems engineer developing complex weapons, geographic, and information systems for agencies such as Office of Naval Intelligence (ONI), National Geospatial Intelligence Agency (NGA), and the Naval Criminal Investigative Service (NCIS). 

In 2018, he retired from the Army Reserves after serving 25 years (16 years active duty and nine years reservist) as an information systems warrant officer.

Should Brokers Trust Their Insurtech Vendors?

A study finds that two-thirds of brokers believe insurtech vendors overstate ROI promises, revealing a significant trust gap in the industry.


Insurtech offers the promise of transformation, but new data suggest brokers appear skeptical. Findings from the 2026 Benevolent Insurtech Trust Index indicate that broker trust in insurtech and its vendors across several trust dimensions is not high.

Consider:

  • 67% of broker respondents believe insurtech promises of time savings, efficiency and ROI (return on investment) are overstated;
  • 22% of respondents feel that vendors are honest about features, pricing and implementation during the sales process;
  • 23% of respondents feel that vendors can be counted on to do what is right;
  • 9% of respondents agree that vendors have made sacrifices for them in the past.

Before going further, two methodological disclosures about the inaugural Benevolent Insurtech Trust Index report. First, 67 brokers from across Canada completed the survey; this sample size means the results are indicative but not generalizable. Second, the findings aggregate attitudes toward several categories of insurtech, including broker management systems (BMS), quoting/rating, email marketing, policy admin systems (PAS), and AI solutions.

Three themes emerged from the study about where trust is breaking down between brokers and insurtech vendors.

The ROI Credibility Gap

When two out of three respondents believe that vendor claims of time savings, efficiency and ROI are overstated, there is a trust gap.

This isn't to say there are no efficiencies or productivity gains that come from using insurtech. Not at all. In fact, 57% of respondents agree that tech adds value to their organization. What is being captured here is the distance between initial expectation and lived experience. It is the feeling that claims or representations of ROI and increased productivity are exaggerated or embellished.

The result is that broker respondents are less likely to take such statements at face value. They want proof. As one respondent stated, "Show me real concrete examples of where our brokerage will see ROI and provide me with contacts that we could follow up with."

Of course, the challenge with relationships is the interpretation of behavior. Humans are meaning-makers, and we assign intent to behavior. As one respondent stated, "So yes tech firms all overstate their ROI and what they can do for you because that's how they get the sale."

Which leads to a second theme from the study: honesty during the sales process.

A Sales Process Brokers Don't Fully Trust

Only 22% of respondents agreed that vendors were honest with them about features, pricing, and implementation during the sales process. As one respondent remarked, "Tech vendors in the insurance space suffer from the over-promise and under-deliver syndrome."

Over-promise. Under-deliver. Overstated claims of ROI. Is it fair to paint every insurtech with this brush? No. But it doesn't really matter.

What matters is the perception that embellishment takes place. Because this is the thought that sticks. It's what gets talked about on convention floors; the "dark social" conversations that can influence buying decisions. Brands and reputations are shaped during these interactions, far from the boardroom table or the shine of new marketing campaigns.

We trust those who we believe will be honest and vulnerable with us, bringing us to the third theme: self-interest and partnering.

Are we really partners?

Consider these two findings: 23% of broker respondents feel that vendors can be counted on to do what is right, and only 9% of respondents agree that vendors have made sacrifices for them in the past.

What do "sacrifices" have to do with economic relationships? Sacrifices are an indicator of partnering behavior, of a willingness to put the interests of the other before our own. What respondents are saying is that they feel vendors are more inclined to put their own interests first, ahead of customer interests. That is, they expect vendors to behave in a self-interested way.

Building trust: What brokers are asking for

Transparency in pricing. Honest product roadmap discussions. Realistic implementation timelines and deliverables. These topped the list of ways brokers suggested vendors improve trust. As one broker offered, "trust grows with insurtech when (vendors) stop overselling roadmap features."

In addition, providing realistic, validated claims about time savings, productivity gains and ROI would also go a long way to strengthening feelings of trust. The opportunity and responsibility are shared between marketing, sales and service to set these expectations.

It may take time and intentional effort, but trust can be rebuilt, especially when shared interests are aligned. One respondent offered a clear partnering view, "Real insurtech success isn't about disruption; it's about reliability, partnership, and making brokers better at serving clients."

Here is a link to the full 2026 Benevolent Insurtech Trust Index report.


Steve Pieroway


Steve Pieroway is principal at Benevolent Marketing, a B2B insurtech marketing consultancy. 

He is a former insurtech executive, having held leadership roles with Policy Works, Applied Systems Canada, and Trufla. Prior to his insurtech career, Steve wrote a thesis titled, “An Identification-Based Relationship Marketing Model.”

Gig Workers Reshape Insurance Market

As gig workers untether from employer-sponsored benefits, insurers must reimagine underwriting and distribution for a decentralized workforce.


Gig workers are doing more than just delivering a late-night burger. They are also redefining the insurance market.

For decades, insurance in the United States has been tightly linked to employment. Health coverage, disability insurance, workers' compensation and even retirement planning have traditionally flowed through the employer-employee relationship.

But that structure is no longer the only model.

The gig economy has moved from the margins to the mainstream. Ride-share drivers, freelance designers, delivery couriers, independent contractors, handymen, and housekeepers now represent a substantial share of the labor force.

By some estimates, gig work serves as the primary employment for nearly 30% of Americans. The figure varies by methodology, but the message is clear: More workers are untethered from traditional employer-sponsored benefits.

For insurers, that shift presents both challenges and opportunity.

Structural challenges

Gig workers do not resemble traditional employee groups. Their income is often unstable and irregular. Schedules can change weekly or even daily. And risk exposures vary widely.

A ride-share driver faces auto liability and commercial-use limitations. Freelancers face professional liability exposures. Couriers have accident and asset-damage risks.

There is no single risk profile.

At the same time, many gig workers hesitate to lock into long-term financial commitments. Annual policies, auto-renew contracts, and complex benefit structures can feel misaligned with income volatility.

Because they are not traditional employees, many gig workers lack protections such as unemployment insurance, workers' compensation, disability benefits, and, importantly, employer-subsidized health insurance.

That leaves individuals to self-insure, go without coverage, or seek alternatives in the individual market.

For health insurance, the primary destination is the Affordable Care Act marketplace. But offering a policy on the marketplace alone does not solve the strategic challenge for insurers. Insurers need to focus on how to differentiate to a price-sensitive, digitally savvy, transient customer base.

Where insurers can compete

If gig workers are going to shop on the individual market, carriers must think beyond simply listing a compliant plan.

Some insurers are experimenting with allowing gig workers to form quasi-group pools that are similar in concept to co-ops. In some industries, trade associations offer tailored policies. While those aren't subsidized by an employer, they often do come at a discount.

Some gig platforms themselves have introduced limited coverage options, embedding insurance offerings directly into their ecosystems.

Other gig workers rely on traditional brokers to guide them through individual plan selection.

On certain lines, insurers are developing on-demand coverage. These policies activate by the hour or project. The model aligns more closely with how gig workers think about risk: tied to a task, not a calendar year.

Short-term health plans also enter the conversation. These promise affordability and flexibility. But they carry significant limitations in benefits, underwriting protections, and long-term stability, and their coverage often falls short of what Affordable Care Act-compliant policies offer. They can also come with punishing pre-existing condition restrictions.

Strategic adjustments for insurers

The gig economy is not a monolith. A ride-share driver, a freelance consultant, and a home-repair contractor do not have identical risk profiles. Insurers that treat gig workers as a single market will struggle.

Instead, carriers should:

  • Identify professional subgroups and underwrite accordingly
  • Customize policy structures to reflect income volatility
  • Craft messaging that emphasizes portability and flexibility
  • Select digital-first distribution channels where gig workers already operate

The broader point is that insurance has historically relied on employment as the organizing principle for risk pooling. As work becomes more decentralized, insurers must build new organizing principles, especially ones centered on profession, platform, behavior, and usage.

Carriers that adapt their products, underwriting, and engagement strategies accordingly will be positioned to serve a workforce that is no longer defined by the W-2.

AI Deepfakes Drive Surge in Insurance Fraud

Deepfakes and AI-generated fraud are infiltrating claims intake, pushing carriers to deploy homeland security-grade biometric verification tools.


While AI promises unprecedented speed and efficiency for insurers, it also equips bad actors with a dangerous new arsenal. Today, the barrier to entry for complex fraud is lower than ever, with "synthetic fraud"—driven by deepfakes and AI-generated identities—becoming one of the most critical risk management challenges facing carriers.

The Threat Landscape: Deepfakes and Identity Theft

Fraudsters are no longer relying merely on staged accidents or exaggerated injuries. They are using generative AI to fabricate reality. From cloning the voices of policyholders to generating hyper-realistic images of vehicle damage that never occurred, the intake pipeline is under siege.

  • Deepfake Audio & Video: Scammers use synthetic voice cloning to bypass call center authentication, impersonating policyholders to redirect payouts or authorize fraudulent claims.
  • Fabricated Evidence: AI image generators can seamlessly doctor photos, adding severe structural damage to an otherwise pristine vehicle, or placing a vehicle at a fake accident scene.

Real-World Case Studies

The financial impact of synthetic media is not hypothetical; it is already costing organizations millions.

  • The Global Impersonation Threat: In early 2024, a finance worker at the multinational engineering firm Arup in Hong Kong was duped into transferring $25.6 million. The fraudster used deepfake video technology to impersonate the company's chief financial officer and several colleagues on a live video call.

If corporate finance can be breached this convincingly, automated First Notice of Loss (FNOL) systems are prime targets.

  • The Auto Fraud Spike: Major P&C insurers, including Allianz and LV=, recently reported a staggering 300% increase in claims containing AI-manipulated vehicle images and falsified documents. "Shallowfakes" (basic image splicing) and deepfakes are increasingly being used to inflate repair costs and claim total losses on non-existent damage.

Borrowing Defenses from Homeland Security

To combat military-grade deception, carriers are adopting defense mechanisms originally pioneered by the homeland security and border control sectors.

  • Biometric Liveness Detection: Just as the U.S. Customs and Border Protection (CBP) uses active facial biometric comparison (via their Traveler Verification Service) to ensure travelers are who they say they are, insurers are implementing these tools. This ensures the person filing the claim is a live, physically present human, rather than a 2D photo or AI-injected video stream.
  • Deep Metadata & Forensic Cross-Checking: Security agencies use complex geospatial and cryptographic analysis to track threats. Insurers can apply similar logic to verify the digital provenance of an image, checking light patterns, compression artifacts, and GPS coordinates to ensure a photo wasn't generated in a server room thousands of miles away.
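
As a toy illustration of the metadata-checking idea, the sketch below scans a JPEG byte stream for an EXIF segment. AI-generated images frequently ship with no camera metadata at all, though real forensic pipelines go much deeper (sensor noise, compression history, cryptographic provenance), so treat this as a weak signal only:

```python
def has_exif(data: bytes) -> bool:
    """Scan a JPEG's header segments for an EXIF (APP1) block.

    Absence of EXIF is only a weak signal: missing camera make,
    timestamp, or GPS data is common in AI-generated images, but
    also in legitimately re-saved or privacy-stripped ones.
    """
    if not data.startswith(b"\xff\xd8"):       # not a JPEG (no SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                    # lost marker alignment
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):             # end-of-image or start-of-scan
            break
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                        # APP1/EXIF segment found
        i += 2 + seg_len                       # skip to the next segment
    return False

# A minimal JPEG with an (empty) EXIF segment, and one with no metadata
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00\xff\xd9"
bare = b"\xff\xd8\xff\xd9"
```

Production systems would combine dozens of such signals, not act on any one of them.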

The Solution: A Fortified, Intelligent Intake Pipeline

To safely leverage AI for faster processing without opening the floodgates to fraud, carriers need a solution that inherently distrusts and verifies every piece of intake data.

Cutting-edge intake platforms act as a real-time, forensic gatekeeper. Here is how top insurers are securing the pipeline while accelerating the customer experience:

1. Scene-Level Image Capture: The platform ingests photos directly from the accident scene, immediately analyzing the metadata and image composition for signs of AI tampering or digital manipulation.

2. Audio, Video or Text Description Recording: Capture the user's own description of the incident. This allows for voice biometric validation (preventing cloned audio injections), stress/sentiment analysis, and a variety of cross-references.

3. Behind-the-Scenes Cross-Checking: The system triangulates the visual damage, the spoken narrative, and historical data. It flags inconsistencies—such as a narrative that doesn't match the physics of the visual damage, or geolocation data that conflicts with the reported address.

4. Accelerated Adjudication: By filtering out high-risk synthetic fraud at the source, the system empowers adjusters to make faster, confident decisions on legitimate claims—automating approvals, estimating loss amounts, and instantly routing vehicles for total loss vs. repairable workflows.
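
A toy version of the cross-checking logic in step 3 might look like the sketch below; the field names, thresholds, and rules are illustrative assumptions, not any vendor's actual scoring model:

```python
import math

def flag_claim(claim):
    """Toy rule-based consistency checks on an intake record."""
    flags = []
    # Photo geolocation vs. reported location (coarse distance in degrees)
    lat_diff = abs(claim["photo_lat"] - claim["reported_lat"])
    lon_diff = abs(claim["photo_lon"] - claim["reported_lon"])
    if math.hypot(lat_diff, lon_diff) > 0.5:   # roughly 50+ km apart
        flags.append("geolocation_mismatch")
    # Narrative vs. visual damage: a "minor scrape" story paired with a
    # near-total-loss damage estimate is worth a second look
    if "minor" in claim["narrative"].lower() and claim["damage_score"] > 0.8:
        flags.append("narrative_damage_conflict")
    # Stripped metadata on every photo is a weak synthetic-media signal
    if not claim["photos_have_metadata"]:
        flags.append("missing_photo_metadata")
    return flags

# A hypothetical claim that trips all three rules
claim = {
    "photo_lat": 40.7, "photo_lon": -74.0,
    "reported_lat": 34.1, "reported_lon": -118.2,
    "narrative": "Minor scrape in a parking lot",
    "damage_score": 0.92,
    "photos_have_metadata": False,
}
flags = flag_claim(claim)
```

Flags like these would route a claim to a human adjuster rather than auto-deny it; the point is triage, not adjudication.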

The synthetic era of fraud is already here. By integrating homeland security-grade verification into a seamless digital intake process, carriers can protect their bottom line while delivering the fast, frictionless resolutions their honest policyholders expect.

References & Sources:
  • Hong Kong Deepfake Scam ($25.6M): Incident involving multinational engineering firm Arup. Detailed via FM Magazine and the AI Incident Database.
  • 300% Increase in Auto Fraud: Reports from major insurers regarding the spike in "shallowfake" and deepfake AI-manipulated images. Cited via Allianz UK, The Bateman Group / LV Insurance, and The Zebra.
  • Homeland Security Biometrics: Information on U.S. Customs and Border Protection (CBP) biometric liveness and Traveler Verification Service. Sourced from CBP.gov.

Eliron Ekstein


Eliron Ekstein is co-founder and CEO at RAVIN AI, a deep technology platform that assists insurers and fleets in identifying damage and managing claims.

Prior to RAVIN, Ekstein founded FarePilot, a London-based startup using big data to predict demand for taxis and ride sharing. He was also director of new business development at Shell Energy's Digital Ventures group and mentored multiple technology companies at TechStars and other platforms. 

He has an MBA from London Business School.

The Long View on Insurance's Transformation

To understand where insurance is heading, look at the history of computing — from batch processing to today's instant-answer capabilities. 


I often tell people I've been watching the same movie for decades — it will be 40 years this fall since I started covering IBM as a young pup of a reporter at the Wall Street Journal. I've watched the disruption that hit IBM spread to the rest of the computer industry, then to commerce in general, thanks to the personal computer, internet, search engines, smartphones and now AI. 

Having watched the movie so often, I have a pretty good sense of how today's story lines will play out.

Today, I'll start even earlier than 1986 and offer a quick history of computing because I think the long view provides useful perspective on where insurance is — and where it's going. Some insurance processes are firmly stuck in the 1950s and 1960s, when batch processing was the only game in town. Others have made it to the 1980s and 1990s, with their PCs and networking. Still others are becoming fully modern, as they take advantage of mobile devices and generative AI.

On the theory that every industry is becoming a technology industry, insurers will eventually catch up on all fronts. Understanding where we lag the most and imagining a world where insurance can operate at the speed of Amazon will, I hope, provide a road map that will help us get to that future faster.

So, yes, I've set myself a rather ambitious goal this week.

To understand the starting point for computing (and insurance), think of my college roommate Mike. He was a computer science major, so he was wedded to the campus mainframe. He'd type out a program on a stack of punch cards, hand them in at the window in the computer center... and wait. When his turn finally came on the mainframe, he'd get a printout with the results. Given the complexity of what he was doing, and that even a typo would derail things, he inevitably had errors. So he'd debug the program, type out some more punch cards, turn them in at the window... and wait some more. 

Because turnaround times were shorter at night, after most students had gone back to their rooms, Mike typically stayed out into the wee hours of the morning, napping on a table while waiting for his latest printout. (The way our habits meshed led to a comical relationship, where we sometimes didn't see each other while both were awake for weeks at a time. I'd leave in the morning while he was asleep and, after working a job, not get back until he'd left for the computer center in the evening. He went home on weekends to see his girlfriend, so I'd sometimes find myself asking mutual friends, "Hey, how's Mike? I haven't talked to him in ages. Tell him I said hi.")

Mike's travails were a holdover from the era of batch processing, when a computer could do only one thing at a time. Big efforts, such as processing payroll or reconciling accounting records, were done in a single batch at a time reserved on the mainframe. Mike's programs obviously weren't on anything like accounting's scale, but he still had to run a program in a single batch of cards and wait his turn. 

Even though computing technology has improved by orders of magnitude since Mike and I were in college, a lot of business still operates at the speed of batch processing. You have a meeting on some issue, and a question comes up. Someone is assigned to do some analysis and comes back a week or two or three later with an answer. The issue is discussed again, and another question arises. More analysis over more weeks ensues. The batch processing influence is even stronger in insurance than in most industries because there is so very much data to analyze.

Computer scientists saw early how much better interactive computing would be and spent decades getting us there. By the '60s and '70s, time-sharing became possible. The setup was awkward: You had a keyboard and printer but had to type out a program on special tape that you fed into the machine, and turnaround times were painfully slow because you were queueing up behind all the programs running on a distant mainframe or minicomputer. But time-sharing spread the power of computing far beyond the walls of the data center. (Bill Gates got his career started on a time-sharing terminal at his high school. I, too, had access to a terminal in high school but somehow didn't do as much with it as he did. Alas.)

By the late 1970s and into the 1980s, Xerox PARC had worked its magic, and the Apple II and then the IBM PC were putting real power on individuals' desktops. The computers delivered big benefits to business because of the electronic spreadsheet but otherwise proved to be rather limited when used in isolation. Fortunately, Xerox took care of that issue, too, with the Ethernet networking standard that let businesses link their in-house computers. Then the internet took networking into the stratosphere thanks to the World Wide Web's debut in 1989 and the Mosaic browser in 1993. By the late 1990s, search engines were doing a good job of fulfilling Google's goal "to organize the world's information and make it universally accessible and useful." Then smartphones, led by the iPhone debut in 2007, put all the computing power and information in our hands. Generative AI is now letting us gather, process and use far more of the world's data than we humans could ever do on our own.

Big tech has taken advantage of the remarkable progression of technology to gather all sorts of signals about individuals (many of which I wish they didn't have) and target us with ads, with memes that keep us engaged, with dynamic pricing that maximizes their clients' revenue. Progress in other spheres is more uneven, but you can look at big retailers like Amazon and Walmart and see how they sense demand and respond to it in real time.

I'd say insurance has done a so-so job of taking advantage — acknowledging that our situation is complicated by heavy regulation and by the confusion of state-by-state oversight in the U.S. A lot of insurance work is still in a sort of batch mode — the analysis of loss runs, actuarial tables, and so on. While insurers have taken advantage of all the power on the desktop that PCs provide, I'm not sure we've done the best job of internal networking — why, for instance, isn't claims data always fed in real time to underwriters to inform future decisions? Insurers certainly haven't been great about taking advantage of all the information that's out there beyond their four walls; they're starting to figure out what data to trust and how to absorb it, but they've been slow. Insurers are also still figuring out what to do about smartphones. Yes, every company has an app these days, but my impression is that customers still want to be able to do a lot more self-service via phones than is possible today.

I'll withhold judgment on how insurance is doing on gen AI. We're headed in some good directions, using it for information gathering and initial processing in claims, underwriting and agencies, but we clearly haven't figured gen AI out yet — then again, nobody has, so we're in good company.

The nice thing is that, whatever our inadequacies to this point, our version of the technology movie can have a happy ending, for two reasons. One is that any new computer technology builds on everything that's come before in an exponential way. We're not just adding a gen AI capability alongside an information or networking capability; gen AI raises to some power what was ushered in by smartphones, which in turn raised everything that came before them to some power. The second reason is that we don't have to build the capability. The tech giants have done that over the past 75 years; we just have to take advantage. They're not done yet, either: The latest figure I saw is that the five biggest AI companies are investing $700 billion in infrastructure this year alone.

To me, the happy ending will come in a decade or so, when insurance can fully switch from batch processing to what I think of as conversational computing. You no longer raise a question in a meeting and send someone off to study the issue for weeks. You ask a question, and your AI uses all the internal and external information available to provide an answer. Loss runs and actuarial tables don't require massive studies. You converse with your computer and get the answers you need.

You can see glimmers of this sort of conversational future in some things going on today. Continuous underwriting is one great example. Why wait for an annual review of a policy when aerial imaging can tell you that a homeowner has added a pool, or when an AI monitoring the internet can tell you that a restaurant has added a drinks menu or delivery options? Why not take advantage of the ability to sense what's going on among clients and prospects and respond?

Embedded insurance is another example. Why should selling an insurance policy always be a formal project? Why not just use the ability to sense when a customer might want coverage and respond?

Technology never stops moving. Moore's law made sure of that for decades, with what became a sort of mandate for semiconductor makers to double the power of a chip every year and a half to two years at no increase in cost, and other forces, such as AI, are now amplifying those gains in capability by orders of magnitude. I figure I've gone through six tech revolutions since I debuted on the computer beat in 1986, and we could be in the middle of the next one, with agentic AI.

For insurers, I hope a look at the history of computing identifies some spots where we can and should improve. But I mostly hope the history shows us that we're headed toward a conversational future, where we ask questions and get answers in real time — and hope insurers will construct road maps toward that future so every incremental decision on IT can keep us moving in the right direction. Just imagine what insurance could look like at the speed of Amazon.

Cheers,

Paul