
AI and Discrimination in Insurance

This past summer, a group of African-American YouTubers filed a putative class action against YouTube and its parent, Alphabet. The suit alleges that YouTube’s AI algorithms have been applying “Restricted Mode” to videos posted by people of color, regardless of whether those videos actually featured elements YouTube restricts, such as profanity, drug use, violence, sexual assault or details about events resulting in death. The lawsuit alleges that this labeling has occurred through targeting video keywords like “Black Lives Matter,” “BLM,” “racial profiling,” “police shooting” or “KKK.” YouTube says its algorithms do not identify the race of the poster.

Whether the allegations are true or not, the case illustrates AI’s potential for inadvertent discrimination. It is easy to see how an algorithm could learn to use variables seemingly unrelated to race, sex, religion or another protected class to predict the outcomes it was designed to target. In the YouTube example, we could imagine the algorithm noting a link between the mentioned keywords and videos depicting violence, thus adding the keywords to factors it weighs when deciding whether Restricted Mode should be applied to a given video. The algorithm is simply programmed to restrict sequences containing violence, but in such a situation it could end up illegally restricting videos posted by African-American activists that depict no such content.
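
This proxy effect can be sketched with a toy example (entirely synthetic data and hypothetical keywords, not YouTube's actual system): a rule trained only to flag violence ends up flagging correlated keywords, and therefore also flags nonviolent videos that use them.

```python
# Illustrative sketch with invented data: a rule meant to catch violence
# learns keywords that merely co-occurred with violence in training data.

# Hypothetical labeled examples: (keywords, actually_violent)
videos = [
    ({"police shooting", "news"}, True),
    ({"police shooting", "protest"}, False),
    ({"BLM", "march"}, False),
    ({"cooking"}, False),
    ({"fight", "street"}, True),
]

# Naive "training": collect every keyword ever seen on a violent video.
violent_keywords = set()
for keywords, violent in videos:
    if violent:
        violent_keywords |= keywords

def restrict(keywords):
    """Flag a video if it shares any keyword with past violent videos."""
    return bool(keywords & violent_keywords)

# A nonviolent activist video is restricted because "police shooting"
# co-occurred with violence in the training examples.
print(restrict({"police shooting", "vigil"}))  # True, despite no violence
print(restrict({"gardening"}))                 # False
```

The rule never sees the poster's race, yet its outcomes can still correlate with it through the keywords it learned.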

In response to such potential pitfalls, the NAIC this past August issued a set of principles regarding AI. The set includes principles about transparency, accountability, compliance, fairness and ethics. The only way to ensure that compliance, fairness and ethical standards are maintained is for AI actors to be accountable for the AI they use and create, and the only way for these actors to properly monitor their AI tools is by ensuring transparency.

As Novarica’s most recent joint report with the law firm Locke Lord on insurance technology and regulatory compliance notes, all states follow some version of the NAIC’s Unfair Trade Practice Act (“Model Act”), “which prohibits, generally, the unfair discrimination of ‘individuals or risks of the same class and of essentially the same hazard’ with respect to both rates and insurability.” There are many possible insurance use cases that AI and data-based technology enable, like analytics-driven targeting, pre-underwriting, rules-based offer guidance and pre-fill data. Although these capabilities can be delivered without AI, the effort required to do so has historically been prohibitive, meaning that using AI will be essential in the coming years — as will ensuring that AI does not discriminate against protected classes.

A key area for insurers to monitor is the use of third-party data in underwriting processes that may not be directly related to the risk being insured. A good example of this is credit score, the use of which several states have restricted during the pandemic. NYDFS’s Circular No. 1 lists other external consumer data and information sources for underwriting that have “the strong potential to mask the forms of [prohibited] discrimination… Many of these external data sources use geographical data (including community-level mortality, addiction or smoking data), homeownership data, credit information, educational attainment, licensures, civil judgments and court records, which all have the potential to reflect disguised and illegal race-based underwriting.” Insurers must thus have transparency into what factors an algorithm is considering and how it arrives at decisions, and they must be able to adjust the included factors easily.
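
One common way to monitor for this kind of disparate impact is to compare outcome rates across groups; the widely cited "four-fifths" benchmark flags ratios below 0.8. A minimal sketch, using synthetic approval outcomes and illustrative group labels (not any insurer's actual data or any regulator's prescribed test):

```python
# Sketch of a disparate-impact check on underwriting outcomes.
# Groups, outcomes and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions):
    """Fraction of applicants approved (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]   # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5, below the common 0.8 "four-fifths" benchmark
```

A ratio well below the benchmark does not prove illegal discrimination, but it is exactly the kind of signal that should trigger a review of which underwriting factors the algorithm is weighing.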

What will the regulatory future hold? Benjamin Sykes of Locke Lord foresees new model regulations requiring regular data calls on underwriting criteria and risk-scoring methods; certification by insurers that they have performed the analysis needed to avoid any material disparate impact; and a penalty regime focused on restitution, above and beyond the difference in premium, for those hurt by an algorithm’s decisions.

CIOs will need to consider how to handle the evolution of various regulations as they arise and their implications for how third-party data is used, how machine-learning algorithms are developed and applied and how AI models “learn” to optimize outcomes. Both the regulations and the technology are moving targets, so CIOs and the insurers they represent must keep moving, too.

Innovating Our Way Out of a Crisis

There are a few things we may never see again thanks to COVID-19. Free samples at the grocery store, infrequent cleaning of subway cars, cramped conference rooms, packed elevators, communal office pies and cakes, stigma around sick days and working from home, plastic ball pits for toddlers, buffet restaurants, paper menus, shaking (too many) hands and unpackaged dinner mints (okay, maybe those aren’t a thing any more), to name a few. These probably were not great ideas in the first place, at least not during flu season.

There would seem to be equivalents in insurance regulation – wet signatures, certain paper consumer notices, hard-copy regulatory filings, notarization and in-person continuing education, examinations and office requirements. Insurance regulators across the country have temporarily waived, on an emergency basis, certain of these requirements to varying degrees during the pandemic, to allow licensees and regulators to continue to serve consumers consistent with public health concerns. 

Various states have also waived certain legal and regulatory impediments to real-time insurance transactions. For example, states have permitted temporary termination and limited extensions of coverage and mid-term retroactive refunds and other premium adjustments that more accurately reflected underwriting risk during certain stages of the pandemic.

Recognizing some of these measures could be extended post-COVID without any material impact on state regulatory oversight, the NAIC Innovation and Technology (EX) Task Force and the Innovation and Technology State Contacts recently requested comments to support making permanent various “regulatory relief” and “regulatory accommodations” related to innovation and technology. Read comments from this author and other stakeholders. 

This request for comments is part of a larger effort by the NAIC and individual state insurance regulators to modernize various legal requirements and to encourage and facilitate innovation. Among other areas of focus, we’ve seen it in amendments to the NAIC Unfair Trade Practices Model Act concerning rebating and inducements. These changes, once adopted in the various states, will expressly permit licensees to provide certain loss prevention/mitigation and other value-added services to consumers at no charge or at a discount, without violating the anti-rebating and anti-inducement prohibitions. (California, Illinois and New York already permit these practices.) The NAIC’s commitment to facilitating innovation seems genuine and reliable, having completed this work over the past several months despite the pandemic. But the NAIC also needs continuing and consistent support from its individual state insurance commissioner members and their respective staff. And any meaningful and lasting innovation cannot be accomplished without cooperation and equal participation and commitment from the National Council of Insurance Legislators and its legislator members.

These waivers, accommodations and other relief measures are most welcome. By default, 2020 has served as a pilot program of sorts in which such additional regulatory flexibility was tested in many states. The country appears to have passed the test with flying colors: there has been no discernible negative effect on either of the two pillars of state insurance regulation, solvency regulation and consumer protection.

See also: How to Outperform on Innovation

Can we build back better? In addition to removing unnecessary regulatory requirements and processes, state insurance regulators can facilitate innovation by expediting rate and form filings and by expanding file-and-use provisions, filing exemptions and access to the excess and surplus lines market. These extra steps will more adequately and efficiently address consumer demand for products tailored to their coverage needs in something closer to real time. Recognizing that state insurance regulatory resources are already thin, and that regulators are already overworked and underpaid, it should not be controversial to suggest that industry would provide the financial support necessary for state insurance departments to obtain additional resources, including much-needed expertise around the use of technology and big data in rating and underwriting.

Any requirement, process, delay or extra regulatory cost that does not arguably serve either insurer solvency or consumer protection should be on the table for permanent retirement. It’s time. Before the next pandemic (or extension of this one) and the crisis after that.

Now, let’s share a pie [maybe some mints?] (and a handkerchief) in the elevator on our way to the [holiday?] buffet, and then hold hands and jump in the ball [mosh?] pit!! Who’s with me???

What Should Future of Regulation Be?

It is of course much easier to look back and second-guess regulatory actions. It is far more difficult to propose a way forward and to do so in light of the emerging hot-button issues, including data and the digitization of the industry, insurtech (and regtech), emerging and growing risks, cyber, the Internet of Things (IoT), natural catastrophes, longevity and growing protectionism. The way forward requires consideration of the primary goals of insurance regulation and raises critical questions regarding how regulators prioritize their work and how they interact with one another, with the global industry and with consumers.

We offer below some thoughts and suggestions on these important questions and on how regulation might best move forward over the next 10 years.

Establish a reasonable construct for regulatory relationships.

Relationships matter, and it is imperative for there to be careful consideration of how regulators organize their interactions and reliance on each other. We have some examples in the form of the Solvency II equivalence assessment process, the NAIC’s Qualified Jurisdiction assessment process (under the U.S. credit for reinsurance laws), the NAIC’s accreditation process for the states of the U.S., the U.S.-E.U. Covered Agreement, ComFrame, the IAIS and NAIC’s memorandum of understanding and the IMF financial sector assessment program (FSAP). Each of these provides varying degrees of assessment and regulatory cooperation/reliance.

These processes and protocols, however, have largely emerged on an ad hoc, unilateral basis and in some cases have had a whiff of imperial judgment about them that may not be justified – and certainly is off-putting to counterparties. We would urge regulators to give careful consideration to the goals, guiding principles and the process for achieving greater levels of cooperation and reliance among global regulators.

We hope these efforts would include an appreciation that different approaches/systems can achieve similar results and that no jurisdiction has a monopoly on good solvency regulation. There must also be respect for and recognition of local laws and a recognition that regulatory cooperation and accommodation will benefit regulators, the industry and consumers. Most importantly, regulators need to work together to develop confidence and trust in one another.

The IAIS first coined the phrase “supervisory recognition” in 2009. In March of that year, the IAIS released an “issues paper on group-wide solvency assessment and supervision.” That paper stated that:

“To the extent there is not convergence of supervisory standards and practices, supervisors can pursue processes of ‘supervisory recognition’ in an effort to enhance the effectiveness and efficiency of supervision. Supervisory recognition refers to supervisors choosing to recognize and rely on the work of other supervisors, based on an assessment of the counterpart jurisdiction’s regulatory regime.”

See also: Global Trend Map No. 14: Regulation  

The paper noted the tremendous benefits that can flow from choosing such a path:

“An effective system of supervisory recognition could reduce duplication of effort by the supervisors involved, thereby reducing compliance costs for the insurance industry and enhancing market efficiency. It would also facilitate information sharing and cooperation among those supervisors.”

This is powerful. We urge global insurance regulators to take a step back and consider how they can enhance regulatory effectiveness and efficiency by taking reasonable and prudential steps to recognize effective regulatory regimes, even where these systems are based on different (perhaps significantly different) rules and principles but have a demonstrated track record of effectiveness.

As noted above, we have seen some efforts at supervisory recognition. These include Solvency II’s equivalence assessment process, the NAIC’s accreditation process for other U.S. states, the NAIC “Qualified Jurisdictions” provisions for identifying jurisdictions that U.S. regulators will rely on for purposes of lowering collateral requirements on foreign reinsurers, the E.U.-U.S. Covered Agreement and the IAIS’s Memorandum of Understanding. Some of these processes are more prescriptive than others and risk demanding that regulatory standards be virtually identical to be recognized. This should be avoided.

One size for all is not the way to go.

The alternative approach to recognition of different, but equally effective systems is the pursuit of a harmonized, single set of regulatory standards for global insurers. This approach is much in vogue among some regulators, who assert the “need for a common language” or for “a level playing field” or to avoid “regulatory arbitrage.” Some regulators also argue that common standards will lead to regulatory nirvana, where one set of rules will apply to all global insurers, which will then be able to trade seamlessly throughout all markets.

There are, however, a variety of solvency and capital systems that have proven their effectiveness. These systems are not identical, and indeed they have some profoundly different regulatory structures, accounting rules and other standards, such as the systems deployed in the E.U. (even pre-Solvency II), the U.S., Canada, Japan, Bermuda, Australia, Switzerland and others. Attempting to assert a single system or standard ignores commercial, regulatory, legal, cultural and political realities.

Moreover, we question some of the rationale for pursuing uniform standards, including the need for a common language. We suggest that what is really needed is for regulators to continue to work together, to discuss their respective regulatory regimes and to develop a deep, sophisticated knowledge of how their regimes work. From this, trust will develop, and from that a more effective and efficient system of regulation is possible. The engagement and trust building can happen within supervisory colleges. We have seen it emerge in the context of the E.U.-U.S. regulatory dialogue. We saw it in the context of the E.U.-U.S. Covered Agreement. No one, however, has made a compelling case for why one regulatory language is necessary to establish a close, effective working relationship among regulators.

Similarly, the call for a level playing field sounds good, but it is an amorphous, ambiguous term that is rarely, if ever, defined. Does the “playing field” include just regulatory capital requirements? If so, how about tax, employment rules, social charges? How about 50 subnational regulators versus one national regulator? Guarantee funds? Seeking a level playing field can also be code for, “My system of regulation is heavier, more expensive than yours, so I need to put a regulatory thumb on the scales to make sure you have equally burdensome regulations.” This argument was made for decades in the debate surrounding the U.S. reinsurance collateral rules. We hear it now regarding the burdens of Solvency II. It must be asked, however, whether it is the responsibility of prudential regulators to be leveling playing fields, or should their focus be solely on prudent regulatory standards for their markets.

Finally, the dark specter of regulatory arbitrage is often asserted as a reason to pursue a single regulatory standard, such as the development of the ICS by the IAIS. But one must ask whether there is really a danger of regulatory arbitrage today among global, internationally active insurers. Yes, a vigilant eye needs to be kept for a weak link in the regulatory system, something the IMF FSAP system has sought to do, supervisory colleges can do and the IAIS is well-equipped to do. But using regulatory arbitrage as an argument to drive the establishment of the same standards for all insurers does not seem compelling.

Proportionality is required.

Often, regulators roll out new regulatory initiatives with the assurance that the new rules will be “proportionate” to the targeted insurers. Too often, there is just lip service to this principle. Rarely is it defined; rather, it is tossed out in an attempt to say, “Do not worry, the new rules will not be excessive.” Greater debate and greater commitment to this principle are needed. Clearly, a key component must be a careful cost/benefit analysis of any proposed new standard, with a clear articulation of the perceived danger to be addressed (including the likelihood and severity of impact) and then a credible calculation of the attendant costs, economic and otherwise, to industry and to regulators. In October 2017, the U.K. Treasury Select Committee published a report criticizing the PRA for its excessively strict interpretation of Solvency II and its negative effect on the competitiveness of U.K. insurers. The report concluded that the PRA had enhanced policyholder protection at the expense of increasing the cost of capital for U.K. insurers, which hurt their ability to provide long-term investments and annuities. Although the PRA emphasized its mandate of prudential regulation and policyholder protection, the Treasury Committee reiterated its concern with how the PRA interpreted the principle of proportionality.

Simplicity rather than complexity.

Over the past 10 years, there has been a staggering increase in proposed and enacted regulatory requirements, many of which are catalogued above. There is a danger, however, that increasingly complex regulatory tools can create their own regulatory blind spots and that overly complex regulations can create a regulatory “fog of war.”

Andrew Haldane, executive director at the Bank of England, in August 2012 delivered a paper at a Federal Reserve Bank of Kansas City’s economic policy symposium, titled “The Dog and the Frisbee.” He graphically laid out when less is really more by talking about two ways of catching a Frisbee: One can “weigh a complex array of physical and atmospheric factors, among them wind speed and Frisbee rotation” − or one can simply catch the Frisbee, the way a dog does. Complex rules, Haldane said, may cause people to manage to the rules for fear of falling in conflict with them. The complexity of the rules may induce people to act defensively and focus on the small print at the expense of the bigger picture.

Focusing on the complexity of the banking world, Haldane compared the 20 pages of the Glass-Steagall Act to the 848 pages of Dodd-Frank together with its 30,000 pages of rulemaking, and compared the 18 pages of Basel I to the more than 1,000 pages of Basel III. The fundamental question is whether that additional detail and complexity really adds greater safety to the financial system or has just the opposite effect and significantly increases the cost. Haldane’s analysis provides compelling evidence that increasing the complexity of financial regulation is a recipe for continuing crisis. Accordingly, Haldane calls for a different direction for supervisors with “…fewer (perhaps far fewer), and more (ideally much more) experienced supervisors, operating to a smaller, less detailed rule book.”

Although Haldane’s analysis and discussion focuses on the banking system, his assessment and recommendations should be considered carefully by global insurance regulators. The sheer volume and complexity of rules, models and reports that flood into regulatory bodies raise the real question of who reviews this information, who really understands it and, worst of all, whether a mountain of detailed information creates a false confidence that regulators have good visibility into the risks – particularly the emerging risks – that insurers are facing. A real danger exists of not seeing the forest for the trees.

See also: To Predict the Future, Try Creating It  

Regulation should promote competitiveness rather than protectionism.

At a time when competition has been growing not only within established companies but, more importantly, from outside the traditional industry, protectionism will only inhibit growth and stifle better understanding of risk in a rapidly changing business environment. The goal must be to make the industry more competitive, to encourage the transfer of innovation and to create better ways to address risk, product distribution and climate change. Protectionism will only limit the industry’s potential for growth and is both short-sighted and self-defeating.

Recognition of the importance of positive disruption through insurtech, fintech and innovation.

The consensus is that the insurance industry is ripe for disruption because it has been slow (but is now working hard) to modernize in view of an array of innovative and technological advancements. Equally, regulators are trying to catch up with the rapid changes and are trying to understand the impacts through sandbox experiments and running separate regulatory models. The pace is fast and presents challenges for the regulators. Solvency and policyholder protection remain paramount, but cybersecurity, data protection, artificial intelligence and the digital revolution make advancements every day. Where this will lead is not clear. But changes are happening and regulators must work to understand the impact and need to calibrate regulatory rules to keep up with the industry and encourage innovation.

Regulation must be transparent.

Too often, regulation is drafted in times of crisis or behind closed doors by regulators believing they know better how to protect policyholders and how to prevent abuse of the system. As we have said, getting it right matters. A strong and healthy industry is the best way to protect consumers and policyholders. Industry engagement is essential, and acknowledging and actually incorporating industry’s views is critical. This is particularly true given the dramatic changes in the insurance sector and the need to adapt regulation to new economics, business practices and consumer needs and expectations.

This is an excerpt from a report, the full text of which is available here.

How to Speed Up Product Development

The traditional product development cycle in property and casualty insurance moves at a snail’s pace. Drafts, approvals, revisions, verifications of key details and other steps place months between the moment a product is envisioned and the day it becomes available to customers.

As technology speeds the pace of daily life and business, the traditional product development cycle continues to represent a drag on P&C insurers’ efficiency and bottom line. Here, we discuss some of the biggest pain points in the product development cycle and ways to boost speed without sacrificing quality.

Cycle Slowdown No. 1: Outdated Processes

During the last few decades of the 20th century and into the 21st, speeding up the product development cycle wasn’t on most P&C insurers’ to-do lists, Debbie Marquette wrote in a 2008 issue of the Journal of Insurance Operations. The fax and physical mail options of the time kept pace with the as-needed approach to product development.

Marquette noted that in previous decades, product development not only involved a team, but it often involved in-person meetings. “It was difficult to get all the appropriate parties together for a complete review of the product before the filing,” Marquette wrote, “and, therefore, input from a vital party was sometimes missed, resulting in costly mistakes, re-filing fees and delays in getting important products to market before the competition.”

In the 1990s, the National Association of Insurance Commissioners (NAIC) realized that the rise of computing required a change in the way new insurance products were filed and tracked. The result was the System for Electronic Rate and Form Filing (SERFF).

SERFF’s use rose steadily after its introduction in 1998, and use of the system doubled from 2003 to 2004 alone, according to a 2004 report by the Insurance Journal. By 2009, however, SERFF’s lack of full automation caused some commentators, including Eli Lehrer, to question whether the system needed an update, an overhaul or a total replacement.

Property and casualty insurers adapted to SERFF and the rise of other tech tools such as personal computing, word processors and spreadsheets. Yet adaptation has been slow. Today, many P&C insurers are still stuck in the document-and-spreadsheet phase of product development, requiring members of a product development team to review drafts manually and relying on human attention to detail to spot minor but essential changes.

The result? A product development process that looks remarkably similar to the process of the 1980s. The drafts and research have migrated from paper to screens, but teams must still meet physically or digitally, compare drafts by hand and make decisions — and the need to ensure no crucial detail is missed slows the product development process to a crawl.

See also: P&C Core Systems: Beyond the First Wave  

Cycle Solution No. 1: Better Systems

The technology exists to reduce the time spent in the development process. To date, however, many P&C insurers have been slow to adopt it.

Electronic product management systems streamline the process of product development. The “new-old” way of using email, spreadsheets and PDFs maintains the same walls and oversight difficulties as the “old-old” way of face-to-face meetings and snail mail.

In a system designed for product development, however, information is kept in a single location; automated algorithms can be used to scan for minute differences and to track changes; and tracking and alerts keep everyone on schedule.
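
The automated draft comparison described here is the kind of task software handles well. A minimal sketch using Python's standard difflib module to surface a small wording change between two form versions (the draft text is invented for illustration):

```python
# Surface minute wording differences between two product-form drafts,
# the kind of change a human reviewer can easily miss.
import difflib

draft_v1 = [
    "Coverage applies to direct physical loss.",
    "Deductible: $500 per occurrence.",
]
draft_v2 = [
    "Coverage applies to direct physical loss or damage.",
    "Deductible: $500 per occurrence.",
]

# unified_diff emits only the changed lines plus context, prefixed +/-.
for line in difflib.unified_diff(draft_v1, draft_v2,
                                 fromfile="v1", tofile="v2", lineterm=""):
    print(line)
```

A real product management system would layer approvals, alerts and an audit trail on top, but the core change-detection step is this simple.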

By eliminating barriers, these systems reduce the time required to create a P&C insurance product. They also help reduce errors and save mental bandwidth for team members, allowing them to focus on the salient details of the product rather than on keeping track of their own schedules and paperwork.

Cycle Slowdown No. 2: Differentiation and Specificity

Once upon a time, P&C insurers’ products competed primarily on price. As a result, there was little need to differentiate products from other products sold by the same insurer or from similar insurance products sold by competitors. During product development, insurers allowed differentiation to take a backseat to other issues.

“Prior to the mid-1990s,” Cognizant notes in a recent white paper, “insurance distributors held most of the knowledge regarding insurance products, pricing and processes — requiring customers to have the assistance of an intermediary.”

Today, however, customers know more than ever. They’re also more capable than ever of comparing P&C insurance products based on multiple factors, not only on price. That means insurance companies are now focusing on differentiation during product development — which adds time to the process required to bring an insurance product to market.

Cycle Solution No. 2: Automation

Automation tools can be employed during the product development cycle to provide better insight, track behavior to identify unfilled niches for products and lay the foundation for a strong product launch.

As Frank Memmo Jr. and Ryan Knopp note in ThinkAdvisor, omnichannel software solutions provide a number of customer-facing benefits. A system that gathers, stores and tracks customer data — and that communicates with a product management system — provides profound insights to its insurance company, as well. When automation is used to gather and analyze data, it can significantly shorten the time required to develop insurance products that respond to customers’ ever-changing needs.

“An enterprise-wide solution enables workflow-driven processes that ensure all participants in the process review and sign off where required,” Brian Abajah writes at Turnkey Africa. “Subsequently, there is reduction in product development costs and bottlenecks to result in improved speed-to-market and quality products as well as the ability to develop and modify products concurrently leading to increased revenue.”

The Future of Development: Takeaways for P&C Insurers

Insurtech has taken the lead in aligning property and casualty insurers with the pace of modern digital life. It’s not surprising, for example, that Capgemini’s Top Ten Trends in Property & Casualty Insurance 2018 are all tech-related, from the use of analytics and advanced algorithms to track customer behavior to the ways that drones and automated vehicles change the way insurers think about and assess risk.

It’s also not surprising, then, that companies using technology from 1998 find themselves stuck in a 20th-century pace of product development — and, increasingly, with 20th-century products.

See also: How Not to Transform P&C Core Systems  

As a McKinsey white paper notes, the digital revolution in insurance not only has the potential to change the way in which insurance products are developed, but also to change the products themselves. Digital insurance coverages are on the rise, and demand is expected to increase as the first generation of digital natives begins to reach adulthood.

Alan Walker at Capgemini recently predicted that in the near future property and casualty insurance product development will become modular. “Modular design enables myriad new products to be developed quickly and easily,” Walker says.

It also allows insurers to respond more nimbly to customers’ demands for personalized coverage. And while the boardroom and paperwork approach to development is ill-equipped to handle modular products, many product development and management systems can adapt easily to such an approach.

“Insurance products embody each insurance company’s understanding of the future,” Donald Light, a director at Celent, wrote in 2006. “As an insurance company’s view of possible gains, losses, risks and opportunities change, its products must change.”

Twelve years later, Light’s words remain true. Not only must insurance company products change, but so must the processes by which companies envision, develop and edit those products.

Just as the fax machine and email changed insurance in previous decades, the rise of analytics and big data stands to revolutionize — and to speed up — the product development process.

Time for E-Signatures, Doc Management

If you want to know why insurance companies need electronic signatures and document management, you must first look at the regulatory landscape.

In the past 10 years, this climate has changed considerably, and most insurance companies are struggling to do one of two things to handle these changes: 1) make internal policies to comply with these changes without sacrificing profitability; and 2) find creative ways to outpace competitors looking for the same solutions to these problems.

Neither is an easy feat.

The National Association of Insurance Commissioners (NAIC) has even devoted a large portion of its industry report to addressing one of the myriad ways insurance companies are striving to transcend regulatory difficulties—through the efficiency of the internet.

This is a major reason why insurance companies need both electronic signatures and document management. Used separately, they are ineffective at delivering the solutions insurance companies need. Together, their interplay makes navigating regulatory changes easy, especially those administered and upheld by the Federal Insurance Office (FIO) and NAIC.

Understanding E-Commerce and Insurance Sales Problems

Most states in the U.S. require those applying for insurance services over the internet to complete an electronic signature, whether it is used as a standalone technology or integrates with document management technologies. Although the approach may seem like common sense, its advent does away with the use of a witness or notary and brings into question the legitimacy of signatures.

See also: The Most Valuable Document That Money Can Buy  

Despite digital signatures being more efficient (after all, if e-signatures existed in 1776, all 56 U.S. delegates could’ve signed the document on the day our nation was founded; instead, it took roughly a month to collect all the signatures), they require additional authentications. This can be automated by document management tools.

Legitimizing Electronic Insurance Applications

ACORD, the Association for Cooperative Operations Research and Development, achieved this automation by making digital forms available on its domain. Electronic signature technology embedded in document management solutions then needs only to be applied during the final stages of the process.

Why the Need Is Paramount

Above all else, these are the features that create an effective interplay between document management technologies and electronic signatures.

Authentication Procedures

Inclusion of a KBA challenge question helps authenticate the digital signature process. This ensures that the party attempting to sign a document is who they claim to be.

IP Address Verification

IP address verification is an extra layer that can bolster the legitimacy of a signed document if a legal dispute over its authenticity ever arises.
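
The KBA and IP-verification layers just described can be captured together in a signature audit record. A hedged sketch, with illustrative field names rather than any specific vendor's schema; the document hash also makes later tampering detectable:

```python
# Record a KBA result and signer IP alongside an e-signature event so
# authenticity can be demonstrated if a dispute arises. Field names are
# illustrative assumptions, not a real product's schema.
import hashlib
import json
from datetime import datetime, timezone

def signature_audit_record(document_bytes, signer_email, signer_ip, kba_passed):
    return {
        # Hash of the signed document: any later alteration changes it.
        "doc_sha256": hashlib.sha256(document_bytes).hexdigest(),
        "signer": signer_email,
        "ip_address": signer_ip,       # extra evidence of who signed, and from where
        "kba_passed": kba_passed,      # outcome of the challenge question
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }

record = signature_audit_record(b"policy form A-1", "jane@example.com",
                                "203.0.113.7", kba_passed=True)
print(json.dumps(record, indent=2))
```

Stored alongside the document, a record like this lets an insurer show both that the signer was authenticated and that the signed content has not changed since.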

Form Fill Automation

There are new and exciting ways to automate the form-fill process for recurring client-based and document-related processes. Zonal OCR makes this possible, eliminating manual processes and reducing document workload to a bare minimum.

See also: E-Signatures: an Easy Tech Win  

Bar Code Authentication

Although a bar code authentication in an electronic signature should never be a standalone backup, it does add a layer of legitimacy. A bar code is a stamp of individuality that reveals its purpose and origins quite clearly.

Ensuring Data in Documents is Unaltered

It becomes obvious that electronic signatures are more useful if applied through document management technologies, as these technologies ensure documentation is not altered.

What’s more, the role-based user permissions of a document management system can trace who changed what within a system, ensuring that those who alter data without authorization can be held accountable for their actions.
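
A minimal sketch of such role-based permissions with an audit trail (role names, users and fields are illustrative): every attempted edit is logged with the user who made it, whether or not it was allowed.

```python
# Role-based edit control with an audit trail: changes by authorized users
# are applied and logged; unauthorized attempts are logged and refused.
ROLES = {"alice": "editor", "bob": "viewer"}   # illustrative assignments

audit_log = []

def edit_field(user, doc, field, value):
    """Apply an edit if the user is an editor; log every attempt."""
    if ROLES.get(user) != "editor":
        audit_log.append({"user": user, "field": field, "allowed": False})
        raise PermissionError(f"{user} may not edit documents")
    doc[field] = value
    audit_log.append({"user": user, "field": field, "allowed": True})

doc = {"premium": 1000}
edit_field("alice", doc, "premium", 1200)      # allowed and logged
try:
    edit_field("bob", doc, "premium", 900)     # denied, but still logged
except PermissionError as exc:
    print(exc)
print(doc["premium"])  # 1200
```

The log entry for the denied attempt is what makes unauthorized alteration traceable after the fact.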