Winning With Digital Confidence

Today, if there’s a problem with the heat or hot water in your hotel room, you call the front desk and wait for maintenance to arrive. At some chains, you have the option of reporting the issue using a mobile device. But in the near future, many hotel rooms will be wired with connected devices that report potential breakdowns to maintenance and may even automatically fix them. For example, smart-building technology will turn the heat up when your app’s locator notices you are on the way back to your room.

Of course, such developments have significant implications for hotel staff. George Corbin thinks about them from a scientific perspective. As the senior vice president of digital at Marriott, Corbin oversees Marriott.com and Marriott mobile, and he is responsible for about $13 billion of the company’s annual revenue. He says the “skills half-life” of a hotel industry worker is about 12 years, at least for those working in conventional areas such as sales, operations and finance. In other words, if people leave jobs in these functions, they could come back in 12 years and half their skills would still be relevant. But on the digital side, the skills half-life shrinks to a mere 18 months, according to Corbin.
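
The half-life arithmetic behind those figures is simple exponential decay. As a rough illustration (the 12-year and 18-month figures are Corbin's; the decay model and function below are assumptions for illustration only):

```python
# Illustration of "skills half-life": the fraction of skills still relevant
# after t years, assuming simple exponential decay (an illustrative model,
# not a claim from the article).

def skills_remaining(years: float, half_life_years: float) -> float:
    """Fraction of skills still relevant after `years`, given a half-life."""
    return 0.5 ** (years / half_life_years)

# Conventional hotel roles: roughly a 12-year half-life (per Corbin).
print(round(skills_remaining(12, 12), 2))   # 0.5  -> half the skills still apply
# Digital roles: roughly an 18-month (1.5-year) half-life.
print(round(skills_remaining(3, 1.5), 2))   # 0.25 -> only a quarter after 3 years
```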

Virtually every other industry faces similar dynamics. Digital competency is practically mandatory in many sectors; if you don’t get on board, you’ll fall behind competitors that do. And yet the knowledge required for widespread digital competency is often in short supply, and the related skills in agility and collaboration are often difficult to achieve in large companies. In a few years, an 18-month skills half-life may seem like a luxury. As a result, many executives’ confidence in their organization’s “Digital IQ” — their ability to harness digital-driven change to unlock value — is at an all-time low.

That’s one of the main findings from the 2017 edition of PwC’s Digital IQ survey. We interviewed more than 2,200 executives from 53 countries whose companies had annual revenues of at least $500 million and found that executive confidence had dropped a stunning 15 percentage points from the year before. These company leaders said they are no better equipped to handle the changes coming their way today than they were in 2007, when we first conducted this survey.

Back in 2007, being a digital company was often seen as synonymous with using information technology. Today, digital has come to mean having an organizational mindset that embraces constant innovation, flat decision making and the integration of technology into all phases of the business. This is a laudable change; however, in many companies, workforce skills and organizational capabilities have not kept pace. As the definition of digital has grown more expansive, company leaders have recognized that there exists a gap between the digital ideal and their digital reality.

See also: Digital Risk Profiling Transforms Insurance  

The ideal is an organization in which everyone has bought into the digital agenda and is capable of supporting it. What does this look like? It’s a company in which the workforce is tech-fluent, with a culture that encourages the kind of collaboration that supports the adoption of digital initiatives. The organizational structure and systems enable leaders to make discerning choices about where to invest in new technologies. The company applies its talent and capabilities to create the best possible user experiences for all of its customers and employees.

Simply upgrading your IT won’t get you there. Instead of spending indiscriminately, start by identifying a tangible business goal tied to a problem that cannot be solved with existing technology or past techniques. Then develop the talent, digital innovation capabilities and user experience to solve it. These three areas are where the new demands of digital competence are most evident. They are all equally important; choosing to focus on just one or two won’t be enough.

Our findings from 10 years of survey data suggest the organizations that can best unite talent, digital innovation capabilities and user experience into a seamless, integrated whole have a higher Digital IQ and are generally further along in their transformation. Our data also shows that the companies that use cross-functional teams and agile approaches, prioritize innovation with dedicated resources and better understand human experience, among other practices, have financial performance superior to that of their peers. It’s time for company leaders to build their digital confidence and their digital acumen; they can’t afford to wait.

Getting Tech-Savvy

“We are now moving into a world with this innovation explosion, where we need full-stack businesspeople,” says Vijay Sondhi, senior vice president of innovation and strategic partnerships at Visa, drawing an analogy to the so-called full-stack engineers who know technology at every level. “We need people who understand tech, who understand business, who understand strategy. Innovation is so broad-based and so well stitched together now that we’re being forced to become much better at multiple skill sets. That’s the only way we’re going to survive and thrive.”

In the past, digital talent could be confined to specialists. Today, having a baseline of tech and design skills is a requirement for every employee. Yet overall digital skill levels have declined even further since our last report, published in 2015. Then, survey respondents said that skills in their organization were insufficient across a range of important areas, including cybersecurity and privacy, business development of new technologies, and user experience and human-centered design. In fact, lack of properly skilled teams was cited this year as the No. 1 hurdle to achieving expected results from digital technology investments; 61% of respondents named it as an existing or emerging barrier. And 25% of respondents said they used external resources, even when they had skilled workers in-house, because it was too difficult or too slow to work with internal teams.

The skills gap is significant, and closing it will require senior leaders to commit to widespread training. They need to teach employees the skills to harness technology, which may include, for example, a new customer platform or an artificial intelligence-supported initiative. They will also need to cross-train workers to be conversant in disciplines outside their own, as well as in skills that can support innovation and collaboration, such as agile approaches or design thinking. Digital change, says Marriott’s Corbin, is driven by using technology in ways that empower human moments. “Rather than replace (human interactions), we are actually finding it’s improving them. We need the human touch to be powered by digital.”

One way that companies can accomplish these goals is by creating a cross-discipline group of specialists located in close proximity (we refer to this as a sandbox), whether physically or virtually, so each can observe how the others work. Such teams encourage interaction, collaboration, freedom and safety among a diverse group of individuals. Rather than working in isolation or only with peer groups, members develop a common working language that allows for the seamless collaboration and increased efficiency vital to moving at the speed of technology. This approach avoids the typical workplace dysfunction that comes with breaking down silos: Because business issues are no longer isolated within one discipline but rather intertwined across many, colleagues from disparate parts of the organization are able to better understand one another and collaborate to come up with creative solutions.

Part product development and part project management, the sandbox approach enables your workforce to visualize the journey from conception to prototype to realization in one continuous image, helping spread innovation throughout the organization. The culture of collaboration can speed the adoption of emerging technologies.

For example, this approach enabled the Make-A-Wish Foundation to bring employees together from across the organization, including some whose role in developing a new tech-based feature may not have been obvious, such as a tax expert and a lawyer. In just three months using this approach, the foundation created and operationalized a crowdfunding platform to benefit sick children.

Investing in the Future

At GE Healthcare, engineers are experimenting with augmented reality and assistant avatars. “Part of my job is to help pull in (great innovations) and apply them through a smart architecture,” says Jon Zimmerman, GE Healthcare’s general manager of value-based care solutions. “The innovations must be mobile native because … our job is to be able to serve people wherever they are. And that is going to include more and more sensors on bodies and, if you will, digital streaming so people can be monitored just as well as a jet engine can be monitored.”

Amid an increasingly crowded field of emerging technologies, companies need strong digital innovation capabilities to guide their decision making. Yet this achievement often proves challenging as a result of organizational and financial constraints. Our survey revealed that fewer companies today have a team dedicated to exploring emerging technologies than was the case in years past. Many are relying on ad hoc teams or outsourcing. Moreover, 49% of companies surveyed said they still determine their adoption of new technologies by evaluating the latest available tools, rather than by evaluating how the technology can meet a specific human or business need.

Equally troubling is that spending on emerging technologies is not much greater today, relative to overall digital technology budgets, than it was a decade ago. In 2007, the average investment in emerging technology was roughly 17% of technology budgets, a surprisingly robust figure at the time. Fast-forward 10 years, and that rate has grown to only about 18%, which may well be inadequate.

It’s time to change these trends.

You’ve identified a problem that existing technology cannot solve, but you shouldn’t just throw money at every shiny new thing. A digital innovation capability must become a central feature of any transformation effort. This approach goes beyond simply evaluating what to buy or where to invest to include how best to organize internal and external resources to find the emerging technologies that most closely match the direction and goals of the business.

Nearly every company is experimenting with what we call the “essential eight” new technologies: the internet of things (IoT), artificial intelligence (AI), robotics, drones, 3D printing, augmented reality (AR), virtual reality (VR) and blockchain. The key is to have a dedicated in-house team with an accountable, systematic approach to determining which of these technologies is critical to evolving the business digitally and which, ultimately, will end up as distractions that provide little value to the overall operation. This approach should include establishing a formal listening framework, learning the true impact of bleeding-edge technologies, sharing results from pilots and quickly scaling throughout the enterprise.

Perhaps most importantly, organizations need to have a certain tolerance for risk and failure when evaluating emerging technologies. Digital transformation requires organizations to be much more limber and rapid in their decision making. Says GE Healthcare’s Zimmerman, “One of our cultural pillars is to embrace constructive conflict. That means that when an organization transitions or transforms, things are going to be different tomorrow than they were yesterday. You must get comfortable with change and be open to the differing thoughts and diverse mind-sets that drive it.”

See also: Systematic Approach to Digital Strategy  

In a promising development, signs indicate that companies are starting to focus on bringing digital innovation capabilities in-house. According to the New York Times, investments by non-technology companies in technology startups grew to $125 billion in 2016, from just $20 billion five years ago. The Times, citing Bloomberg data, also noted that the number of technology companies sold to non-technology companies in 2016 surpassed intra-industry acquisitions for the first time since the internet era began. Walmart, General Motors, Unilever and others are among the non-technology giants that made startup acquisitions last year. General Electric, whose new tagline is, “The digital company. That’s also an industrial company,” spent $1.4 billion in September 2016 buying two 3D printing businesses in Europe.

Other companies are engaging in innovative partnerships. At the annual Consumer Electronics Show in January 2017, Visa, Honda and IPS Group — a developer of internet-enabled smart parking meters — teamed up to unveil a digital technology that lets drivers pay their parking meter tab via an app in the car’s dashboard. By “tokenizing” the car, or allowing it to provision and manage its own credit card credential, they essentially make it an IoT device on wheels. “The car becomes a payment device,” explains Visa’s Sondhi. “And taking it even further, we can turn it into a smart asset by publishing information that’s related to the car onto the blockchain. This can enable a whole host of tasks to be simplified and served up to the driver, such as pushing competitive insurance rates or automatically paying annual registration fees.”

Solving for “X”

At United Airlines, Ravi Simhambhatla, vice president of commercial technology and corporate systems, views digital innovation as a way to break free from habits ingrained in his company over nine decades because they are no longer relevant to its customers and employees. The company plans to use machine learning to create personalized experiences for its customers. For example, when someone books a flight to San Francisco, the company’s algorithm will know if that person is a basketball fan and, if so, offer Golden State Warriors tickets.

“What we have been doing is really looking at our customer and employee journeys with regard to the travel experience and figuring out how we can apply design thinking to those journeys,” says Simhambhatla. “And, as we map out these journeys, we are focused on imagining how, if we had a clean slate, we would build them today.”

With the right digital skills and capabilities comes great opportunity to improve the experience of both your employees and your customers. One constant that emerges from 10 years of Digital IQ surveys is that companies that focus on creating better user experiences report stronger financial performance. But, all too often, user experience is pushed to the back burner of digital priorities. Just 10% of respondents to this year’s survey ranked creating better customer experiences as their top priority, down from 25% a year ago. This imbalance between respondents’ focus on experience and its importance to both customers and employees has far-reaching effects. It creates problems in the marketplace, slows the assimilation of emerging technologies and hinders the ability of organizations to anticipate and adapt to change.

Part of the reason user experience ranks as such a low priority is the fact that CEOs and CIOs, the executives who most often drive digital transformation, are much less likely to be responsible for customer-facing services and applications than for digital strategy investments. As a result, they place a higher priority on revenue growth and increased profitability than on customer and employee experiences. However, user experience is also downgraded because getting it right is extremely difficult. It is expensive, outcome-focused as opposed to deadline-driven and fraught with friction.

However, unlike so many other aspects of technological change, how organizations shape the human experience is completely within their control. Companies need to connect the technology they are seeking to deploy and the behavior change they are looking to create.

Making this connection will only become more critical as emerging technologies such as IoT, AI and VR grow to define the next decade of digital. These — and other technologies that simultaneously embrace consumers, producers and suppliers — will amplify the impact of the distinct behaviors and expectations of these groups on an organization’s digital transformation.

Companies that focus too narrowly on small slivers of the customer experience will struggle to adapt; companies that attend to the overall experience and outcome, seamlessly handling multiple touch points across the customer journey, will succeed. That’s because, when done right, the customer and employee experience translates great strategy, process and technology into something that solves a human or business need. You have the skills and the capabilities; now you need to think creatively about how to use them to improve the user experience in practical yet unexpected ways. Says United’s Simhambhatla, “To me, Digital IQ is all about finding sustainable technology solutions to remove the stress from an experience. This hinges on timely and contextually relevant information and being able to use technology to surprise and delight our customers and, equally, our employees.”

The Human Touch

When talent, innovation and experience come together, it changes the way your company operates. Your digital acumen informs what you do, and how you do it. For example, Visa realized back in 2014 that digital technology was changing not only its core business but also those of its partners so rapidly that it needed to bring its innovation capabilities in-house or risk being too dependent on external sources. It launched its first Innovation Center in 2014; the company now has eight such centers globally, and more are planned.

Visa’s Innovation Centers are designed as collaborative, co-creation facilities for the company and its clients. “The idea was that the pace of change was so fast that we couldn’t develop products and services in a vertically integrated silo. We want the Innovation Centers to be a place where our clients could come in, roll up their sleeves, work with us, and build solutions rapidly within our new, open network,” says Visa’s Sondhi. “The aim is to match the speed and simplicity of today’s social- and mobile-first worlds by ideating with clients to quickly deploy new products into the marketplace in weeks instead of months or quarters.”

See also: Huge Opportunity in Today’s Uncertainty  

Across industries, company leaders have clearly bought into the importance of digital transformation: Sixty-eight percent of our respondents said their CEO is a champion for digital, up from just one-third in 2007. That’s a positive development. But now executives need to move from being champions to leading a company of champions. Understanding what drives your customers’ and employees’ success and how your organization can apply digital technology to facilitate it with a flexible, sustainable approach to innovation will be the deeper meaning of Digital IQ in the next decade.

“It’s the blend that makes the magic,” says GE Healthcare’s Zimmerman. “It’s the high-impact technological innovations, plus the customer opportunities, plus the talent. You have to find a way to blend those things in a way that the markets can absorb, adopt, and gain value from in order to create a sustainable virtuous cycle.”

This article was written by Chris Curran and Tom Puthiyamadam.

In Third Parties We (Mis)trust?

Technology is transforming trust. Never has it been easier to start a commercial relationship across a great geographical distance. With a credible website and reasonable products or services, people are prepared to learn about companies half a world away and enter into commerce with them.

Society is changing radically when people find themselves trusting people with whom they’ve had no prior experience, e.g. on eBay or Facebook, more than the banks they’ve dealt with their whole lives.

Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The History of Trust

Trust leverages a history of relationships to extend credit and benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences and about anticipating what other people will do.

In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant.

People use trusted third parties in many roles in finance, for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:

  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”;
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.

A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database with all its own transactions in it, and these systems are reconciled, often manually and at great expense if something goes wrong.

But in cases where many parties interact and need to keep track of complex sets of transactions, they have traditionally found that creating a centralized ledger is helpful. A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers have appeared in registries (land, shipping, tax), exchanges (stocks, bonds) and libraries (index and borrowing records), to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain aka distributed consensus ledger aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development:

1960/’70s: Databases

The current database paradigm began around 1970 with the invention of the relational model, and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic.

One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and clears past the last possible chargeback: This tentative quality is always attached to data from the outside.

1980/’90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous – and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways.

Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess – to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is because none of them finally solved the problem.

Take the problem of scaling collaboration. Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth … by five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner as they join our network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem – this is banking, this is finance, medicine, electrical grids, food supplies and the government.
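
The arithmetic behind this quadratic quandary is just the number of pairwise connections, n(n-1)/2. A minimal sketch of that growth (the function name is ours, purely illustrative):

```python
# Number of point-to-point integrations needed when every partner
# connects directly to every other partner: n * (n - 1) / 2.

def pairwise_connections(partners: int) -> int:
    return partners * (partners - 1) // 2

for n in (2, 5, 10, 20):
    print(n, pairwise_connections(n))
# 2 1
# 5 10
# 10 45
# 20 190
```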

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization – Visa would be typical – and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly.

But while a third party may be trusted, it doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as “natural monopolies.” Being a hub for others is a license to print money for anybody that achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise.

If you ever wonder what the economic upside of the MDL business might be, just have a think about how big that number is across all forms of trusted third parties.

2000/’10s: Mutual Distributed Ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions. The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new – concurrent and distributed databases have been a research area since at least the 1970s. Z/Yen built its first one in 1995.
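
As a rough sketch of the core idea, the toy below keeps a hash-chained list of entries that every participating node can hold and independently verify. It illustrates the concept only; it is not any particular MDL product or consensus protocol, and the replication step simply copies the list rather than running peer-to-peer consensus:

```python
import hashlib
import json

# Toy mutual distributed ledger: every participant holds the full list of
# entries, and each entry commits to the one before it via a hash chain.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, transaction: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"prev": prev, "tx": transaction}
    entry["hash"] = entry_hash({"prev": prev, "tx": transaction})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Any node can independently check that the shared history is intact."""
    prev = "0" * 64
    for e in ledger:
        if e["prev"] != prev or e["hash"] != entry_hash({"prev": e["prev"], "tx": e["tx"]}):
            return False
        prev = e["hash"]
    return True

# Each participating node keeps its own copy; a consensus protocol (not shown)
# is what keeps the copies identical.
node_a: list = []
append(node_a, {"from": "alice", "to": "bob", "amount": 10})
node_b = list(node_a)                  # copying stands in for peer-to-peer replication
print(verify(node_a), verify(node_b))  # True True
```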

Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:

  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.

If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party, preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques, such as smart contracts and decentralized autonomous organizations, might in future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with “mining on top.” The mining substitutes for trust: “proof of work” is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations! Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior.
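
A toy version of that proof-of-work idea: grind through nonces until a hash meets a difficulty target, so producing the proof is expensive while checking it costs a single hash. This is a simplified illustration, not Bitcoin's actual difficulty scheme or block format:

```python
import hashlib

# Toy proof of work: try nonces until the hash starts with enough zero hex
# digits. Finding the nonce costs compute; verifying it is one hash.

def mine(block_data: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("alice pays bob 10")
print(nonce, verify("alice pays bob 10", nonce))  # costly to find, cheap to check
```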

However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party, validating an asset and identifying community members, becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring Ledgers Meet Smart Contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center that might choose to exploit the natural monopoly. Even alongside smaller trusted third parties, MDLs have some magic properties: the same agreed data on all nodes, “distributed consensus,” rather than data passed around through messages.

In the future, smart contracts can store promises to pay and promises to deliver without having a middleman or exposing people to the risk of fraud. The same logic that secured “currency” in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets there is no counterparty risk because the value to be transferred can be locked into the contract when it is created, and released automatically when the conditions and terms are met: If the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved rather than requiring trustworthy middle-men like ATM machines or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.
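
A minimal sketch of that "value locked in at creation, released when the terms are met" idea, with names and conditions that are ours and purely illustrative:

```python
from typing import Optional

# Toy "smart contract": the asset is locked into the contract when it is
# created and can only move according to the rule encoded up front.

class FuturesLikeContract:
    def __init__(self, amount: float, pay_to: str, release_on_date: str):
        self.locked_amount = amount        # value held by the contract itself
        self.pay_to = pay_to
        self.release_on_date = release_on_date
        self.settled = False

    def settle(self, today: str) -> Optional[str]:
        """Pay out automatically once the agreed date arrives; no middleman decides."""
        if not self.settled and today >= self.release_on_date:
            self.settled = True
            return f"pay {self.locked_amount} to {self.pay_to}"
        return None

contract = FuturesLikeContract(100.0, "bob", "2026-01-01")
print(contract.settle("2025-06-30"))  # None: conditions not yet met
print(contract.settle("2026-01-02"))  # pay 100.0 to bob
```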

Long Finance has a Zen-style koan, “if you have trust I shall give you trust; if you have no trust I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM machine, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses, as reducing the cost of trust and fixing problems allows new kinds of enterprises to be profitable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: The core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity (such as Anti-Money-Laundering/Know-Your-Customer checks backed by indemnity) and on asset validation than on transaction fees.

A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.

Chip Cards Will Cut Cyber Fraud — for Now

Visa has released data showing adoption of Visa chip cards by U.S. banks and merchants is gathering steam.

But the capacity for Europay-Mastercard-Visa (EMV) chip cards to swiftly and drastically reduce payment card fraud in the U.S. is by no means assured.

Just look north to Canada, where EMV cards have been in wide use since 2011. Criminals have simply shifted fraudulent use of payment card accounts to online purchases—where the physical card does not come into play. Security and banking experts expect a similar pattern to play out in the U.S., where banks and merchants are under an October 2015 deadline, imposed by Visa and MasterCard, for adopting EMV systems.

Heeding that deadline, major retail chains and big banks are driving up adoption numbers in the U.S. However, thousands of small and mid-sized businesses remain on the fence.

SMBs slower to switch

SMBs are methodically assessing the risk vs. reward of racing to adopt EMV, Brian Engle tells ThirdCertainty. Engle is executive director of the newly founded Retail Cyber Intelligence Sharing Center, or R-CISC.

Company decision-makers are doing their due diligence, factoring in the potential for fraud, the cost of implementing EMV technology and the risk of chargebacks, he says.

“From a transactional volume perspective, some are going to accept risks and move at a rate that’s more appropriate for the size of their organization,” Engle says.

There’s no question the U.S. is in EMV saturation mode. As of the end of 2015, Visa tells us:

  • The volume of chip transactions in the U.S. increased from $12.1 billion in November to $15.8 billion in December, a 30% pop.
  • Seven out of 10 Americans now have at least one chip card in their wallet.
  • 93% of consumers are aware that the transition to EMV is happening.

Cryptogram makes things more complicated

Unlike magnetic-stripe cards, EMV cards are more difficult to counterfeit because the chip contains a cryptogram. When the card is inserted into the point of sale (POS) terminal—vs. being swiped—the cryptogram creates a token that’s unique to each transaction, and all the information is encrypted as it’s transmitted to the terminal and the bank.
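
In principle, the dynamic cryptogram works by mixing a secret key held on the chip with a transaction counter and the transaction details, so every payment produces a different value. The sketch below uses an HMAC purely for illustration; it is not the actual EMV algorithm, key hierarchy or message format:

```python
import hmac
import hashlib

# Simplified illustration of a dynamic transaction cryptogram (NOT the real
# EMV scheme): the chip's secret key never leaves the card, and the MAC
# changes with every transaction because the counter is included.

CARD_SECRET_KEY = b"secret-known-only-to-chip-and-issuer"   # hypothetical key

def transaction_cryptogram(counter: int, amount_cents: int, merchant_id: str) -> str:
    message = f"{counter}|{amount_cents}|{merchant_id}".encode()
    return hmac.new(CARD_SECRET_KEY, message, hashlib.sha256).hexdigest()

# Two purchases of the same amount at the same merchant still yield different
# cryptograms, so a captured value cannot simply be replayed.
print(transaction_cryptogram(41, 2599, "store-123"))
print(transaction_cryptogram(42, 2599, "store-123"))
```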

This process actually takes a few seconds, during which the consumer must leave her card inserted in the POS terminal. U.S. consumers are in the process of modifying their behavior at the checkout stand. Patience for a few seconds is required. Those precious seconds of inconvenient waiting represent an investment in tighter security.

But not as tight as when you use a chip card in Canada or Europe. That’s because EMV cards not only generate a one-time authorization token, they are also designed to require the user to enter a PIN as a second factor of authentication. However, PIN compliance was not part of the October 2015 deadline. Thus, most EMV in-store transactions in the U.S. still require only a signature, which, of course, any impostor can forge.

Criminals, on the other hand, won’t be able to hack into store networks and steal any useful transaction data, at least not from transactions in which chip cards were used.

“Even if you steal the information, it becomes very difficult to use it. You’d get a long string of letters and numbers that can’t do anything,” explains Ben Knieff, senior analyst for retail banking at Aite Group, an independent research and advisory firm that specializes in financial services.

Criminals reportedly were able to breach Wendy’s customers’ magnetic-stripe payment card data recently. That data breach was disclosed after numerous stolen card numbers were subsequently used at other merchants, and the trail led back to Wendy’s.

This kind of credit card fraud is exactly why U.S. financial institutions are migrating from the magnetic-stripe cards to new technology that uses a much more secure chip.

Aite Group estimates that EMV will significantly reduce U.S. counterfeit card fraud—from an estimated peak of $3.61 billion in 2015 to $1.77 billion in 2018.

Even so, the technology is not foolproof because bad actors can use other tricks. “The EMV technology is still hackable,” says Scott Schober, president and CEO of Berkeley Varitronics Systems Inc., which specializes in wireless threat detection. “However, hackers are going to go after the simple hack.”

Identity theft experts anticipate that fraudsters will simply shift their attention to merchants that use mobile payments—or don’t use a physical POS terminal at all.

“For bad actors, when one avenue dries up, they will look for other ways,” says Numaan Huq, a Canada-based senior threat researcher with Trend Micro’s Forward-Looking Threat Research Team.

Some transactions safer than others

In Canada, where point-to-point encryption is now standard for retailers, Huq says he feels very safe when using a credit card in stores. But at places like hotels? Not so much.

That’s because hotels collect credit card information for reservations, and, when that system is hacked, all the data is compromised. The same goes for various service providers, like medical offices.

“Bad actors will find new avenues, and I expect, over time, the fraud levels (in the U.S.) will go up again,” Huq says.

That’s what happened in Canada, the U.K. and other countries that have adopted EMV. Canada, for example, saw a 54% decline in counterfeit card fraud and a 133% jump in “card-not-present” (CNP) fraud between 2008 and 2013, according to Aite Group research.

“In the past, most of the tools hackers used were extremely crude,” Schober says. “But advances in technology are making it much easier to compromise people online.”

Aite estimates that CNP fraud in the U.S. will grow from $2.9 billion to $6.4 billion, as hackers shift their tactics.

But, Knieff says, criminals have one thing going against them—online credit card fraud is not a scalable “business.” Criminals can’t buy 40 TVs from Amazon.com, for example.

“Application fraud—using stolen or synthetic identities to open new accounts … becomes much more attractive,” he says. “Yes, CNP will increase, but it will not increase geometrically because it’s hard to scale.”

Many organizations may not even be ready to focus on securing their online systems. Engle, of R-CISC, uses a hockey analogy, saying retailers are “trying to skate to where the puck is going.” That is, at the moment they’re still trying to figure out the transition to EMV.

SMBs particularly vulnerable

In the meantime, smaller businesses face an increased risk.

“The fraudsters will utilize POS malware until they can’t, and those smaller retailers are going to continue to be in their cross-hairs,” he says. “The ability to affect small retailers at a high rate is very profitable for them.”

Attacks on large retailers take a lot more time and resources, Huq says.

“A small mom-and-pop shop is a no-brainer to hit,” he says, adding that mobile payments, especially, are a concern because of proliferation of malware, particularly for Android systems.

“It’s easy to use for small businesses because it costs less,” he says. “But in the future, I think this will be a new way for bad actors to steal credit card data.”

This post was written by Rodika Tollefson.

‘Safer’ Credit Cards Already Vulnerable

A recent Gallup survey found that 69% of Americans worry “frequently” or “occasionally” about having a credit card compromised by computer hackers. It’s not shocking. Consumers are becoming more educated on the topic, and financial institutions are beginning to do more to combat fraud, including introducing new types of credit cards. One example of the latter is chip-and-PIN technology, which everyone from consumers to the president has hailed for its ability to help prevent fraud. But is it the panacea that it’s been made out to be?

Let’s take a closer look at exactly what this technology entails. Unlike cards that use a magnetic stripe containing a user’s account information, chip cards implement an embedded microprocessor that contains the cardholder’s information in a way that renders it invisible even if hackers grab payment data while it is in transit between merchants and banks. The technology also generates unique information that is difficult to fake. There is a cryptogram that allows banks to see if the data flow has been modified and a counter that registers each sequential time the card is used (sort of like the numbers on a check), so that a would-be fraudster would have to guess the exact historical and dynamic transaction number for a charge to be approved.

Already used in every other G20 country as a more secure payment method, chip-and-PIN cards can be found on the consumer side of a global payment system known as EMV (short for Europay, MasterCard and Visa). The system will be rolled out in the U.S. in 2015, and many of us in the banking and data-security industries believe that it will stanch the flow of money lost to hackers while simultaneously cutting down on credit- and debit-card fraud.

MasterCard, Visa and American Express have already begun sending out chip cards to their American cardholders. The technology is expensive—the rollout of chip cards in the U.S. will cost an estimated $8 billion—and this cost may balloon exponentially if the implementation of the new technology is done incorrectly, as a recent spate of fraudulent charges using chip-and-PIN-based technology shows.

This recent trend is one early sign that chip-and-PIN may not be the cure-all many consumers were hoping for, at least during the rollout phase. According to Brian Krebs, during the past week, “at least three U.S. financial institutions reported receiving tens of thousands of dollars in fraudulent credit- and debit-card transactions coming from Brazil and hitting card accounts stolen in recent retail heists, principally cards compromised as part of the breach at Home Depot.”

The curious part about this spate of credit- and debit-card fraud is that fraudsters used account information pilfered from old-school magnetic stripe cards skimmed in that attack and ran them as EMV purchases in what’s called a “replay” attack. “After capturing traffic from a real EMV-based chip card transaction, the thieves could insert stolen card data into the transaction stream, while modifying the merchant and acquirer bank account on the fly,” Krebs reported. It sounds confusing, but the bottom line is money was stolen.

As with many scams, this particular evolution in the world of hacking for dollars cannot succeed without human error, which is probably the biggest liability in the coming chip card rollout. Krebs spoke with Avivah Litan, a fraud analyst with Gartner, who said, “It appears with these attacks that the crooks aren’t breaking the EMV protocol but taking advantage of bad implementations of it.” In a similar attack on Canadian banks a few months ago, one bank suffered a large loss because it was not checking the cryptogram and counter data, essential parts of the protocol.
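
A sketch of the issuer-side checks that were reportedly skipped: recompute the cryptogram and insist that the counter always moves forward, so a replayed or spliced-in "EMV" transaction is rejected. Again, this illustrates the principle only, not any bank's actual implementation; the key, card ID and counter store below are hypothetical:

```python
import hmac
import hashlib

ISSUER_COPY_OF_CARD_KEY = b"secret-known-only-to-chip-and-issuer"   # hypothetical
last_seen_counter = {"card-001": 41}   # issuer tracks the highest counter per card

def expected_cryptogram(counter: int, amount_cents: int, merchant_id: str) -> str:
    message = f"{counter}|{amount_cents}|{merchant_id}".encode()
    return hmac.new(ISSUER_COPY_OF_CARD_KEY, message, hashlib.sha256).hexdigest()

def authorize(card_id: str, counter: int, amount_cents: int,
              merchant_id: str, cryptogram: str) -> bool:
    # Reject anything whose cryptogram doesn't verify or whose counter fails
    # to advance -- the checks the attacked bank reportedly skipped.
    if not hmac.compare_digest(cryptogram,
                               expected_cryptogram(counter, amount_cents, merchant_id)):
        return False
    if counter <= last_seen_counter.get(card_id, -1):
        return False               # replayed or stale counter
    last_seen_counter[card_id] = counter
    return True

good = expected_cryptogram(42, 2599, "store-123")
print(authorize("card-001", 42, 2599, "store-123", good))   # True
print(authorize("card-001", 42, 2599, "store-123", good))   # False: replay rejected
```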

As with all solutions in the realm of data security, there is no such thing as a sure thing. Whether the hackers banked on a false sense of security at the institutional level, knowing that the protocols might be deemed an unnecessary expense, or the recent attacks are merely part of the chip card learning curve, this latest technology is only as good as its implementation.

So, despite the best efforts of those in the financial services industry, the truth is I can’t blame anyone for worrying a bit about credit card fraud. The good news is that in almost all cases, consumers aren’t responsible when they’ve been hit with fraud. The banks take care of it (though it can be trickier with debit cards, because money has actually left your account). These days, though, the reality is that you are your own first line of defense against fraudulent charges. That means pulling your credit reports at least once each year at AnnualCreditReport.com, monitoring your credit scores regularly for any sudden and unexplained changes (you can do that using free online tools, including those at Credit.com), keeping a close eye on your bank and credit card accounts daily and signing up for transactional monitoring programs offered by your financial institutions.

5 Steps for Covering Data Breaches

Target’s $19 million settlement with MasterCard[1] underscores very significant sources of potential exposure that often follow a data breach that involves payment cards. Retailers and other organizations that accept those cards are likely to face—in addition to a slew of claims from consumers and investors— claims from financial institutions that seek to recover losses associated with issuing replacement credit and debit cards, among other losses. The financial institution card issuers typically allege, among other things, negligence, breach of data-protection statutes and non-compliance with Payment Card Industry Data Security Standards (PCI DSS). Likewise, as Target’s recent settlement illustrates, organizations can expect to face claims from the payment brands, such as MasterCard, VISA and Discover, seeking substantial fines, penalties and assessments for purported PCI DSS non-compliance.

These potential sources of liability can eclipse others. While consumer lawsuits often get dismissed for lack of Article III standing,[2] for example, or may settle for relatively modest amounts,[3] the Target financial institution litigation survived a motion to dismiss[4] and involved a relatively high settlement amount as compared with the consumer litigation settlement. So did TJX’s prior $24 million settlement with card issuers.[5] The current settlement involves only MasterCard,[6] moreover, and the Target financial institution litigation will proceed with any issuer of MasterCard-branded cards that declines to partake of the $19 million settlement offer. The amended class action in the Target cases alleges that the financial institutions’ losses “could eventually exceed $18 billion.”[7]

Organizations should be aware that these significant potential sources of data breach and payment brand liability may be covered by insurance, including commercial general liability insurance (CGL), which most companies have in place, and specialty cybersecurity/data privacy insurance.

Here are five steps for securing coverage for data breach and PCI DSS-related liability:

Step 1:            Look to CGL Coverage

                        Coverage A: “Property Damage” Coverage

Payment card issuers typically seek damages because of the necessity to replace cards and, often, also specifically allege damages because of the loss of use of those payment cards, including lost interest, transaction fees and the like. By way of illustration, the amended class action complaint in the Target litigation alleges:

The financial institutions that issued the debit and credit cards involved in Target’s data breach have suffered substantial losses as a result of Target’s failure to adequately protect its sensitive payment data. This includes sums associated with notifying customers of the data breach, reissuing debit and credit cards, reimbursing customers for fraudulent transactions, monitoring customer accounts to prevent fraudulent charges, addressing customer confusion and complaints, changing or canceling accounts and facing the decrease or suspension of their customers’ use of affected cards during the busiest shopping season of the year.[8]

The litigation further alleges that “plaintiffs and the FI [financial institution] class also lost interest and transaction fees (including interchange fees) as a result of decreased, or ceased, card usage in the wake of the Target data breach.”[9]

These allegations fall squarely within the standard-form definition of covered “property” damage under CGL Coverage A. Under Coverage A, the insurer commits to “pay those sums that the insured becomes legally obligated to pay as damages because of … ‘property damage’… caused by an ‘occurrence’”[10] that “occurs during the policy period.”[11] The insurer also has “the right and duty to defend the insured against any … civil proceeding in which damages because of … ‘property damage’ … are alleged.”[12]

Importantly, the key term “property damage” is defined to include not just “physical injury to tangible property” but also “loss of use of tangible property that is not physically injured.” The key definition in the current standard-form CGL insurance policy states as follows:

“Property damage” means:
  a. Physical injury to tangible property, including all resulting loss of use of that property. All such loss of use shall be deemed to occur at the time of the physical injury that caused it; or
  b. Loss of use of tangible property that is not physically injured. All such loss of use shall be deemed to occur at the time of the “occurrence” that caused it.

For the purposes of this insurance, electronic data is not tangible property.

In this definition, “electronic data” means information, facts or programs stored as or on, created or used on or transmitted to or from computer software, including systems and applications software, hard or floppy disks, CD-ROMs, tapes, drives, cells, data processing devices or any other media that are used with electronically controlled equipment.[13]

Although the current definition states that “electronic data is not tangible property,” to the extent this standard-form language may be present in the specific policy at issue (coverage terms should not be assumed; rather the specific policy language at issue should always be carefully reviewed),[14] the limitation is largely, perhaps entirely, irrelevant in this context because card issuer complaints, like the amended class action complaint in the Target litigation, typically allege damages because of the need to replace physical, tangible payment cards.[15] The complaints further often expressly allege that the issuers have suffered damages because of a decrease or cessation in the card usage.

These types of allegations are squarely within the “property damage” coverage offered by CGL Coverage A, and courts have properly upheld coverage in privacy-related cases where allegations of loss of use of property are present.[16]

            Coverage B: “Personal and Advertising Injury” Coverage

There is significant potential coverage for data breach-related liability, including card issuer litigation, under CGL Coverage B. Under Coverage B, the insurer commits to “pay those sums that the insured becomes legally obligated to pay as damages because of ‘personal and advertising injury,’”[17] which is “caused by an offense arising out of [the insured’s] business … during the policy period.”[18] Similar to Coverage A, the policy further states that the insurer “will have the right and duty to defend the insured against any … civil proceeding in which damages because of … ‘personal and advertising injury’ to which this insurance applies are alleged.”[19]

The key term “personal and advertising injury” is defined to include a list of specifically enumerated offenses, which include “oral or written publication, in any manner, of material that violates a person’s right of privacy.”[20]

Considering this key language, courts have upheld coverage under CGL Coverage B for claims arising out of data breaches and for a wide variety of other claims alleging violations of privacy rights.[21] It warrants mention that, although the trial court in the Sony PlayStation data breach litigation recently ruled against coverage, the trial court’s decision — which turned on the court’s finding that, essentially, Coverage B is triggered only by purposeful actions by the insured (Sony) and not by the actions of the third parties who hacked into its network — that decision is currently on appeal to the New York Appellate Division and may soon be reversed. Nowhere in the insuring agreement or its key definition does the CGL policy require any action by the insured. As the coverage’s name “Commercial General Liability” indicates, the coverage does not require intentional action by the insured, as argued by the insurers in the Sony case, but rather is triggered by the insured’s liability, i.e., the insurer commits to pay sums that the insured “becomes legally obligated to pay” that “arise out of” the covered “offenses.” The broad insuring language, moreover, extends to the insured’s liability for publication “in any manner,” i.e., via a hacking attack or otherwise. The cases cited by the insurer in the Sony case are factually inapposite and interpret entirely different policy language. Indeed, Sony’s insurer, Zurich, itself acknowledged in 2009 that CGL policies may provide coverage for data breaches via hacking, which by definition involves third-party actions.[22]

Organizations also should be aware that the Insurance Services Office (ISO), the insurance industry organization responsible for drafting standard-form CGL language, recently promulgated a series of data breach exclusionary endorsements.[23] ISO acknowledged that there currently is data breach coverage for hacking activities under CGL policies. In particular, ISO stated that the new exclusions may be a “reduction in personal and advertising injury coverage”—the implication being that there is coverage in the absence of the new exclusions.

At the time the ISO CGL and CLU policies were developed, certain hacking activities or data breaches were not prevalent and, therefore, coverages related to the access to or disclosure of personal or confidential information and associated with such events were not necessarily contemplated under the policy. As the exposures to data breaches increased over time, stand-alone policies started to become available in the marketplace to provide certain coverage with respect to data breach and access to or disclosure of confidential or personal information.

To the extent that any access or disclosure of confidential or personal information results in an oral or written publication that violates a person’s right of privacy, this revision may be considered a reduction in personal and advertising injury coverage.[24]

Other than the trial court’s decision in the Sony case, no decision has held that an insured must itself publish information to obtain CGL Coverage B coverage, and a number of decisions have appropriately upheld coverage for liability that the insured has resulting from third-party publications.[25]

The bottom line: There may be very significant coverage under CGL policies, including for data breaches that result in the disclosure of personally identifiable information and other claims alleging violation of a right to privacy, including claims brought by card issuers.

Step 2:           Look to “Cyber” Coverage

Organizations are increasingly purchasing so-called “cyber” insurance, and a major component of the coverage offered under most “cyber” insurance policies is coverage for the spectrum of issues that an organization typically confronts in the wake of a data breach incident. This usually includes, not only defense and indemnity coverage in connection with consumer litigation and regulatory investigation, but also defense and indemnity coverage in connection with card issuer litigation. By way of example, one specimen policy insuring agreement states that the insurer will “pay … all loss” that the “insured is legally obligated to pay resulting from a claim alleging a security failure or a privacy event.” The key term “privacy event” includes “any failure to protect confidential information,” a term that is broadly defined to include “information from which an individual may be uniquely and reliably identified or contacted, including, without limitation, an individual’s name, address, telephone number, Social Security number, account relationships, account numbers, account balances, account histories and passwords.” “Loss” includes “compensatory damages, judgments, settlements, pre-judgment and post-judgment interest and defense costs.” Litigation brought by card issuers is squarely within the coverage afforded by the insuring agreement and its key definitions.

Importantly, a number of “cyber” insurance policies also expressly cover PCI DSS-related liability. By way of example, the specimen policy quoted above expressly defines covered “loss” to include “amounts payable in connection with a PCI-DSS Assessment,” which is defined as follows:

“PCI-DSS assessment” means any written demand received by an insured from a payment card association (e.g., MasterCard, Visa, American Express) or bank processing payment card transactions (i.e., an “acquiring bank”) for a monetary assessment (including a contractual fine or penalty) in connection with an insured’s non-compliance with PCI Data Security Standards that resulted in a security failure or privacy event.

This can be very important coverage given that, as the recent Target settlement illustrates, organizations face substantial liability arising out of card brand and association claims for fines, penalties and assessments for purported non-compliance with PCI DSS. The payment card brands routinely claim that a breached organization was not PCI DSS-compliant, and the PCI forensic investigator assigned to investigate compliance routinely determines that the organization was not compliant at the time of the breach. As the payment industry has stated, “no compromised entity has yet been found to be in compliance with PCI DSS at the time of a breach.”[26]

The bottom line: “Cyber” insurance policies may provide broad, solid coverage for the costs and expenses that organizations may incur in connection with card-issuer litigation and payment brand claims alleging PCI non-compliance.

Step 3: Look to Other Potential Coverage

It is important not to overlook other types of insurance policies that may respond to cover various types of exposure flowing from a breach. For example, there may be coverage under directors’ and officers’ (D&O) policies, professional liability or errors and omissions (E&O) policies and commercial crime policies. After a data breach, companies are well advised to provide prompt notice under all potentially implicated policies, absent particular circumstances that justify refraining from doing so, and to carefully evaluate all potentially applicable coverages.

Step 4: Don’t Take “No” For an Answer

Unfortunately, even where there is a legitimate claim for coverage under the policy language and applicable law, an insurer may deny the claim. Indeed, insurers can be expected to argue, as Sony’s insurers argued, that data breaches are not covered under CGL insurance policies. Nevertheless, insureds that refuse to take “no” for an answer may be able to secure valuable coverage.

If, for example, an insurer reflexively responds to a claim under CGL Coverage A by raising the “electronic data” exclusion, which under the standard form purports to exclude “[d]amages arising out of the loss of, loss of use of, damage to, corruption of, inability to access or inability to manipulate electronic data,”[27] insureds are encouraged to point out that the damages alleged by card issuers for replacing physical cards, and for lost interest, transaction fees and similar losses resulting from the loss of use of those cards, are clearly outside the purview of the exclusion. Likewise, if an insurer raises the standard “Recording And Distribution Of Material Or Information In Violation Of Law” exclusion, insureds are encouraged to point out that the exclusion has been narrowly interpreted, does not address common-law claims and has been held inapplicable where the law at issue fashions relief for common-law rights.[28]

Importantly, exclusions and other limitations to coverage are construed narrowly against the insurer and in favor of coverage under well-established rules of insurance policy interpretation,[29] and the burden is on the insurer to demonstrate an exclusion’s applicability.[30]

Step 5: Maximize Coverage Across the Entire Insurance Portfolio

Various types of insurance policies may be triggered by a data breach, and the triggered policies may carry different insurance limits, deductibles, retentions and other self-insurance features, together with differing and potentially conflicting provisions addressing, for example, other insurance, erosion of self-insurance and stacking of limits. For this reason, in addition to considering the scope of substantive coverage under an insured’s different policies, it is important to carefully consider the best strategy for pursuing coverage in a manner that will maximize the potentially available coverage across the insured’s entire insurance portfolio. By way of example, if there is potentially overlapping CGL and “cyber” insurance coverage, remember that defense costs often do not erode CGL policy limits, and structure the coverage strategy accordingly.
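To make the limits arithmetic concrete, the short sketch below works through a purely hypothetical two-policy scenario: a CGL layer whose defense costs sit outside the limit and a “cyber” layer whose defense costs erode the limit. The dollar amounts, retentions and the layer_recovery() helper are illustrative assumptions only, not terms of any actual policy discussed here, and the sketch deliberately ignores other-insurance clauses, anti-stacking provisions and the bar on double recovery.

```python
# Hypothetical illustration only: the dollar figures, limits and retentions
# below are invented for this sketch and are not drawn from any actual
# policy, claim or settlement.

def layer_recovery(loss, defense_costs, limit, retention, defense_erodes_limit):
    """Rough net payment from a single policy layer.

    If defense_erodes_limit is True, defense costs reduce the available
    limit (a structure common in "cyber" forms); if False, defense costs
    are paid in addition to the limit (often the case under CGL forms).
    """
    covered_loss = max(loss - retention, 0)
    if defense_erodes_limit:
        return min(covered_loss + defense_costs, limit)
    return min(covered_loss, limit) + defense_costs

# Assume a $5M card-issuer settlement and $1M in defense costs.
loss, defense = 5_000_000, 1_000_000

cgl = layer_recovery(loss, defense, limit=2_000_000, retention=0,
                     defense_erodes_limit=False)
cyber = layer_recovery(loss, defense, limit=5_000_000, retention=250_000,
                       defense_erodes_limit=True)

print(f"CGL layer pays:   ${cgl:,}")    # $3,000,000 -- defense sits outside the limit
print(f"Cyber layer pays: ${cyber:,}")  # $5,000,000 -- defense erodes the limit
```

Even on these made-up numbers, the same loss produces very different results depending on which layer absorbs defense costs, which is why the sequencing and allocation of a claim across the portfolio deserves as much attention as the substantive scope of each policy.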

When facing a data breach, companies should carefully consider the insurance coverage that may be available. Insurance is a valuable asset. Before a breach, companies should take the opportunity to carefully evaluate and address their risk profile, potential exposure, risk tolerance, the sufficiency of their existing insurance coverage and the role of specialized cyber coverage. In considering that coverage, note that there are many specialty “cyber” products on the market. Although many, if not most, of these policies purport to cover many of the same basic risks, including data breaches and other types of “cyber” and data privacy-related risk, the policies vary dramatically. It is important to review policies carefully for appropriate coverage prior to purchase and, in the event of a claim, to carefully review the scope of all potentially available coverage.

This article was first published in Law360.

 

[1] Target Strikes $19M Deal With MasterCard Over Data Breach, Law360 (April 15, 2015). The settlement is contingent upon at least 90% of the eligible MasterCard issuers accepting their alternative recovery offers by May 20.

[2] See, e.g., No Data Misuse? No Standing For Data Breach Plaintiffs, Law360 (April 24, 2014).

[3] Target Will Pay Consumers $10M To End Data Breach MDL, Law360 (March 19, 2015).

[4] See, e.g., Target Loses Bid to KO Banks’ Data Breach Litigation, Law360 (April 15, 2015).

[5] TJX Reaches $24M Deal With MasterCard Issuers, Law360 (April 2, 2008).

[6] The company is reported to be in similar negotiations with Visa.

[7] In re Target Corporation Customer Data Security Breach Litigation, MDL No. 14-2522 (PAM/JJK) (D. Minn.), at ¶ 87 (filed August 1, 2014).

[8] Id., ¶ 2 (emphasis added).

[9] Id., ¶ 86 (emphasis added).

[10] ISO Form CG 00 01 04 13 (2012), Section I, Coverage A, §1.a., §1.b.(1).

[11] Id., Section I, Coverage A, §1.b.(2).

[12] Id., Section I, Coverage A, §1.a.; Section V, §18.

[13] ISO Form CG 00 01 04 13 (2012), Section V, §17 (emphasis added).

[14] In the absence of such language, a number of courts have held that damaged or corrupted software or data is “tangible property” that can suffer “physical injury.” See, e.g., Retail Sys., Inc. v. CNA Ins. Co., 469 N.W.2d 735 (Minn. Ct. App. 1991); Centennial Ins. Co. v. Applied Health Care Sys., Inc., 710 F.2d 1288 (7th Cir. 1983) (California law); Computer Corner, Inc. v. Fireman’s Fund Ins. Co., No. CV97-10380 (2d Dist. Ct. N.M. May 24, 2000).

[15] See also Eyeblaster, Inc. v. Federal Ins. Co., 613 F.3d 797 (8th Cir. 2010).

[16] See, e.g., Travelers Prop. Cas. Co. of America v. DISH Network, LLC, 2014 WL 1217668 (C.D. Ill. Mar. 24, 2014); Columbia Cas. Co. v. HIAR Holding, L.L.C., 411 S.W.3d 258 (Mo. 2013).

[17] ISO Form CG 00 01 04 13 (2012), Section I, Coverage B, §1.a.

[18] Id., Section I, Coverage B, §1.b.

[19] Id., Section I, Coverage B, §1.a.; Section V, §18.

[20] Id., Section V, §14.e.

[21] See, e.g., Hartford Cas. Ins. Co. v. Corcino & Assocs., 2013 WL 5687527 (C.D. Cal. Oct. 7, 2013).

[22] Zurich, Data security: A growing liability threat (2009), available at http://www.zurichna.com/NR/rdonlyres/23D619DB-AC59-42FF-9589-C0D6B160BE11/0/DOCold2DataSecurity082609.pdf (emphasis added).

[23] These new exclusions became effective in most states in May 2014. One of the exclusionary endorsements, titled “Exclusion – Access Or Disclosure Of Confidential Or Personal Information,” adds the following exclusion to the standard form policy:

This insurance does not apply to:

Access Or Disclosure Of Confidential Or Personal Information

“Personal and advertising injury” arising out of any access to or disclosure of any person’s or organization’s confidential or personal information, including patents, trade secrets, processing methods, customer lists, financial information, credit card information, health information or any other type of non public information.

CG 21 08 05 14 (2013). See also Coming To A CGL Policy Near You: Data Breach Exclusions, Law360 (April 23, 2014).

[24] ISO Commercial Lines Forms Filing CL-2013-0DBFR, at pp. 3, 7-8 (emphasis added).

[25] See, e.g., Hartford Cas. Ins. Co. v. Corcino & Assocs., 2013 WL 5687527 (C.D. Cal. Oct. 7, 2013).

[26] Visa: Post-breach criticism of PCI standard misplaced (March 20, 2009), available at http://www.computerworld.com.au/article/296278/visa_post-breach_criticism_pci_standard_misplaced/

[27] CG 00 01 04 13 (2012), Section I, Coverage A, §2.p.

[28] See, e.g., Hartford Cas. Ins. Co. v. Corcino & Assocs., 2013 WL 5687527 (C.D. Cal. Oct. 7, 2013). In Corcino, for example, the court upheld coverage for statutory damages arising out of a hospital data breach that compromised the confidential medical records of nearly 20,000 patients, notwithstanding an express exclusion for “personal and advertising injury … [a]rising out of the violation of a person’s right to privacy created by any state or federal act.” Corcino and numerous other decisions underscore that, notwithstanding a growing prevalence of exclusions purporting to limit coverage for data breach and other privacy-related claims, there may yet be valuable privacy and data breach coverage under “traditional” or “legacy” policies that should not be overlooked.

[29] See, e.g., 2 Couch on Insurance § 22:31 (“the rule is that, such terms are strictly construed against the insurer where they are of uncertain import or reasonably susceptible of a double construction, or negate coverage provided elsewhere in the policy”).

[30] See, e.g., 17A Couch on Insurance § 254:12 (“The insurer bears the burden of proving the applicability of policy exclusions and limitations or other types of affirmative defenses”).