
How Blockchain Will Reorganize Society

“Reification” describes how people reorganize around new technologies. The Iron Age, Industrial Revolution, Information Revolution, etc., provide abundant evidence of the reification of society. Blockchain technology will likely have a similar impact.

The insurance industry is currently optimized for the existing collection of risk pools, data inputs and price signals. If people reorganize, so, too, will their risk exposures, perhaps toward positive risk, perhaps not.

Understanding the implications of these emerging conditions requires a brief history lesson on database architecture.

In the early days of computer networks, machines that performed computations were connected by wires to other machines that stored data on a physical medium such as magnetic tape. Humans interacted with these machines by typing on keyboards and changing reels of tape. These activities had very little to do with the computation actually being performed, yet they drove reification all the same. While we may not realize it, those same functions are still performed today, in one form or another, every time we interact with a computer database.

Like the expression, “A fish has no word for water,” many activities that blockchain technology renders unnecessary remain difficult to identify.

See also: Can Blockchains Be Insured?  

Over time, databases became so incredibly useful that companies and institutions stored all of their data in proprietary silos where they could control access to financial records, product specs, trade secrets, personnel files, customer data, sales projections, etc. The database for an aircraft manufacturer was structured entirely differently from that of a coffee shop chain, or an insurance company. The specialized linkages that formed between the data and the operations became unique to the organization and, in many cases, proprietary. This also conveniently sequestered people whose skills were adapted to a particular data structure. The purpose of management was to let nothing in or out of the database without appropriate permission. It has been widely written that institutions have become defined, or “reified,” by their data structures.

The problems with legacy databases became apparent when the need arose for one database to communicate directly with another. This was impossible without human administration. With the advent of the internet and social media, the value of networking between computer nodes grew exponentially with the number of connections, while the ability of computers to communicate with one another improved only linearly. Electrons moved at the speed of light, but many systems remained limited to the speed of bureaucracy.

In the 1990s, organizations introduced legions of administrators, intermediaries and brokers to help databases communicate with each other. More recently, database engineers invented special interfaces (APIs) that allow, say, Amazon to provide access to parts of its database to wholesalers or partnered retailers. APIs enabled a wave of innovation associated with the e-commerce movement and much more. However, even APIs had significant shortcomings when it came to more formal, “titled” transactions.
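To make the idea concrete, here is a hypothetical sketch in Python of a partner querying a retailer's catalog through such an interface. The endpoint, fields and API key below are invented for illustration; real APIs such as Amazon's require registered credentials and differ in detail.

```python
# A hypothetical sketch of a partner querying a retailer's catalog API.
# Endpoint, fields and key are invented for illustration only.
import requests

API_BASE = "https://api.example-retailer.com/v1"  # hypothetical endpoint

def lookup_product(sku: str, api_key: str) -> dict:
    """Fetch one product record through the retailer's public interface."""
    response = requests.get(
        f"{API_BASE}/products/{sku}",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"sku": ..., "price": ..., "stock": ...}

if __name__ == "__main__":
    try:
        # A wholesaler sees only what the API chooses to expose, never the
        # underlying database of costs, margins or customer records.
        print(lookup_product("B000123", api_key="demo-key"))
    except requests.RequestException as err:
        print("Demo endpoint is fictional:", err)
```

The point of the sketch is the narrowness of the window: the partner gets exactly the fields the owner chooses to publish, while the silo itself stays closed.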

In 2016, with all the APIs in the world, a real estate broker must still wrestle with several databases to complete a transaction. The broker must lead the buyer and seller around the multiple listing service (MLS) database, coordinate with a financial lender's database, adjust for a property inspection database, secure coverage via a property insurance database and use an escrow service and title insurance database, all under strict government database regulation and the brokers' own corporate management database oversight. The agents must deliver all of these databases in relative unison to a single point in time to receive an archaic ink signature and a time stamp. A small mountain of papers is then registered in public archives. And, still, the deal can be reversed by a legal challenge. The process can take weeks or months, with unnerving risk, cost frictions and price volatility.

“This is all very weird, only we’ve become accustomed to it” – Vinay Gupta

Unfortunately, as the value of data increases, so, too, do the incentives, probability and consequences of cheating, especially where the ability to cheat has been equally enhanced by new technology. Reified society then reacts by adding additional laws and regulations that may thwart innovation to a greater degree than the protection that those laws may provide. Today, asymmetric information, blanket legislation and selective enforcement are considered among the scourges of modern-day commerce. Keep in mind, this complexity STILL has very little to do with the actual thing that people are trying to accomplish.

See also: Why Insurers Caught the Blockchain Bug

What if we can get rid of all the complexity? What if we can eliminate the brokers and intermediaries; the bureaucracy and the administration; the noise and the friction; and the risk?

Actually, this is a popular idea that has been attempted throughout history in various forms of governance, each marked by its willingness, ability and degree of control over information. Obviously, there are many methods for applying control (or not applying it); most lie on a spectrum between a fully centralized organizational system and a decentralized one.

Blockchain technology would allow data sources to communicate securely and directly with each other, with no central authority, administration or brokers. The insurance industry needs to take this technology seriously to enhance society’s ability to organize its own risk pools — or the insurance industry risks irrelevance.

(Adapted from Insurance: The Highest and Best Use of Blockchain Technology, July 2016 National Center for Insurance Policy and Research/National Association of Insurance Commissioners Newsletter)


Secret Sauce for New Business Models?

Insurance companies were built to bring stability to an unstable world.

So, why do factors such as market instability, technological upheaval and consumer pressure seem to throw so many insurers into panic? In many cases, insurers can simply point to their rigid foundations.

It didn’t take many earthquakes to convince California builders that foundations would need to be built with flexibility in mind. In insurance, it won’t take many disruptive upheavals to teach businesses that current foundations are ripe for disaster. New foundations are needed to support a perpetually shifting business.

In Reinventing Insurance: Leveraging the Power of Data, Analytics and Cloud Core Systems, a Majesco white paper issued in cooperation with Elagy, we look closely at how fundamental changes in the insurance business can be met with a new view of insurance infrastructure. By assembling cloud components into a fully functional virtual infrastructure, insurers remove the lethargy and overhead that bog down everything from data aggregation and analytics to testing and product development. The goal is to build an insurance enterprise that can capitalize on market opportunities.

Risk vs. Time

To assess potential cloud value, Majesco first looked at the relationship between insights and risk assessment and at how insights are traditionally gathered and used. Traditional risk assessment treats claims experience across time and population as the best indicator of risk within any particular insurance product. This kind of risk assessment is proven. Actuarial science has been honed. Insurers have become adept at long-term predictive capabilities, and regulations have kept consumers and insurers protected from failure through adequate margins of error.

Time, however, has become the sticking point. To meet market demands, every insurance process has to be shortened. The new predictive fuel of real-time data from digital sources (as well as increasingly insightful technologies) can give insurers a much better view of risk in a much more appropriate timeframe. But even if they can gather and assess the data quickly, they will, in most cases, still be held back by a product development and testing infrastructure that isn’t prepared to respond to fast-acting competitive pressure. The transparency that offers such promising opportunity is widely available to anyone, not just insurers, and it is highly coveted by agile, tech-savvy, entrepreneurial disrupters.

Competition vs. Time

Entrepreneurs love innovation and crave a new, marketable idea. They especially enjoy turning age-old processes on end, because these moments are often akin to striking gold. With technology’s rapid application of telematics, sensors, geolocation information and improved data management, nearly anyone can tap into the same data pools. Creative entrepreneurs, educated investors and innovative organizations are teaming up in a new kind of gold rush where rapid opportunity recognition will be met with rapid product development and relevant marketing. At a time when consumers seem susceptible to instant-access product messages, disruptive companies will soon be feeding them instant-access products.

Once again, the development time of legacy platforms offers insurers no competitive answer. The foundation is now susceptible to cracking because of its inflexibility.

Legacy vs. Time

Insurers still maintain dozens of advantages in the industry, first and foremost being experience. All of today’s new data sources, new channel options and modern infrastructure possibilities have more promise in the hands of insurers than in the hands of non-insurance disrupters. Legacy systems, however, are restrictive. They aren’t plug and play. Most aren’t operating in a unified data environment where data is consolidated and available across the enterprise; instead, data sits in multiple disconnected databases. So, insurers’ opportunities will be found in a system built to fit the new insurance business and infrastructure model.

Majesco’s report discusses how insurers can align cloud solutions with business strategies to capitalize on new risks, new products and new markets. With data aggregation, for example, cloud solutions available through Majesco and data-partner Elagy are rewriting analytic- and decision-making processes. A cloud data solution can integrate claims experience with third-party data and newly available data sets to relieve the need for additional IT overhead.

A Satellite Office Approach

Small and medium-sized insurers, in particular, stand to gain through a reinvention of their operational model. Market drivers—such as agents’ lack of marketing insights, the availability of relevant data and the need for low-cost process efficiencies—make an excellent case for change. The hurdles are real, however. Many insurers don’t have the needed resources to take advantage of these opportunities, and they are constrained by technology and a lack of operational capability.

The ideal solution would be to transfer the whole pipeline to the cloud, migrating the enterprise infrastructure into a cloud-based infrastructure where partners and innovators can plug their solutions into a cloud-based core administration system.

In the real world, most insurers would be served by a better strategy. When companies in any industry hope to move to a new geographic region, they sometimes open a satellite office. The satellite office is the new footprint in the foreign territory. It’s the place where testing and acclimation happen, and its approach is somewhat analogous to what insurers can do when looking at cloud development.

Insurers will find excitement and freedom running a new and improved model alongside the old model. While the organization practices its newfound agility, it will maintain the stability of legacy systems for as long as they are needed or are practical. A cloud-based insurance platform will quickly bring the insurer to the realm of data-fueled experience and competitive advantage. Its new processes and capabilities will breathe fresh life into insurers that are ready for resilient foundations.

Demystifying “The Dark Web”

We often hear reference to the “deep” or “dark” web. What exactly is the deep or dark web? Is it as illicit and scary as it is portrayed in the media?

This article will provide a brief overview and explanation of different parts of the web and will discuss why you just might want to go there.

THE SURFACE WEB

The surface web, or “Clearnet,” is the part of the web that you are most familiar with. Information that passes through the surface web is not encrypted, and users’ movements can be tracked. The surface web is accessed by search engines like Google, Bing or Yahoo. These search engines rely on pages that contain links to find and identify content. Search engines were built to index millions of web pages quickly and to provide an easy way to find content on the web. However, because they discover pages only by following links, enormous amounts of content are missed. For example, when a local newspaper publishes an article on its homepage, that article can likely be reached via a surface web search engine like Yahoo. Days later, when the article is no longer featured on the homepage, it might be moved into the site’s archive and, therefore, would no longer be reachable via the Yahoo search engine. The only way to reach the article would be through the search box on the local paper’s web page. At that point, the article has left the surface web and entered the deep web. Let’s go there now…
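As a rough illustration of that link-following discovery, here is a minimal Python crawler sketch, assuming the `requests` and `beautifulsoup4` packages. Anything not reachable by a chain of links from a seed page never enters the crawl, which is exactly why search-box-only content stays in the deep web.

```python
# A minimal sketch of link-following discovery. Pages reachable only
# through a search box never enter `seen`, so they stay in the deep web.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(seed: str, limit: int = 50) -> set:
    seen, frontier = set(), [seed]
    while frontier and len(seen) < limit:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        # New pages are discovered only via anchor tags on known pages.
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            frontier.append(urljoin(url, link["href"]))
    return seen

print(len(crawl("https://example.com")))
```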

THE DEEP WEB

The deep web is a subset of the Internet and is not indexed by the major search engines. Because the information is not indexed, you have to visit those web addresses directly and then search through their content. Deep web content can be found almost anytime you do a search directly in a website — for example, government databases and libraries contain huge amounts of deep web data. Why does the deep web exist? Simply because the Internet is too large for search engines to cover completely. Experts estimate that the deep web is 400 to 500 times the size of the surface web, accounting for more than 90% of the internet. Now let’s go deeper…

THE DARK WEB

The dark web, or “darknet,” is a subset of the deep web. The dark web refers to any web page that has been concealed: It has no inbound links and cannot be found by users or search engines unless you know the exact address. The dark web is used when you want to control access to a site or need privacy, or often because you are doing something illegal. Sites behind virtual private networks (VPNs) are examples of content hidden from public access unless you know the web address and have the correct log-in credentials.

One of the most common ways to access the dark web is through the Tor network, which requires a special web browser called the Tor browser. Tor stands for “The Onion Router,” and its corner of the web is sometimes referred to as “Onionland.” Onion routing was developed in the mid-1990s by a mathematician and computer scientists at the U.S. Naval Research Laboratory to protect U.S. intelligence communications online. The routing encrypts web traffic in layers and bounces it through random computers around the world. Each “bounce” encrypts the data before passing it on to the next hop in the network. This prevents even those who control one of those computers in the chain from matching the traffic’s origin with its destination. Each server only moves the data to another server, preserving the anonymity of the sender.
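The layering is easier to see in code. Below is a minimal sketch of onion-style wrapping and peeling in Python, using the symmetric Fernet cipher from the `cryptography` package as a stand-in for Tor's real circuit cryptography; the three-relay circuit and its keys are hypothetical.

```python
# A minimal sketch of onion-style layered encryption, illustrating the
# "peeling" described above. Fernet stands in for Tor's real circuit
# cryptography; the relay names and keys here are hypothetical.
from cryptography.fernet import Fernet

relays = ["entry", "middle", "exit"]  # a three-hop circuit
keys = {name: Fernet(Fernet.generate_key()) for name in relays}

message = b"request: http://example.onion/"

# The sender wraps the message once per relay, innermost layer first,
# so the entry node's layer ends up on the outside.
onion = message
for name in reversed(relays):
    onion = keys[name].encrypt(onion)

# Each relay strips exactly one layer; no single relay sees both the
# sender and the final plaintext destination.
for name in relays:
    onion = keys[name].decrypt(onion)

assert onion == message
```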

Because of the anonymity associated with the Tor network, this portion of the Internet is most widely known for its illicit activities, and that is why the dark web has such a bad reputation (you might recall the infamous Silk Road, an online marketplace and drug bazaar). It is true that on the dark web you can buy things such as guns, drugs, pharmaceuticals, child porn, credit cards, medical identities and copyrighted materials. You can hire hackers to steal competitors’ secrets, launch a DDoS (distributed denial of service) attack on a rival or hack your ex-girlfriend’s Facebook account. However, the dark web accounts for only about 0.01% of the web.

Some would say that the dark web has a bad rap, as not everything on the dark web is quite so “dark,” nefarious or illegal. Some communities that reside on the dark web are simply pro-privacy or anti-establishment. They want to function anonymously, without oversight, judgment or censorship. There are many legitimate uses for the dark web. People operating within closed, totalitarian societies can use the dark web to communicate with the outside world. Individuals can use dark web news sites to obtain uncensored news stories from around the world or to connect to sites blocked by their local Internet providers or surface search engines. Sites are used by human rights groups and journalists to share information that could otherwise be tracked. The darknet allows users to publish web sites without the fear that the location of the site will be revealed (think political dissidents). Individuals also use the dark web for socially sensitive communications, such as chat rooms and web forums for sensitive political or personal topics.

Takeaway

Don’t be afraid – dive deeper!

Download the Tor browser at www.torproject.org and access the deep/dark web information you have been missing. Everything you do in the browser goes through the Tor network, with no setup or configuration required from you. That said, because your data passes through several relays, it can be slow, so you might experience a more sluggish Internet than usual. However, preserving your privacy might be worth the wait. If you are sick of mobile apps that track you and share your information with advertisers, store your search history, or figure out your interests to serve you targeted ads, give the Tor browser a try.

What Is and What Isn’t a Blockchain?

I Block, Therefore I Chain?

What is, and what isn’t, a “blockchain”? The Bitcoin cryptocurrency uses a data structure that I have often described as one of a class of “mutual distributed ledgers.” Let me set out the terms as I understand them:

  • ledger – a record of transactions;
  • distributed – divided among several or many, in multiple locations;
  • mutual – shared in common, or owned by a community;
  • mutual distributed ledger (MDL) – a record of transactions shared in common and stored in multiple locations;
  • mutual distributed ledger technology – a technology that provides an immutable record of transactions shared in common and stored in multiple locations.

Interestingly, the 2008 Satoshi Nakamoto paper that preceded the Jan. 1, 2009, launch of the Bitcoin protocol does not use the term “blockchain” or “block chain.” It does refer to “blocks.” It does refer to “chains.” It does refer to “blocks” being “chained” and also a “proof-of-work chain.” The paper’s conclusion echoes an MDL – “we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.” [Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, bitcoin.org (2008)]

I have been unable to find the person who coined the term “block chain” or “blockchain.” [Contributions welcome!] The term “blockchain” only makes it into Google Trends in March 2012, more than three years after the launch of the Bitcoin protocol.


And the tide may be turning. In July 2015, the States of Jersey issued a consultation document on regulation of virtual currencies and referred to “distributed ledger technology.” In January 2016, the U.K. Government Office of Science fixed on “distributed ledger technology,” as do the Financial Conduct Authority and the Bank of England. Etymological evolution is not over.

Ledger Challenge

Wuz we first? Back in 1995, our firm, Z/Yen, faced a technical problem. We were building a highly secure case management system that would be used in the field by case officers on personal computers. Case officers would enter confidential details on the development and progress of their work. We needed to run a large concurrent database over numerous machines. We could not count on case officers out on the road dialing in or using Internet connections. Given the highly sensitive nature of the cases, security was paramount, and we couldn’t even trust the case officers overly much, so a full audit trail was required.

We took advantage of our clients’ “four eyes” policy. Case officers worked on all cases together with someone else, and not on all cases with the same person. Case officers had to jointly agree on a final version of a case file. We could count on them (mostly) running into sufficient other case officers over a reasonable period and using their encounters to transmit data on all cases. So we built a decentralized system where every computer had a copy of everything, but encrypted so case officers could only view their own work, oblivious to the many other records on their machines. When case officers met each other, their machines would “openly” swap their joint files over a cable or floppy disk but “confidentially” swap everyone else’s encrypted files behind the scenes, too. Even back at headquarters, four servers treated each other as peers rather than having a master central database. If a case officer failed to “bump into” enough people, then he or she would be called and asked to dial in or meet someone or drop by headquarters to synchronize.  This was, in practice, rarely required.

We called these decentralized chains “data stacks.” We encrypted all of the files on the machines, permitting case officers to share keys only for their shared cases. We encrypted a hash of every record within each subsequent record, a process we called “sleeving.” We wound up with a highly successful system that had a continuous chain of sequentially encrypted records across multiple machines treating each other as peers. We had some problems with synchronizing a concurrent database, but they were surmounted.
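For readers who want the “sleeving” idea made concrete, here is a minimal Python sketch of hash-chained records, leaving out the encryption layer described above; the record format shown is hypothetical, as our actual format was never published.

```python
# A minimal sketch of hash-chaining ("sleeving"): each record carries a
# hash of its predecessor, so tampering with any earlier record
# invalidates everything after it. The record format is hypothetical.
import hashlib

def record_hash(payload: str, prev_hash: str) -> str:
    return hashlib.sha256(f"{prev_hash}|{payload}".encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev_hash,
                  "hash": record_hash(payload, prev_hash)})

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for rec in chain:
        if rec["prev_hash"] != prev_hash:
            return False
        if rec["hash"] != record_hash(rec["payload"], prev_hash):
            return False
        prev_hash = rec["hash"]
    return True

chain = []
append(chain, "case 17: initial report")
append(chain, "case 17: joint sign-off by both officers")
assert verify(chain)
chain[0]["payload"] = "case 17: doctored report"  # any tampering...
assert not verify(chain)                          # ...is detectable
```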

Around the time of our work, there were other attempts at similar highly secure distributed transaction databases, e.g., Ian Grigg’s Ricardo system for payments and Stanford University’s LOCKSS and CLOCKSS projects for academic archiving. Some people might point out that we probably weren’t truly peer-to-peer, reserving that accolade for Gnutella in 2000. Whatever. We may have been bright, perhaps even first, but were not alone.

Good or Bad Databases?

In a strict sense, MDLs are bad databases. They wastefully store information about every single alteration or addition and never delete.

In another sense, MDLs are great databases. In a world of connectivity and cheap storage, it can be a good engineering choice to record everything “forever.” MDLs make great central databases, logically central but physically distributed. This means that they eliminate a lot of messaging. Rather than sending you a file to edit, which you edit, sending back a copy to me, then sending a further copy on to someone else for more processing, all of us can access a central copy with a full audit trail of all changes. The more people involved in the messaging, the more mutual the participation, the more efficient this approach becomes.
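A toy example shows the trade-off: an append-only store never overwrites, so storage only grows, but the full audit trail comes free. The sketch below is purely illustrative and assumes nothing about any particular MDL implementation.

```python
# A toy append-only ledger: updates are recorded as new versions rather
# than overwrites, so storage only grows but every historical state
# remains auditable. Purely illustrative.
from datetime import datetime, timezone

ledger = []  # every entry is kept forever

def record(key: str, value: str) -> None:
    ledger.append({
        "key": key,
        "value": value,
        "at": datetime.now(timezone.utc).isoformat(),
    })

def current(key: str) -> str:
    """Latest value wins; earlier versions stay for the audit trail."""
    return next(e["value"] for e in reversed(ledger) if e["key"] == key)

record("policy-42/status", "quoted")
record("policy-42/status", "bound")  # an "edit" is just a new entry
print(current("policy-42/status"))   # -> bound
print(len(ledger))                   # -> 2: nothing was deleted
```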

Trillions of Choices

Perhaps the most significant announcement of 2015 came in January from IBM and Samsung. They announced their intention to work together on mutual distributed ledgers (aka blockchain technology) for the Internet of Things. ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) is a jointly developed system for distributed networks of devices.

In summer 2015, a North American energy insurer raised an interesting problem with us. It was looking at insuring U.S. energy companies about to offer reduced electricity rates to clients that allowed them to turn appliances on and off — for example, a freezer. Now, freezers in America can hold substantial and valuable quantities of foodstuffs, often worth several thousand dollars. Obviously, the insurer was worried about correctly pricing a policy for the electricity firm in case there was some enormous cyber-attack or network disturbance.

Imagine coming home to find your freezer off and several thousands of dollars of thawed mush inside. You ring your home and contents insurer, which notes that you have one of those new-fangled electricity contracts: The fault probably lies with the electricity company; go claim from them. You ring the electricity company. In a fit of customer service, the company denies having anything to do with turning off your freezer; if anything, it was probably the freezer manufacturer that is at fault. The freezer manufacturer knows for a fact that there is nothing wrong except that you and the electricity company must have installed things improperly. Of course, the other parties think, you may not be all you seem to be. Perhaps you unplugged the freezer to vacuum your house and forgot to reconnect things. Perhaps you were a bit tight on funds and thought you could turn your frozen food into “liquid assets.”

I believe IBM and Samsung foresee, correctly, 10 billion people with hundreds of ledgers each, a trillion distributed ledgers. My freezer-electricity-control-ledger, my entertainment system, home security system, heating-and-cooling systems, telephone, autonomous automobile, local area network, etc. In the future, machines will make decisions and send buy-and-sell signals to each other that have large financial consequences. Somewhat coyly, we pointed out to our North American insurer that it should perhaps be telling the electricity company which freezers to shut off first, starting with the ones with low-value contents.

A trillion or so ledgers will not all run through a single chain. The idea behind cryptocurrencies is “permissionless” participation — any of the billions of people on the planet can participate. Another way of looking at this is that all of the billions of people on the planet are “permissioned” to participate in the Bitcoin protocol for payments. The problem is that they will not be continuous participants. They will dip in and out.

Some obvious implementation choices:

  • Public vs. private: Is reading the ledger open to all or just to defined members of a limited community?
  • Permissioned vs. permissionless: Are only people with permission allowed to add transactions, or can anyone attempt to add a transaction?
  • True peer-to-peer or merely decentralized: Are all nodes equal and performing the same tasks, or do some nodes have more power and additional tasks?

People also need to decide if they want to use an existing ledger service (e.g. Bitcoin, Ethereum, Ripple), copy a ledger off-the-shelf, or build their own. Building your own is not easy, but it’s not impossible. People have enough trouble implementing a single database, so a welter of distributed databases is more complex, sure. However, if my firm can implement a couple of hundred with numerous variations, then it is not impossible for others.

The Coin Is Not the Chain

Another sticking point of terminology is adding transactions. There are numerous validation mechanisms for authorizing new transactions, e.g. proof-of-work, proof-of-stake, consensus or identity mechanisms. I divide these into “proof-of-work,” i.e. “mining,” and consider all others various forms of “voting” to agree. Sometimes, one person has all the votes. Sometimes, a group does. Sometimes, more complicated voting structures are built to reflect the power and economic environment in which the MDL operates. As Stalin said, “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this — who will count the votes, and how.”
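To make the “mining” half of that division concrete, here is a minimal proof-of-work sketch: finding a nonce whose hash starts with a required number of zeros is expensive, while checking the answer takes a single hash. The difficulty and encoding are arbitrary illustrative choices.

```python
# A minimal proof-of-work sketch: authorization by burning CPU rather
# than by counting votes. Difficulty and encoding are arbitrary.
import hashlib
from itertools import count

def mine(block: str, difficulty: int = 4) -> int:
    """Find a nonce so sha256(block + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = mine("alice pays bob 5")
# Verification is one hash: cheap for everyone, however costly the search was.
assert hashlib.sha256(f"alice pays bob 5{nonce}".encode()).hexdigest().startswith("0000")
print(nonce)
```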

As the various definitions above show, the blockchain is the data structure, the mechanism for recording transactions, not the mechanism for authorizing new transactions. So the taxonomy starts with an MDL or shared ledger; one kind of MDL is a permissionless shared ledger, and one form of permissionless shared ledger is a blockchain.

Last year, Z/Yen created a timestamping service, MetroGnomo, with the States of Alderney. We used a mutual distributed ledger technology, i.e. a technology that provides an immutable record of transactions shared in common and stored in multiple locations. However, we did not use “mining” to authorize new transactions. Because the incentive to cheat is negligible here, we used an approach called “agnostic woven” broadcasting from “transmitters” to “receivers” — to paraphrase Douglas Hofstadter, we created an Eternal Golden Braid.

So is MetroGnomo based on a blockchain? I say that MetroGnomo uses an MDL, part of a wider family that includes the Bitcoin blockchain along with others that claim to use technologies similar to the Bitcoin blockchain. I believe the mechanism for adding new transactions is novel (probably). For me, it is a moot point whether we “block” a group of transactions or write them out singly (blocksize = 1).

Yes, I struggle with “blockchain.” When people talk to me about blockchain, it’s as if they’re trying to talk about databases yet keep referring to “The Ingres” or “The Oracle.” They presume the technological solution, “I think I need an Oracle” (sic), before specifying the generic technology, “I think I need a database.” Yet I also struggle with MDL. It may be strictly correct, but it is long and boring. Blockchain, or even “chains” or “ChainZ” is cuter.

We have tested alternative terms such as “replicated authoritative immutable ledger,” “persistent, pervasive, and permanent ledger” and even the louche “consensual ledger.” My favorite might be ChainLedgers. Or Distributed ChainLedgers. Or LedgerChains. Who cares about strict correctness? Let’s try to work harder on a common term. All suggestions welcome!

Unclaimed Funds Can Lead to Data Breaches

When it comes to privacy, not all states are alike. This was confirmed yet again in the 50 State Compendium of Unclaimed Property Practices we compiled. The compendium ranks the amount of personal data that state treasuries expose during the process by which individuals can collect unclaimed funds. The data exposed can provide fraudsters with a crime exacta: claiming money that no one will ever miss and gathering various nuggets of personal data that can help facilitate other types of identity theft. The takeaway: Some states provide way too much data to anyone who is in the business of exploiting consumer information.

For those who take their privacy seriously, the baseline of our compendium—inclusion in a list of people with unclaimed funds or property—may in itself be unacceptable. For others, finding their name on an unclaimed property list isn’t a huge deal. In fact, two people on our team found unclaimed property in the New York database (I was one of them) while putting together the 50-state compendium, and there were no panic attacks.

Free IDT911 white paper: Breach, Privacy and Cyber Coverages: Fact and Fiction

That said, there is a reason to feel uncomfortable—or even outright concerned—to find your name on a list of people with unclaimed property. After all, you didn’t give anyone permission to put it there. The way a person manages her affairs (or doesn’t) should not be searchable on a public database like a scarlet letter just waiting to be publicized.

Then there’s the more practical reason that it matters. Identity thieves rely on sloppiness. Scams thrive where there is a lack of vigilance (lamentably, a lifestyle choice for many Americans despite the rise of identity-related crimes). The crux of the problem when it comes to reporting unclaimed property: It’s impossible to be guarded and careful about something you don’t even know exists, and, of course, it’s much easier to steal something if you know that it does.

The worst of the state unclaimed property databases provide a target-rich environment for thieves interested in grabbing the more than $58 billion in unclaimed funds held by agencies at the state level across the country.

States’ responses to questions about public databases

When we asked for comment from the eight states that received the worst rating in our compendium—California, Hawaii, Indiana, Iowa, Nevada, South Dakota, Texas and Wisconsin—five replied. In an effort to continue the dialogue around this all-too-important topic, here are a few of the responses from the states:

— California said: “The California state controller has a fraud detection unit that takes proactive measures to ensure property is returned to the rightful owners. We have no evidence that the limited online information leads to fraud.”

The “limited online information” available to the public on the California database provides name, street addresses, the company that held the unclaimed funds and the exact amount owed unless the property is something with a movable valuation like equity or commodities. To give just one example, we found a $50 credit at Tiffany associated with a very public figure. We were able to verify it because the address listed in the California database had been referenced in a New York Times article about the person of interest. Just those data points could be used by a scammer to trick Tiffany or the owner of the unclaimed property (or the owner’s representatives) into handing over more information (to be used elsewhere in the commission of fraud) or money (a finder’s fee is a common ruse) or both.

This policy seems somewhat at odds with California’s well-earned reputation as one of the most consumer-friendly states in the nation when it comes to data privacy and security.

— Hawaii’s response: “We carefully evaluated the amount and type of information to be provided and consulted with our legal counsel to ensure that no sensitive personal information was being provided.”

My response: Define “sensitive.” These days, name, address and email address (reflect upon the millions of these that are “out there” in the wake of the Target and Home Depot breaches) are all scammers need to start exploiting your identity. The more information they have, the more opportunities they can create, leveraging that information, to get more until they have enough to access your available credit or financial accounts.

— Indiana’s response was thoughtful. “By providing the public record, initially we are hoping to eliminate the use of a finder, which can charge up to 10% of the property amount. Providing the claimant the information up front, they are more likely to use our service for free. That being said, we are highly aware of the fraud issue and, as you may know, Indiana is the only state in which the Unclaimed Property Division falls under the Attorney General’s office. This works to our advantage in that we have an entire investigative division in-house and specific to unclaimed property. In addition, we also have a proactive team that works to reach out to rightful owners directly on higher-dollar claims to reduce fraud and to ensure those large dollar amounts are reaching the rightful owners.”

Protect and serve should be the goal

While Indiana has the right idea, the state still provides too much information. The concept here is to protect and serve—something the current system of unclaimed property databases does not do.

The methodology used in the compendium was quite simple: The less information a state provided, the better its ranking. Four stars was the best rating—it went to states that provided only a name and city or ZIP code—and one star was the worst, awarded to states that disclosed name, street address, property type, property holder and exact amount owed.

In the majority of states in the U.S., the current approach to unclaimed funds doesn’t appear to be calibrated to protect consumers during this ever-growing epidemic of identity theft and cyber fraud. The hit parade of data breaches over the past few years—Target, Home Depot, Sony Pictures, Anthem and, most recently, the Office of Personnel Management—provides a case-by-case view of the evolution of cybercrime. Whether access was achieved by malware embedded in a spear-phishing email or came by way of an intentionally infected vendor, the ingenuity of fraudsters continues apace, and it doesn’t apply solely to mega databases. Identity thieves make a living looking for exploitable mistakes. The 50 State Compendium provides a state-by-state look at mistakes just waiting to be converted by fraudsters into crimes.

The best way to keep your name off those lists: Stay on top of your finances, cash your checks and keep tabs on your assets. (And check your credit reports regularly to spot signs of identity fraud. You can get your free credit reports every year from the major credit reporting agencies, and you can get a free credit report summary from Credit.com every month for a more frequent overview.) In the meantime, states need to re-evaluate the best practices for getting unclaimed funds to consumers. One possibility may be to create a search process that can only be initiated by the consumer submitting his name and city (or cities) on a secure government website.